The National Adult Literacy Survey estimated that approximately 90 million American adults have deficient literacy skills. Of those, between 40 and 44 million adults (about 22 percent of the country’s adult population) have severe problems with literacy, where literacy is defined as the ability to read, write, and speak English and to compute and solve problems proficiently. An additional 50 million adults are likely to encounter some problems functioning in society and need improved literacy skills. The Adult Education Act (AEA) is administered by the Department of Education. The act represents the primary federal effort to alleviate problems in adult literacy and provides the basic legislative authority and largest source of federal funds for programs that benefit educationally disadvantaged adults. The act’s largest program is the Adult Education State-Administered Basic Grant Program (State Grant Program). In fiscal year 1995, federal funding for this program was $252 million, while state and local sources provided $890 million, or 78 percent of the program’s total budget. For the first 15 years of the State Grant Program (1966 to 1980), federal expenditures exceeded total state and local contributions; however, total state and local contributions have since surpassed federal expenditures. (See table I.2 in app. I for total annual expenditures.) Figure 1.1 compares federal expenditures with state and local expenditures since the AEA’s passage in 1966. Although total state and local contributions currently far exceed federal expenditures, federal dollars still total more than half the funds for adult education in almost half of the states. The contribution of each state relative to the federal contribution varies widely, depending on each state’s commitment to providing adult education services. For example, in fiscal year 1991, state and local contributions ranged from a low of 21 percent to a high of 96 percent; conversely, federal expenditures ranged from 4 to 79 percent. 
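The fiscal year 1995 split cited above can be checked directly. The sketch below is ours, not part of the report; the dollar figures are the report's, and the variable names are illustrative:

```python
# Check the fiscal year 1995 funding split for the State Grant Program:
# $252 million federal versus $890 million from state and local sources.
federal = 252        # millions of dollars, federal funding
state_local = 890    # millions of dollars, state and local sources

total = federal + state_local
state_local_share = 100 * state_local / total
federal_share = 100 * federal / total

print(round(state_local_share))  # → 78, the share the report cites
print(round(federal_share))      # → 22
```

The 78 percent figure in the text is consistent with these two dollar amounts.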
Since 1992, the AEA has restricted the federal share of each state’s expenditure to no more than 75 percent. (See table I.4 in app. I for further information on expenditures by state.) The AEA makes grants to states and requires that they be used in accordance with federally approved state plans. In developing their plans, states must assess the needs of adults, including educationally disadvantaged adults, and the capability of programs and institutions to meet those needs. The Department of Education annually makes its grants to states on the basis of the number of individuals in each state who are at least 16 years old, not enrolled in school, and lack a high school diploma or General Educational Development (GED) credential. Local adult education providers then apply to the states for funds. Following are the three most common types of instruction offered under the State Grant Program: Adult Basic Education (ABE), which is instruction designed for adults functioning below the eighth grade level; Adult Secondary Education (ASE), which is instruction designed for adults functioning at the secondary level that may culminate in a high school diploma or may serve as preparation for the GED examination; and English as a Second Language (ESL), which is instruction designed to teach English to non-English speakers. (See table I.3 in app. I for further information on enrollment by instructional area.) Programs funded under the AEA are important in providing basic literacy skills needed by clients of federal employment training programs such as Perkins Vocational Education (VOC ED), administered by the Department of Education; Job Opportunities and Basic Skills (JOBS), administered by the Department of Health and Human Services; and the Job Training Partnership Act (JTPA), administered by the Department of Labor. Consequently, the AEA and employment training legislation require coordination among these programs to avoid duplication and enhance service delivery. 
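The count-based grant allocation described above can be sketched as a simple proportional split. This is our simplified illustration, not the statutory formula, which may include additional provisions not shown here; the state names and counts are hypothetical:

```python
def allocate_grants(appropriation, eligible_counts):
    """Split a federal appropriation across states in proportion to each
    state's count of eligible adults (at least 16 years old, not enrolled
    in school, and lacking a high school diploma or GED credential)."""
    total = sum(eligible_counts.values())
    return {state: appropriation * count / total
            for state, count in eligible_counts.items()}

# Hypothetical eligible-adult counts, for illustration only.
counts = {"State A": 400_000, "State B": 100_000}
grants = allocate_grants(1_000_000, counts)
# State A, with four-fifths of the eligible adults, receives
# four-fifths of the appropriation.
```

Because the formula is driven only by these population counts, a state's allotment does not depend on how many adults its programs actually enroll or serve, a point that matters for the accountability discussion later in this report.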
The National Literacy Act of 1991 amended the AEA and authorized several new programs. Major provisions included the creation of the National Institute for Literacy, the establishment of state and regional literacy resource centers, and a requirement for the Department of Education to develop model indicators of program quality to guide states in developing their own indicators for improved program evaluation. The 104th Congress is considering legislation that would consolidate adult education and other programs and provide one or more block grants to states. The Senate bill would repeal most existing federal employment training programs, including the State Grant Program, and replace them with a single block grant. The House bill would also repeal most employment training programs but replace them with four block grants, including a separate grant for adult education and literacy programs. At the request of the Chairman and Ranking Minority Member of the former House Committee on Education and Labor and the Chairman and Ranking Minority Member of the former Subcommittee on Elementary, Secondary, and Vocational Education, we reviewed several issues related to the AEA. Specifically, we examined (1) the goals of the AEA and its largest program, the State Grant Program; (2) the population served by the program; (3) program services and their coordination with federal employment training programs; and (4) the extent to which the State Grant Program ensures accountability for program quality and results, including how states have implemented quality indicators. We focused our review primarily on the State Grant Program because it is the largest of the AEA’s funded programs. In fiscal year 1995, 83 percent of AEA funds were allocated to this program. To obtain nationwide information on the State Grant Program, we interviewed federal officials from the Department of Education. 
We also reviewed Department of Education data and recent national studies, including the National Evaluation of Adult Education Programs and the National Adult Literacy Survey. We selected three states for closer review: California, Connecticut, and Iowa. We selected these states because they provided some geographic dispersion and represented a range of (1) state and local financial commitments (as demonstrated by the percentage of matching funds each state contributes), (2) program size (as demonstrated by dollars and enrollment), and (3) ESL enrollment levels. Within each state, we visited at least two communities that we selected with the help of state adult education officials. We selected communities that represented different types of locales (urban, suburban, rural) and were involved in a variety of local coordination activities. To identify the goals of the AEA, including the State Grant Program, we reviewed federal legislation. To determine the populations served and services provided by the program, we reviewed Department of Education data and national studies. We also interviewed local adult education providers. To provide information on the coordination of AEA programs with employment training programs, we interviewed federal officials at the Departments of Education, Labor, and Health and Human Services, and held discussions with national experts, including representatives of the National Institute for Literacy. We also reviewed studies on coordination. In the states we visited, we met with state officials from adult education and JOBS, JTPA, and VOC ED programs. In Iowa and parts of California, where the community college system is the major adult education provider, we also met with state community college representatives. At local levels in the three states, we met with adult education providers as well as representatives of local employment training programs. 
To provide information on program accountability and quality, we interviewed Department of Education officials and held discussions with national adult education experts. We also reviewed the Department’s model indicators of program quality and studies on program accountability and quality issues. In addition, we interviewed state and local officials in California, Connecticut, and Iowa and reviewed program documents, including the quality indicators developed by these states. We conducted our work between November 1994 and August 1995 in accordance with generally accepted government auditing standards. The AEA is a broad and flexible act and its largest program, the State Grant Program, reflects this. The State Grant Program, the federal government’s primary adult education program, has many goals and enables people with a wide range of needs to receive instruction from a variety of service providers. Many clients of employment training programs are among those needing the basic skills taught by adult education. Although the program has some restrictions, it allows states considerable flexibility in the types of instruction they fund with their federal grants as long as they fund programs in accordance with federally approved state plans. In keeping with their state plans, which call for coordinating adult education with employment training programs, a variety of coordination activities were taking place in the states and communities we visited. Recognizing the wide range of adult literacy needs in this country, the Congress passed the AEA with broadly stated goals. Although adult education programs are commonly viewed as the means to obtain a high school diploma or its equivalent, the AEA established goals that are far broader and include citizenship and employment as well as the overall improvement of the adult education system. 
Specifically, the purpose of the AEA is to improve educational opportunities for adults who lack literacy skills necessary for effective citizenship and productive employment; expand and improve the current adult education delivery system; and encourage the establishment of adult education programs for adults to (1) acquire basic skills needed for literate functioning, (2) acquire basic education needed to benefit from job training and obtain and keep productive employment, and (3) continue their education to at least the secondary school level. Adult education students have diverse needs, circumstances, and personal characteristics. A student might be a high school dropout, a client in a job training program, an immigrant or refugee, a displaced worker or homemaker, an adult in the workplace, a welfare recipient, or a retiree. Students who enroll in adult education classes vary in age, race and ethnicity, and employment status. In one program we visited in a rural California town, a 23-year-old refugee was enrolled in an Adult Secondary Education (ASE) class. He was a machine operator with an eleventh grade education who planned to earn his GED and become a bilingual teacher. A 45-year-old unemployed mother of four with a third grade education was enrolled in an English as a Second Language (ESL) class. Having done seasonal work in the past, her goal was to obtain a GED and find work in the nursing field. In an urban Connecticut program, a 19-year-old mother on welfare wanted to complete high school and become a cosmetologist. Although she had a tenth grade education, she needed the basic skills taught in an Adult Basic Education (ABE) class. In the same program, a 62-year-old immigrant who had been an accountant in Russia was attending ESL classes. Her goal was to become a U.S. citizen. In a suburban program in Connecticut, a 28-year-old part-time stock clerk was enrolled in an ASE class. He had a ninth grade education and lived with his parents. 
He hoped to earn his GED, attend college, and become a police officer. In a rural town in Iowa, a 48-year-old father of four from Laos spoke no English and was enrolled in a beginning ESL class. He worked part-time as an upholstery worker but hoped to learn English well enough to get a full-time job. In a city in Iowa, a married, 35-year-old mother of three was enrolled in an ABE class. A former welfare recipient, she had a job as a child care aide that was contingent upon her earning a GED. Her goal was to earn the GED and keep her job. National statistics also suggest that adult education students are fairly diverse. Nationwide, 38 percent of students enrolled in adult education classes in 1993 were between the ages of 16 and 24, 46 percent were between the ages of 25 and 44, and the remaining students were 45 years old or older, according to the Department of Education. Also, 36 percent of the students were white, 31 percent Hispanic, 18 percent black, 14 percent Asian or Pacific Islander, and 1 percent American Indian or Alaskan Native. The National Evaluation of Adult Education Programs conducted a survey of students who entered the adult education system between April 1991 and April 1992. It found that 42 percent of these students were employed, and 58 percent were either unemployed or not in the workforce when they enrolled. During the year before enrollment, 43 percent of ABE students, 31 percent of ASE students, and 14 percent of ESL students received public assistance or welfare payments. Many clients of federal employment training programs rely on the State Grant Program for the basic skills they lack. According to the Department of Labor, unless an attempt is made to upgrade the literacy skills of clients in federal employment training programs, clients’ success may be limited and access to the job market may be denied. Nationwide, almost 30 percent of JTPA clients are school dropouts, and as many as half may lack basic skills. 
One-fourth of JOBS clients in fiscal year 1992 were enrolled in a high school completion program. Adult education enrollment has risen almost every year since 1966, and the Congress is considering welfare reform proposals that may place even greater demands on adult education providers. These proposals may also make the coordination among adult education, welfare, and employment training programs even more critical. Some states are already implementing their own welfare reform efforts that require certain welfare clients to obtain adult education or employment training services to receive assistance. For example, California’s JOBS program requires that welfare recipients have opportunities to remedy basic skill deficiencies and earn a high school diploma or GED credential. The state is currently required to provide adult education to its JOBS clients with low assessment scores and to continue to provide education until clients attain a specified level of proficiency. Connecticut has piloted a welfare reform program that targets certain welfare recipients. Individuals in the pilot can receive needed remedial services, such as adult education or vocational training, for 2 years before being required to find jobs. According to a state official, the pilot was limited to two communities because of concerns about the state’s ability to provide remedial services to all needy individuals, particularly adult education services. If a client has only 2 years to seek remedial services and faces a waiting list for adult education services, both the client and the entire program are at risk, explained the official. Similarly, Iowa’s JOBS program has a goal of moving people off welfare within 2 years by providing remedial education and employment training. This welfare reform effort has increased the percentage of welfare recipients required to participate in the JOBS program from 24 to 88 percent. 
This increase is achieved, in part, by exempting fewer welfare recipients from participating in the JOBS program. For example, only parents with children under 6 months of age are exempt; previously, parents with children under the age of 3 were exempt. Under the State Grant Program, states may fund local educational agencies and a variety of public or private nonprofit agencies, organizations, and institutions to provide adult education classes. Most programs are administered by local educational agencies. Figure 2.1 shows the extent to which different organizations provide adult education. Many adult education providers use flexible and, in some cases, less traditional approaches to education that may better suit the responsibilities and needs of adult students. For example, to make classes more accessible to adults, providers may offer both day and night classes. Unemployed adults may prefer daytime classes; adults who work or have child care responsibilities may only be able to attend night classes. We also found that some programs offer on-site child care, which can make it easier for parents to attend adult education classes. The flexible “open-entry/open-exit” feature of most adult education providers may also better suit their students’ lives than the traditional September-to-June school year. The National Evaluation of Adult Education Programs found that 66 percent of adult education programs allowed students to enroll and begin instruction at any time. Service providers use a variety of instructional methods to meet students’ needs. For example, the principal of an adult school in a rural California town explained that her program provides “individualized instruction,” which means that teachers assess the individual goals and abilities of the students and take these into consideration in planning classroom instruction. 
Several methods or a combination of methods may then be employed: large group lectures or presentations; small-group instruction, including role play or practice in conversation or writing skills; or one-on-one tutoring if the ratio of aides to students permits it. Some programs encourage adults who need both basic skills and employment training to enroll in both concurrently; others recommend basic skills training first so that students have the necessary foundation for employment training. Concurrent enrollment, some state and local officials argue, may enhance learning and move adults into the workforce faster. The AEA limits states’ flexibility in determining how to spend their State Grant Program funds by specifying how a significant portion of the funds is to be spent. However, with the remaining unrestricted funds, the combinations and types of instruction states fund vary greatly. The AEA specifies that states must spend at least 15 percent of their grants on teacher training and program innovation and at least 10 percent on programs serving incarcerated or institutionalized adults. No more than 20 percent can be spent on programs for certificates of high school equivalency, and no more than 5 percent can be spent on state administration. States can decide on the types and combinations of instruction they wish to fund as long as they meet the AEA’s set-asides and fund programs in accordance with their state plans. Most providers offer the three most common types of instruction—ABE, ASE, and ESL. Enrollment in each varies greatly by state and community. Table 2.1 shows how enrollment levels vary by instructional area nationally as well as in the three states we visited. In keeping with state plans, which call for coordinating adult education with employment training programs, a variety of coordination activities were taking place in the states and communities we visited. These activities, however, were not easy to establish. 
They took time to develop and often depended on the perseverance of agency staff and local service providers. State and local coordination efforts included pooling funds, establishing one-stop centers, and developing uniform assessment systems. Connecticut has pooled funds from many sources for Coordinated Education and Training Opportunities grants. When a service provider receives a grant, it may contain funds from one or more funding sources. These coordinated grants are implemented through regional workforce development boards responsible for a range of tasks, including identifying local needs, evaluating grant proposals, and overseeing operations. These grants have the advantage of allowing service providers to deal with a single planning process and a single request for proposal. However, according to officials of one regional workforce development board, although these grants may make things “seamless” for the client, the service provider still must meet all the federal reporting requirements of their many funding sources. In Iowa, the community colleges coordinate funds from the State Grant, JTPA, and VOC ED programs. Each of the state’s 15 community colleges administers the State Grant Program and offers adult education classes. In addition, half of the colleges administer JTPA programs. Services from these many programs are often administered by staff who are both located at the college and operate within the same department. This arrangement facilitates coordinated program planning, service delivery, and referral of clients to multiple programs. At one college, administrators from the adult education, JTPA, and VOC ED programs told us that their close proximity enabled them to review a client’s total needs and provide the maximum allowable services. For example, a welfare recipient might receive adult education instruction from the State Grant Program, a clothing allowance from the JTPA program, and a transportation subsidy from the JOBS program. 
Some of the coordination between California’s JOBS program and State Grant Program takes the form of financial support from many agencies. Adult education programs that serve the state’s JOBS clients receive adult education and JOBS funds and may also draw funds from an 8-percent set-aside of JTPA funds for education programs that are matched by the state. In some counties, the state’s JOBS program pays for adult education programs to meet JOBS’ data collection and reporting requirements, which include administering the same competency-based assessment to all state JOBS clients. Adult education funds do not cover these costs. One adult education principal told us that, if not for the JOBS program’s covering these costs, program staff would not be able to do as much record keeping or assessment as they do. All three states we visited had begun efforts to establish one-stop centers. These centers are intended to help clients who need services from many programs find all of the services they need in one location or go to a single location to access information about the services they need. Officials in one community spoke of the administrative burden imposed by the multiple federal program requirements of establishing a one-stop center. All three states recently received grants from the Department of Labor to pilot one-stop centers, which are being established around the country even in places that have not received Department of Labor grants. The Department of Labor’s one-stop grants support voluntary state coordination. All three states considered adult education important and, thus, included adult education officials in planning their efforts. With the help of the Department of Labor’s one-stop grant, Iowa recently opened its first one-stop center to provide services to clients of employment training programs. 
Adult education staff were on site to perform client intake and assessment. Adult education instruction has been offered on site since February 1995. Connecticut was using its Department of Labor grant to develop one-stop centers, where client intake and evaluation would take place and where clients could be referred to multiple agencies for services they need. Local officials in one community said their goal was to develop three centers and install computers in libraries, bus terminals, and shopping malls so the public could access information on local services, such as adult education classes. One community we visited in California set up a one-stop center without a Department of Labor grant. To prevent unnecessary duplication of services and facilitate successful completion of training and the transition to employment, this center established linkages to more than 100 agencies and businesses. Features of the center included a central information line, career library, computerized career assessment, and on-site employment interviews. A single assessment system used across state agencies can facilitate coordination and make access to services easier for clients. Using one system allows clients to move easily among education and training programs, provides a common assessment vocabulary so that all agencies can determine initial client proficiency levels as well as ongoing progress, and minimizes duplicative or unnecessary testing of clients. However, not all adult education and employment training officials agree that a single assessment system can appropriately measure adults’ skills. To varying degrees, Connecticut and California were using common assessments. Connecticut required that adult education, JTPA, and JOBS programs all use the same assessment system. California’s JOBS program uses the Comprehensive Adult Student Assessment System (CASAS) for assessing its clients, but the state’s adult education program uses CASAS only on a sample of programs. 
Iowa was piloting CASAS but only for use by adult education providers. Measuring results in the State Grant Program has proven difficult because program objectives have not been clearly defined and questions exist about the validity and appropriateness of student assessments and the usefulness of nationally reported data on results. Although the Department of Education has focused on developing model program indicators that states could use to evaluate local programs, experts and program officials disagree about whether the indicators alone will enhance accountability. Efforts to enhance the evaluation capabilities of state agency staff and improve data collection continue, but it is too early to assess their impact. Evaluating program results depends on clear program objectives as well as criteria for measuring the achievement of those objectives. The broad objectives of the State Grant Program give the states the flexibility to set their own priorities but, some argue, they do not provide states with sufficient direction for measuring results. Moreover, reaching a consensus on measurable objectives for adult education is difficult. Because the State Grant Program’s objectives are so broadly defined, state officials have developed a variety of views on measuring program results. For example, some officials told us that they might measure program success by whether adults gained the skill to read to their children and, thus, contribute to their children’s literacy. Others might focus on whether adults can read street signs or the newspaper. And, in one state we visited, an official contended that completing high school and finding productive work should be the objectives of the state’s adult education programs because completing a basic skills program and becoming a citizen are no longer sufficient to succeed in society. 
Several experts and program officials told us that the State Grant Program lacks a coherent vision of the skills and knowledge adults need to be considered literate. Similarly, some state officials said that they would like the federal government to further specify the types of results expected from state adult education programs. Reaching consensus on measurable objectives, however, may be difficult since research findings are often inconclusive about the long-term benefits to adults of achieving various program results. For example, many adult education programs focus on preparing adults to take the GED examination as a means of high school completion. Yet research findings are mixed about whether GED attainment reflects increased literacy skills and whether GED recipients are economically better off than high school dropouts. Ensuring accountability has also been hampered by limitations in the assessment instruments used to measure student outcomes in adult education programs. The research literature raises questions about the validity of standardized tests used to measure adult literacy, and local program staff have questioned the appropriateness of using these assessments to measure program results. The AEA requires states to gather and analyze standardized test data as one way of evaluating local programs. These assessments tend to focus on either academic skills or functional literacy. Academic tests, such as the Tests of Adult Basic Education (known as “TABE”), focus on measuring such basic skills as reading comprehension, vocabulary, language expression, and mathematical proficiency. Functional literacy or competency-based tests, such as the Comprehensive Adult Student Assessment System (CASAS), focus on the ability to perform literacy-related tasks in situations faced by adults in everyday life at home, at work, or in the community. Experts have questioned the validity of both the academic and functional literacy tests used in adult education programs. 
For example, two recent reviews point to a lack of normative data for the age ranges of participants in most adult education programs. Functional literacy tests may lack validity because they are not derived from theoretical models of ability but from everyday literacy tasks. According to a recent review, without further analyses, the instructional implications of test performance are unclear. Thus, these assessments may not provide useful information about the skills and needs of adult students. A more serious problem affecting the validity of assessments is the lack of research examining the long-term retention of learning gains in adult education programs. According to one researcher, a comprehensive search did not uncover a single published study on the effectiveness of adult education programs in helping adults retain the skills they may have acquired during instruction. This being the case, improved test scores may not necessarily mean that adults will be better equipped for high-skilled jobs, function better as parents, or participate more fully as citizens. However, officials in the three states we visited felt that competency-based assessment systems could be useful in measuring progress in local adult education programs and, thus, strongly advocated these systems. California had developed CASAS and required its use in a sample of one-third of its adult education programs. Connecticut had designed its own competency-based testing system (adapted from CASAS) and required its use in all adult education programs. Iowa had recently decided to move toward a competency-based system and was piloting CASAS in 9 of its 15 community college districts. Local adult education and employment training staff had mixed views about their states’ competency-based assessments. Some local program staff saw the CASAS assessment system as a valuable and flexible tool. 
However, some English as a Second Language (ESL) teachers were dissatisfied with the CASAS test as a measure of how well adult education students learned to communicate in English. And some employment training staff said that the CASAS test did not give them sufficiently specific information about their clients or that it focused too much on life skills. Finally, several local staff questioned the appropriateness of CASAS as the sole assessment tool and, therefore, used CASAS in conjunction with other tests. Administrators and experts also told us that they thought no single test could measure all relevant aspects of student performance. The poor quality of the data on adult education students collected at state and local levels also hampers accountability. Federal and state officials as well as recent studies have cited problems with these data. The studies have attributed difficulties in obtaining accurate data to the sporadic attendance patterns of adult students and the limited time and expertise of local adult education program staff. State officials are required to submit to the Department of Education annual statistical performance reports that include information on students served by local programs. State-submitted reports include (1) the number of students served and their demographic characteristics, (2) the skill levels of students when they start adult education programs, (3) student progress over the program year, (4) eight types of student achievements, and (5) the number of students who do not complete their objectives and their reasons for separation. The reports also include information on program staff and the types of instructional settings in which students are served. Department of Education officials acknowledged serious problems with the quality of the statistical reports, some of which are based on double counting or undercounting of students in adult education programs. 
Another Department official charged that many of the data are questionable and that very few local programs have record systems that allow them to report the data the Department requires. Comments of officials in one state confirmed these data problems. They said that they did not have all the information the Department requires for their statistical reports because too many resources are required to collect the data. As a result, they simply do not report some of the data elements and provide estimates of the other information. They noted that the data they submit need not be certified and that the Department has never audited their statistical reports. Furthermore, they asserted that these data have nothing to do with receiving federal funds. The only thing that really counts, they said, is the number of adults in the state who do not have diplomas because that is what drives the funding formula. Also, some local staff failed to see the utility in collecting the data that states require for reporting to the federal government. Some said they thought that the information they are required to report does not accurately reflect the accomplishments of their adult education students. Difficulties in obtaining accurate data can also be attributed to attendance patterns of adult students and the limited capacity and expertise of local program staff. The open-entry/open-exit feature of many programs adds to the difficulty of tracking adult students. Because students may not stay in the program long or may attend on a sporadic basis, program staff do not always have sufficient information to report on student progress or results. Because local programs have difficulty following up on students, program officials may rely on information reported by teachers or the students themselves. In addition, many local programs lack sufficient staff to handle data collection and reporting responsibilities, according to a survey of adult education programs in nine states. 
Programs are typically staffed by part-time personnel, and these responsibilities become an extra burden. Also, local program staff may lack expertise in collecting assessment data that can help track program effectiveness. For example, when the National Evaluation of Adult Education Programs asked local adult education program staff to provide certain assessment data, it found that about one-third of the information was invalid because (1) the wrong test forms were used, (2) data were inaccurately recorded, or (3) tests were administered at the wrong times. Similarly, as Connecticut began to implement a new assessment system statewide, administrators discovered that they needed to clarify program guidance because some local programs were mistakenly measuring literacy gains using a test designed solely for student placement. Federal efforts to improve quality and accountability have focused on (1) developing model indicators; (2) providing technical assistance to states and local programs on data collection, assessment, and developing performance standards and measures; and (3) requiring states to set aside funds for training and demonstration projects. Provisions of the National Literacy Act focus on improving quality in adult education programs by requiring the Secretary of Education to develop indicators of program quality. The indicators were to be used as models for judging state and local programs receiving federal funding. States were also required to develop and implement their own indicators, which might or might not correspond to the federal model, and use them to evaluate state and local programs. 
The Department of Education developed model indicators by (1) reviewing adult education indicators already being developed by various states and indicators used by other federal programs, (2) meeting with experts and adult educators, (3) commissioning background papers by experts in the field, and (4) conducting workshops for state directors who would be responsible for developing and implementing the state indicators. The resulting eight model indicators of program quality are listed in table 3.1. The indicators cover student outcomes, that is, learner progress toward attainment of basic skills and competencies and learner advancement in the program. They also focus on recruiting and retaining adult education students and other indicators of program quality—planning, curriculum and instruction, staff development, and provision of support services. The Department did not attempt to set performance standards for adult education programs but limited its work to developing indicators and providing some sample measures for each indicator. The Department defined an indicator as a variable that reflects effective and efficient program performance. It is to be distinguished from a specific measure used to determine the quantitative level of performance for the indicator. For example, to measure learner progress, states could use standardized test score gains, teacher reports of gains in communication competencies, or alternative assessment methods (such as portfolio assessments, student reports of attainment, or improvements in specific employability or life skills). An indicator is also to be distinguished from a performance standard, which defines acceptable performance in terms of a specific numeric criterion. The National Literacy Act also required states to adopt indicators by July 1993 and use them to evaluate local programs. States were required to adopt, at a minimum, indicators for recruitment, retention, and student learning outcomes. 
However, decisions about whether to adopt the Department’s model indicators, what measures to use, and whether to develop performance standards were left to the states. A review of amendments to state adult education plans submitted in July 1993 showed that for the most part states had adopted indicators similar to the Department’s model, especially in the areas of student outcomes, recruitment, and retention. However, states were less consistent in how they measured indicators. The review found that states were using different standardized tests to measure learner progress and had defined learner advancement in different ways. A 1995 survey of state adult education directors showed that 16 states had implemented standards and 8 states had developed but not yet implemented standards. Each of the three states we visited had developed standards for student outcomes, but not all of these standards were readily quantifiable. California had developed standards for seven levels of language proficiency for ESL students (the majority of adult education students in the state) but had not yet quantified performance on specific assessment measures. Standards for other kinds of adult education students in California had not yet been completed. Connecticut had set standards for educational gains expected over a specific time period and measured their achievement using test scores and the number of course credits or competencies attained. Iowa had set standards for grade level increases on standardized tests and for the performance of GED graduates on the GED exam. Since Iowa had not yet determined a specific strategy for competency-based education, the state had not yet established standards for competency-based tests. Experts as well as federal and state officials with whom we spoke disagreed about whether developing indicators would improve accountability and program quality. 
Some were concerned that the indicators do not move the field forward because they do not specify the types of results the federal government expects from state and local programs. One federal official doubted whether the indicators alone would help state and local programs collect higher quality data. However, other experts told us that they thought the indicators were a good first step. Still others said that the federal government should not be setting standards for states because states’ literacy problems and clientele differ. It is too soon to tell whether state-developed indicators, measures, and performance standards will result in the collection of more useful data or help states evaluate local programs since the 1993-94 program year was the first year in which indicators were to be used for evaluation. One state we visited planned to use information collected during the 1993-94 program year as baseline data and begin to hold local programs accountable for performance on the state’s indicators in subsequent years. Other federal efforts have been initiated to help states develop better accountability systems. Two of these efforts are designed to help build the capacity and expertise of state adult education staff to evaluate local programs. In 1993, the Department hired a contractor for a 3-year technical assistance effort designed to assist state education agencies with assessment, evaluation, and the development of performance standards and measures. And, in 1993, the National Institute for Literacy awarded grants to five states to develop performance measurement systems for literacy, with a specific focus on integrating systems used by different agencies that provide literacy services. Department officials also told us that they were acting to improve the quality of data collected on adult education programs. In concert with state adult education directors, the Department has been examining whether to modify the existing federal reporting requirements. 
They have held several meetings but have not yet issued any recommendations. In addition, the Department has developed and tested an automated management information system that would allow programs to collect data on individual students and a computer program that would help states more easily convert data they collected to the statistical reports required by the Department. A field test of the management information system in selected local programs in five states revealed that local staff appreciated the system’s report-writing capabilities but remained highly resistant to performing data collection and entry. In addition to these efforts, the requirement that states set aside a portion of their federal funds for demonstration projects and training may also help states move toward better accountability systems. Although the Department has not completed an ongoing national evaluation of the use of these funds, state officials asserted that the set-aside was critical to their efforts to improve program quality. All three states had used these federal funds, in part, to develop competency-based instruction and assessment systems; they had also used the funds to address state-specific issues. California had developed a training institute for ESL teachers, Connecticut had used some of the funds to help implement a new statewide management information system, and Iowa had held a state literacy conference to examine how to better measure adult student progress through qualitative assessments. The broad goals and flexibility of the AEA and its State Grant Program have resulted in a federal program that is serving many different populations, yet has difficulty determining its target populations, objectives, or a means to measure program results. 
Although the broad goals and corresponding flexibility give state and local officials the latitude to design programs and quality indicators tailored to their particular needs and priorities, some state officials and experts have voiced concerns that the federal government has not provided sufficient vision and guidance. This poses a challenge for developing accountability measures. The program has had difficulty ensuring accountability for results—that is, being able to clearly or accurately say what program funds have accomplished. Although the Department of Education relies on federal reporting requirements and program quality indicators to provide this information, the data the Department receives are of questionable value. Because state and local client data are missing or inaccurate, attempts to make the program accountable may be compromised. Until further guidance is developed on measurable objectives and ensuring the quality of client data, state-developed indicators and standards are unlikely to improve accountability. In its written comments on a draft of the report, the Department of Education recognized that we identified the three areas that are critically important to improving accountability in adult education: clear purpose and expectations, good assessment instruments, and high-quality data. The Department also stated its commitment to improving program accountability through several current initiatives. These initiatives include developing an individualized student record keeping system; moving toward an outcomes-based national data collection system; conducting evaluations of delivery systems, effective practice, assessment, and performance measurement; providing technical assistance in designing and using performance measures and standards; and developing training programs for adult education staff in collecting, analyzing, and reporting student and program data. (The Department’s letter appears in app. II.) 
Pursuant to a congressional request, GAO provided information on the Adult Education Act’s (AEA) State Grant Program, focusing on: (1) its coordination with federal employment training programs; and (2) the extent to which the program ensures accountability for program quality and results. GAO found that: (1) AEA goals are broad so that people with diverse backgrounds can have access to various types of educational instruction; (2) the most common programs funded under the State Grant Program include basic education, secondary education, and English as a Second Language programs; (3) the State Grant Program has had difficulty ensuring accountability for program results due to a lack of clearly defined program objectives, questionable adult student assessments, and poor student data; (4) coordination between the State Grant Program and federal employment training programs is essential, since many individuals need instruction provided by both of these programs; and (5) some experts disagree about whether developing model indicators of program quality will help states define measurable program objectives, evaluate local programs, and collect more accurate data.
The United States experienced heavy aircraft and aircrew losses to enemy air defenses during the Vietnam War. Since then, the services have recognized air defense suppression as a necessary component of air operations. Consequently, when a crisis arises, suppression aircraft are among the first to be called in and the last to leave. Radar is the primary means used by enemy forces to detect, track, and target U.S. aircraft with missiles and guns. Hence, U.S. suppression aircraft focus on trying to neutralize, degrade, or destroy the enemy’s air defense radar equipment. U.S. suppression aircraft, using missiles and jammers, generally begin suppressing enemy air defenses after they begin emitting radio-frequency signals. Also, in some cases, aircraft launch antiradiation missiles that can search for and destroy enemy radars if they are turned on. At some risk to the aircraft and aircrews, suppression aircraft must be in the vicinity of the enemy air defenses to complete their mission. Enemy radars in the past were usually fixed in position, operated independent of each other, and turned on for lengthy periods of time—all of which made them relatively easy to find and suppress through electronic warfare or physical attack. Such was the case in Operation Desert Storm, when suppression aircraft such as EA-6B and the now-retired EF-111 and F-4G played a vital role in protecting other U.S. aircraft from radar-guided missile systems. In fact, strike aircraft were normally not permitted to conduct air operations unless protected by these suppression aircraft. The EA-6B and EF-111 were equipped with transmitters to disrupt or “jam” radar equipment used by enemy surface-to-air missiles or antiaircraft artillery systems. The F-4G, F/A-18, and EA-6B used antiradiation missiles that homed in on enemy radar systems to destroy them. The Air Force replaced the F-4G with a less capable aircraft, the F-16CG, but did not upgrade or replace the EF-111. 
According to DOD, countries have sought to make their air defenses more resistant to suppression. These efforts include increasing the mobility of their surface-to-air missiles and radar equipment, connecting radars together into integrated air defense systems, and adding sophisticated capabilities so that the radar can detect aircraft while turned on for a shorter period of time. These defenses use various means to track and target aircraft, including modern telecommunications equipment and computers to create networks of early warning radar, missile system radar, and passive detection systems that pick up aircraft communications or heat from aircraft engines. Integrated networks provide air defense operators with the ability to track and target aircraft even if individual radar elements of the network are jammed or destroyed. Since the end of Desert Storm in 1991, U.S. suppression aircraft have been continuously deployed to protect fighter aircraft maintaining the no-fly zones over Iraq. More recently, these aircraft have been deployed to Yugoslavia and Afghanistan. In 1999, during Operation Allied Force in Yugoslavia and Kosovo, these aircraft were extremely important for protecting strike aircraft from enemy radar-guided missiles. However, according to the Defense Intelligence Agency, U.S. aircraft were unable to destroy Yugoslavia’s integrated air defense system because Yugoslav forces often engaged in elaborate efforts to protect their air defense assets. These efforts reduced Yugoslav opportunities to engage U.S. and coalition aircraft because their air defense assets could not be used and protected simultaneously. Nevertheless, in two separate incidents, Yugoslav forces managed to shoot down an F-117 stealth fighter and an F-16CG. In addition to the two losses, the inability of the United States to counter Yugoslav air defenses that included radar and infrared guided missiles made it necessary for U.S. 
forces to (1) fly thousands of dedicated suppression missions, pushing suppression forces in Europe to their limits, and (2) raise their strike missions to higher altitudes or keep low-flying aircraft such as the Army’s Apache attack helicopters out of combat to reduce risk from infrared missile threats. DOD now primarily uses Navy and Marine Corps EA-6Bs for radar jamming and Air Force EC-130s for communications jamming. Recently, EA-6Bs and EC-130s saw combat in Operation Enduring Freedom in Afghanistan. Air defenses there were relatively weak compared to those faced by U.S. aircraft in Yugoslavia, placing fewer demands on suppression aircraft to jam air defense systems. This gave the EA-6B an opportunity to exploit new techniques to jam ground communications by working with the EC-130 and other electronic intelligence gathering aircraft. Since our January 2001 report, the services have had some success in improving their suppression capabilities, but they have not reached a level needed to counter future threats. When the Air Force retired the EF-111 without a replacement, the Navy’s EA-6B became DOD’s primary airborne radar jammer, providing suppression support for all the services. High demand for the aircraft has exacerbated current wing and engine problems, and the Navy has been unable to meet its overall requirements. Efforts are underway to address the EA-6B’s problems and improve its suppression equipment, but the Navy projects that the declining EA-6B inventory will be insufficient to meet DOD’s needs beyond 2009. The Air Force’s F-16CJ fleet has grown and the aircraft’s capabilities are being improved, but it still lacks some of the capabilities of the F-4G, the aircraft it replaced. Also, the Air Force and the Navy have improvements underway for other systems such as the EC-130 and antiradiation missiles but face funding challenges. Finally, to the extent there are gaps in suppression capabilities, U.S. 
fighter aircraft and helicopters must rely on self-protection equipment to suppress enemy air defenses, but some of this equipment has proven unreliable. The services have some programs underway to improve this self-protection equipment, such as developing new towed decoys, but, as discussed below, these programs have been hampered by technical and funding issues. The Navy does not have enough EA-6Bs to meet DOD’s suppression needs due to wing fatigue and engine problems that have grounded aircraft; downtime required for routinely scheduled depot-level maintenance; and, in the future, downtime to install major capability upgrades in the aircraft. Because of its limited numbers and high rate of use by the warfighting commanders, DOD designated the EA-6B as a “low density, high demand” asset to support worldwide joint military operations. EA-6Bs are included in all aircraft carrier deployments and support the Air Force’s Aerospace Expeditionary Forces. To meet a requirement to field 104 aircraft out of a total inventory of 124 (with an average age of 19 years), the Navy refurbished 20 retired EA-6Bs. Subsequently, in 2001, 2 EA-6Bs crashed, reducing the total inventory to 122 aircraft. Also in that year, the Navy planned to raise the requirement to 108 aircraft and establish an additional EA-6B squadron, but that has been delayed until March 2004. In February 2002, the Navy had only 91 EA-6Bs available for operations instead of the 104 required. As a result, while the Navy has been able to meet operational commitments, it has been unable to meet some of its training and exercise requirements. The Navy is currently taking action to remedy EA-6B wing fatigue and engine failures, and flight restrictions have been put in place. However, because wing fatigue has continued to grow, the Navy may have to ground additional aircraft. 
The Navy plans to replace a total of 67 wing center sections to remedy the problem, and it will spend $4.4 million each for such replacements for 17 aircraft in the fiscal year 2002 budget. In addition, DOD’s 2002 supplemental funds covered 8 additional wing replacements, and the Navy is programming funds for 10 more wing replacements for each year in the Future Years Defense Plan. In 2001, the Navy also began experiencing problems with the EA-6B’s engines. Premature failure of certain engine bearings caused some engines to fail, and it may have caused the crash of two aircraft in 2001. The Navy grounded over 50 engines until they could be overhauled, but it expects to have them back in service by late this year. The constant deployment of this “low density” EA-6B fleet for contingency operations has contributed to its deterioration and to other maintenance-related problems. For example, to maintain the readiness of squadrons deployed to Kosovo and other ongoing commitments, the Navy took spare parts and personnel from nondeployed squadrons and subjected the EA-6B to above average cannibalization of parts. This impacted the ability of nondeployed units to train and maintain aircrew proficiency. The constant deployments also added to personnel problems in terms of quality of life. EA-6B crews, for example, are often away from home for extended periods of time, creating hardships for their families. Given the EA-6B’s age and high rate of use, the Navy says that even if the EA-6B fleet’s problems are remedied, it will be unable to meet force structure requirements in 2009, and all EA-6B aircraft will be out of the force by 2015. Therefore, the Navy says it needs a replacement aircraft to begin entering the force by 2009 if requirements are to be met. The Navy has been upgrading its EA-6B electronic warfare equipment over the years, and it is currently modifying its radar signal receiver and related equipment. 
The modification program, known as the Improved Capability Program (ICAP) III, provides improved radar locating and jamming capabilities to counter modern enemy air defense threats. As of January 2002, according to DOD, ICAP III engineering and manufacturing development was about 94 percent complete, and the modification began testing on the first aircraft in November 2001. The Navy expects ICAP III to reach initial operational capability in 2005 and to be installed on all EA-6Bs by 2010, about the time when the aircraft begins to reach the end of its service life. The Navy is considering using a modified version of the ICAP III equipment on whatever follow-on suppression aircraft are developed and fielded, and is also upgrading the EA-6B jammer pods to increase the number of frequencies that can be jammed. The Air Force is procuring 30 additional F-16CJ suppression aircraft to meet force structure requirements for the Air Force’s Aerospace Expeditionary Forces. In all, 219 F-16CJ aircraft will be available. To fully implement its concept of operations for the Expeditionary Forces, the Air Force also plans to increase the capability of the latest model F-16C/Ds (block 40) and the F-16CJs (block 50) to be used for both attack and suppression missions. To accomplish this, the F-16C/Ds will be modified to carry the HARM Targeting System, and the F-16CJs will be modified to carry the Advanced Target Pod. The HARM Targeting System will provide situational awareness to the F-16C/Ds and targeting information to the HARM missile to permit them to perform the suppression mission. The Advanced Target Pod will enable the F-16CJs to deliver precision-guided munitions. The Air Force recently upgraded the HARM Targeting System and is procuring additional systems. The upgrade (known as R-6) provides better and faster targeting information to the missile, but even with this pod the F-16CJ still lacks some of the capabilities of the retired F-4G. 
The Air Force completed the R-6 upgrade on fielded systems in December 2001 and systems subsequently produced will have it. Once 31 additional systems are delivered in 2002, the F-16CJs will have a total inventory of 202 systems, short of the Air Force’s original goal of having 1.1 systems per aircraft, or about 240 systems. Also, the Air Force has partially funded additional upgrades (called R-7) for the HARM Targeting System in 2003, and plans to fully fund the upgrade in the 2004 budget cycle, according to Air Force operational requirements officials. These officials also stated that they are considering funding for additional R-7 HARM Targeting Systems for F-16CJs and F-16C/Ds in the 2004 budget submission. The Air Force is also upgrading the capabilities of the EC-130 Compass Call Aircraft, which perform primarily communications jamming missions. The upgrades are intended to improve the aircraft’s jamming capabilities, reliability, and maintainability. The EC-130 is another “low density, high demand” asset with a total of only 13 operational aircraft, of which 11 are being funded for upgrade. Gaps in the services’ air defense suppression aircraft make it essential that other aircraft have the ability to protect themselves from enemy defenses. The services have already identified serious reliability problems with current self-protection systems on U.S. combat aircraft, including jammers, radar warning receivers, and countermeasures dispensers. Most of the current systems use older technology and have logistics support problems due to obsolescence. Also, as we reported last year, the self-protection systems on strike aircraft may have more problems than the services estimate. In reviewing test results using the new Joint Service Electronic Combat System Tester, we found that aircraft the services believed to be mission capable were not because of faults in their electronic combat systems that were undetected by older test equipment. 
The faults ranged from the identification of parts needing to be replaced inside the electronic combat systems, to the wiring, antennas, and control units that connect the systems to the aircraft. For example, 41 of 44 F-15C aircraft and 10 of 10 F-18C aircraft previously believed to be fully mission capable were subsequently found to have one or more faults in their self-protection systems, and 1 F-18C had 12 such faults. Coupled with the problems in the suppression aircraft, these shortcomings could create survivability problems for the aircraft should they encounter significant enemy air defense capabilities in some future conflict. The services have some programs underway to improve self-protection capabilities such as the joint Navy and Air Force Integrated Defensive Electronic Countermeasures (IDECM) system and the Precision Location and Identification (PLAID) system. The IDECM system will provide the F-15, F/A-18E/F, and B-1B aircraft with improved self-protection through jammers and towed decoys. The system has experienced some delays in engineering and development, and the estimated procurement cost has doubled. The PLAID system will provide aircrews with accurate location and identification of enemy air defense systems. The services expect to field both systems in 2004. The services have initiated additional research and development efforts to improve their ability to suppress enemy air defenses, but they face technology challenges and/or a lack of funding priority for many of these programs. The Miniature Air Launched Decoy (MALD), which an Air Force analysis has shown could make a significant contribution to aircraft survivability, illustrates this problem. MALD is supposed to mimic an aircraft and draw enemy air defenses away from the real aircraft. MALD recently completed an Advanced Concept Technology Demonstration, and the Air Force had funded an initial small procurement of 300 decoys, with potential for further procurement. 
According to the Air Force, after experiencing technical problems, MALD did not meet user needs, and its procurement cost estimates increased. Thus, the Air Force canceled the procurement and restructured MALD to address deficiencies highlighted in the demonstration. The Navy has been developing its own decoy, the Improved Tactical Air Launched Decoy (ITALD), but it has procured only part of its inventory objective. Despite recurring congressional increases for the past several fiscal years, the Navy has not submitted budget requests for ITALDs or procured units to complete its inventory objective because of competing priorities. Also, the Navy is upgrading the HARM missile used to attack shipborne and ground-based radars. The first phase of the upgrade improves missile accuracy by incorporating global positioning and inertial navigation systems into the missile. A second upgrade, the Advanced Anti-Radiation Guided Missile, will add millimeter wave capability to allow the missile to target radars that have stopped emitting. While the Air Force employs the HARM missile as well, it is not involved in the HARM upgrade program. DOD has acknowledged the gap in U.S. air defense suppression capabilities for some time and has conducted several studies to identify solutions, but it has had little success in closing the gap. Our past work and the work of others have cited the need for DOD to establish some coordinating entity to develop a comprehensive strategy that addresses this capability gap. In response to our previous report, DOD stated that its Airborne Electronic Attack Analysis of Alternatives would provide the basis for such a strategy. However, the analysis was limited to assessing options for replacing the EA-6B rather than assessing the needs of the overall suppression mission. 
Upon completion of the analysis, the Navy and the Air Force proposed options for replacing EA-6B capabilities, and DOD is currently evaluating these proposals for consideration in the 2004 budget submission. In fiscal year 2000, Congress expressed concerns that DOD did not have a serious plan for a successor to the EA-6B aircraft and directed DOD to conduct the Airborne Electronic Attack Analysis of Alternatives for replacing the EA-6B. DOD indicated in its response to our January 2001 report that the analysis would lead to a DOD-wide strategy and balanced set of acquisition programs to address the overall gaps between suppression needs and capabilities. However, it was only intended to address the airborne electronic attack aspect of the suppression mission and therefore did not address the acknowledged problems with aircraft self-protection systems or the technical and funding challenges of other service programs such as the Navy’s ITALD program, the Air Force’s MALD program, and the Air Force’s EC-130 modifications. The Navy took the lead on the joint analysis with participation by all the services. The analysis, completed in December 2001, concluded that the services needed a standoff system or a combination of systems to operate at a distance from enemy targets and a stand-in system that would provide close-in suppression protection for attacking aircraft where the threat is too great for the standoff systems. The analysis established the capabilities of the EA-6B upgraded with ICAP III as the foundation for any future system. It presented the Navy and the Air Force with detailed models of estimated costs and capabilities of 27 mixes of new and/or upgraded aircraft to consider for follow-on electronic attack capabilities but did not recommend any particular option. These options ranged in estimated 20-year life cycle costs from $20 billion to $80 billion. 
In conjunction with the analysis, the services formed a Joint Requirements Coordination and Oversight Group to coordinate operational requirements for airborne electronic attack, review ongoing and planned production programs for the mission, and exchange information among the services to avoid unnecessary duplication. A key activity of the group is to coordinate Navy and Air Force proposals for replacing the EA-6B. According to group members, this mechanism will help address airborne electronic attack needs through the coordination of complementary systems agreed to by the services. In June 2002, the services presented their proposals for follow-on capabilities to the Office of the Secretary of Defense. According to the services, the Navy proposed to replace the EA-6B with an electronic attack version of its new F/A-18E/F fighter and attack aircraft. The Air Force proposed adapting the B-52H bomber for standoff suppression by adding jamming pods to it, plus a stand-in suppression capability provided by a MALD-type decoy with jamming capabilities or an unmanned aerial vehicle equipped with jammers. The services see these proposals as a coordinated, effective solution to the near- and far-term needs for airborne electronic attack. DOD is currently conducting an additional analysis of the proposals, and the Secretary will decide later this year what proposals to include in the fiscal year 2004 budget submission. The development of systems to replace the EA-6B will help close the gap between DOD’s suppression capabilities and needs. However, the service proposals that are currently being considered by DOD do not provide an integrated, comprehensive solution to the overall suppression needs. In addition, while the Joint Requirements Coordination and Oversight Group provides a mechanism to coordinate the services’ efforts, it has not been directed to develop a comprehensive strategy and monitor its implementation. 
Other assessments have also pointed to the lack of a coordinated approach to addressing the gap in air suppression capabilities. At DOD’s request, the Institute for Defense Analyses studied problems in acquiring electronic warfare systems. The Institute found several causes for the problems, including uncertainties in characterizing rapidly changing threats and system requirements, lack of adequate and stable funding, complexity of electronic warfare hardware and software, challenges in integrating the hardware and software on platforms, and difficulties in getting and keeping experienced electronic warfare personnel. Among other things, the Institute recommended that DOD establish central offices for electronic warfare matters in the Joint Chiefs of Staff and in each service, create a senior oversight panel, and prepare an annual electronic warfare roadmap to help correct some of the problems DOD faces in electronic warfare acquisition programs. While DOD has not established a coordinating entity to provide leadership for the suppression mission, it has recognized the need for such entities in other cross-service initiative areas, such as the development and fielding of unmanned aerial vehicles. In October 2001, the Under Secretary of Defense for Acquisition, Technology and Logistics established a joint unmanned aerial vehicles planning task force that will develop and coordinate road maps, recommend priorities for development and procurement efforts, and prepare implementing guidance to the services on common programs and functions. The air defense suppression mission continues to be essential for maintaining air superiority. Over the past several years, however, the quantity and quality of the services’ suppression equipment have declined while enemy air defense tactics and equipment have improved. DOD has recognized that a gap exists in suppression capabilities but has made little progress in closing it. 
In our view, progress in improving capabilities has been hampered by the lack of a comprehensive strategy, cross-service coordination, and funding commitments that address the overall suppression needs. DOD relies on individual service programs to fill the void, but these programs have not historically received a high priority, resulting in the capability gap that exists today. We continue to believe that a formal coordinating entity needs to be established to bring the services together to develop an integrated, cost-effective strategy for addressing overall joint air defense suppression needs. A strategy is needed to identify mission objectives and guide efforts to develop effective and integrated solutions for improving suppression capabilities. To close the gap between enemy air defense suppression needs and capabilities, we recommend that the Secretary of Defense establish a coordinating entity and a joint comprehensive strategy to address the gaps that need to be filled in the enemy air defense suppression mission. The strategy should provide the means to identify and prioritize promising technologies; determine the funding, time frames, and responsibilities needed to develop and acquire systems; and establish evaluation mechanisms to track progress in achieving objectives. In written comments on a draft of this report, DOD concurred with our recommendations and supported the need for a mechanism to coordinate electronic warfare strategy and systems acquisition. DOD stated that the Office of the Secretary of Defense (Acquisition, Technology and Logistics) is currently restructuring its staff to address cross-cutting issues, including the creation of an Assistant Director of Systems Integration for Electronic Warfare and an Integrated Product Team process to formulate a comprehensive approach to the electronic warfare mission area, including defense suppression. We believe this is a good step forward. 
DOD also stated that we were overly critical in our characterization of individual defense suppression systems and failed to acknowledge its full range of capabilities to suppress air defenses. We recognize that the services have substantial capabilities but remain concerned because there are insufficient aircraft to meet overall requirements and improvements have not kept pace with evolving threats. Several service-specific attempts have been made to remedy the acknowledged gap in capabilities, but they have faltered in competition for funding. In some cases, Congress intervened with guidance and increases to services’ budget requests for defense suppression to ensure that DOD addresses the capabilities gap. We believe that creation of a comprehensive strategy and effective coordinating entity would strengthen DOD’s ability to compete for funding and address the gap. DOD’s comments are reprinted in appendix II. In addition, DOD provided technical comments that we incorporated into the report where appropriate. To assess the condition of DOD’s suppression capabilities and DOD’s progress in developing a strategy for closing the gap in suppression capabilities, we interviewed Office of the Secretary of Defense, Joint Chiefs of Staff, Defense Advanced Research Projects Agency, Air Force, Army, Navy, and Marine Corps officials responsible for electronic warfare requirements and programs. We also interviewed service program managers for the EA-6B, EC-130, F-16CJ, HARM, aircraft self-protection systems, and programs under development. In addition, we met with officials from selected EA-6B squadrons and an EA-6B maintenance depot. We interviewed Defense Intelligence Agency officials and reviewed related intelligence documents to ascertain the capabilities of current and future enemy air defense systems. 
We also discussed air defense suppression programs and issues with various DOD contractors, including RAND Corporation, Northrop Grumman Corporation, General Atomics Aeronautical Systems, Incorporated, and Raytheon Systems Company. We reviewed pertinent DOD, service, and contractor documents addressing the status of suppression capabilities, plans for maintaining them, and potential solutions for closing the gap in capabilities. Specific locations we visited are listed in appendix I. We performed our review from October 2001 through August 2002 in accordance with generally accepted government auditing standards. As you know, the head of a federal agency is required under 31 U.S.C. 720 to submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Secretaries of the Army, Air Force, and Navy; the Commandant of the Marine Corps; and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me on (202) 512-4841. Major contributors to this report were Michael Aiken, Gaines Hensley, John Oppenheim, Terry Parker, Robert Pelletier, and Robert Swierczek.

Office of the Secretary of Defense, Washington, D.C.
Joint Chiefs of Staff, Washington, D.C.
Headquarters Elements, Air Force, Army, Marine Corps, and Navy, Washington, D.C.
Defense Intelligence Agency, Washington, D.C.

U.S. military aircraft are often at great risk from enemy air defenses, and the services use specialized aircraft to neutralize or destroy them. 
In January 2001, GAO reported that a gap existed between the services' suppression capabilities and their needs and recommended that DOD develop a comprehensive strategy to remedy the situation. In response to GAO's report, DOD emphasized that a major study underway at the time would provide the basis for a Department-wide strategy and lead to a balanced set of acquisition programs among the services. This report updates our previous work and assesses actions that DOD has taken to improve its suppression capabilities. The Department of Defense continues to face a gap between its need to suppress enemy air defenses and its capabilities to do so, despite some progress in upgrading its capabilities. There are not enough existing suppression aircraft to meet overall requirements, some aircraft are experiencing wing and engine problems, and improvements are needed to counter evolving threats. DOD's primary suppression aircraft, the EA-6B, is also reaching the end of its life cycle, and a replacement is needed as early as 2009. Furthermore, some aircraft self-protection equipment, which provides additional suppression capabilities, has also been found to be unreliable. DOD has not yet developed an integrated, comprehensive approach to the U.S. air defense suppression mission but has recently completed an Analysis of Alternatives that presented the services with 27 options for replacing the aging EA-6B. The services formed a coordinating group to assess the options, and in June 2002 presented service-specific proposals to the Office of the Secretary of Defense for analysis and consideration in the 2004 budget. However, the Analysis of Alternatives did not provide the basis for a comprehensive strategy to address the department's overall suppression needs. 
It analyzed only the airborne electronic attack portion of the mission and did not address needed improvements in aircraft self-protection systems or the technical and funding challenges of other service programs, such as the Navy's and Air Force's air-launched decoy programs.
Farming has always been a risky endeavor, and farmers have always had to manage risk as a part of doing business. Over the years, the federal government has played an active role in several ways to help mitigate the effects of production losses and low prices on farm income. For example, USDA's Risk Management Agency (RMA) administers the federal crop insurance program to protect farmers against major production losses. Under this program, RMA subsidizes the federal multiple-peril crop insurance program, which allows insured farmers to receive an indemnity payment if production falls below a certain level. In addition, to help protect farmers against the risk of low crop prices, USDA’s Farm Service Agency administered price- and income-support programs for farmers who grew certain crops--corn, wheat, grain sorghum, barley, oats, cotton, and rice. The 1996 farm bill changed the government’s role. It replaced the income- support programs with “production flexibility contracts” that provide for fixed but declining annual payments to participating farmers from 1996 through 2002. These government payments--known as transition payments--are not tied to market prices, and participating farmers are not restricted with regard to the type or amount of crops that they plant, as they were in the earlier programs. Furthermore, unlike the deficiency payments of the last 6 decades, the transition payments do not rise in years when crop prices are low, nor do they fall in years when prices are high. As shown in table 1, the 1996 farm bill specified that transition payments would total about $36 billion over the 7-year period, declining from about $5.6 billion in 1996 to about $4 billion in 2002. 
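The multiple-peril indemnity mechanism described above (a payment when production falls below a guaranteed level) can be illustrated with a minimal sketch. The yield-guarantee design, coverage level, and price below are hypothetical illustrations, not actual RMA program parameters:

```python
# Illustrative sketch of a multiple-peril crop insurance indemnity.
# Assumes a simple yield-guarantee design (coverage level x expected
# yield); all parameter values here are hypothetical, not RMA's.

def indemnity(expected_yield, actual_yield, coverage_level, price_election):
    """Pay for the shortfall below the guaranteed yield, valued at the
    elected price; zero if actual production meets the guarantee."""
    guarantee = coverage_level * expected_yield      # guaranteed bushels/acre
    shortfall = max(0.0, guarantee - actual_yield)   # bushels/acre lost
    return shortfall * price_election                # dollars/acre

# A farmer expecting 150 bu/acre, insuring 75 percent at $2.00/bu,
# who harvests only 90 bu/acre:
print(indemnity(150, 90, 0.75, 2.00))   # 45.0 dollars/acre: (112.5 - 90) * 2.00
print(indemnity(150, 140, 0.75, 2.00))  # 0.0 -- production met the guarantee
```

The key design point is that the payment depends only on the yield shortfall, not on market prices, which is why the report treats crop insurance as protection against production risk rather than price risk.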
By giving farmers increased flexibility in deciding which crops to plant, the 1996 farm bill allows them to choose the particular crop or combination of crops that they believe offers the best chance to maximize their profits and offset the decline in income resulting from lower government payments. However, the increased flexibility in planting decisions brings other risks. For example, small increases in expected profits may lead many farmers to decide to increase the acreage devoted to a particular crop. This, in turn, could result in the increased production of the crop nationwide and ultimately in lower prices as a result of the greater supply. Section 192 of the 1996 farm bill required that USDA, in consultation with the Commodity Futures Trading Commission (CFTC), educate farmers in managing the financial risks inherent in producing and marketing agricultural commodities. The act specified that, as a part of such education activities, USDA may develop and implement programs to assist and train farmers in using (1) forward contracts, which enable farmers to lock in a price for their crop or livestock production prior to harvest or slaughter, (2) crop insurance, which ensures compensation if crop yields are substantially lower than expected, and (3) hedging--buying or selling futures or options contracts on a commodity exchange, such as the Chicago Board of Trade--which reduces the risk of receiving lower prices for crops or livestock. The act authorized USDA to use its existing research and extension authorities and resources to implement this provision. In March 1997, the Secretary of Agriculture organized a steering committee to direct the government's education activities for managing agricultural risk. The steering committee is chaired by RMA’s administrator and includes a CFTC commissioner; the administrator of USDA’s Cooperative State Research, Education, and Extension Service (CSREES); and the director of USDA's National Office of Outreach. 
These agencies have different responsibilities. RMA primarily administers the federal crop insurance program; the 1996 farm bill expanded its authority to include a broader risk management perspective. CFTC, which regulates commodity futures and options trading in the United States, also develops and maintains research and informational programs concerning futures and options trading for farmers, commodity market users, and the general public. CSREES develops and conducts agricultural research, higher education, and extension programs to provide education and technical assistance to farmers and the general public. USDA's National Office of Outreach is responsible for ensuring that information, technical assistance, and training are available to all USDA customers, with an emphasis on underserved populations. USDA’s 1996 Agricultural Resource Management Study (Phase 3), based on a statistical sample of farmers, found that about 42 percent of the nation’s 2 million farmers used at least one of the risk management tools--forward contracts, crop insurance, or hedging--to manage their income risk. In 1996, a substantially greater percentage of farmers with agricultural sales of at least $100,000 (large-scale farmers) used each risk management tool than did farmers whose agricultural sales were less than $100,000 (small-scale farmers). Similarly, a greater percentage of farmers whose primary crops were corn, wheat, or cotton purchased crop insurance and used forward contracts than did farmers who grew other field crops. (App. II provides detailed data on farmers’ use of risk management tools by sales level, commodity, geographic region, and the receipt of USDA transition payments.) Table 2 shows that, among all U.S. farmers, a substantially greater percentage of large-scale farmers used each risk management tool than did small-scale farmers in 1996. 
Among large-scale farmers, at least 52 percent purchased crop insurance, at least 55 percent used forward contracts, and at least 32 percent engaged in hedging. In contrast, no more than 16 percent of small-scale farmers purchased crop insurance, no more than 29 percent used forward contracts, and no more than 22 percent engaged in hedging. Available data were insufficient to determine whether large-scale farmers hedged with futures or options contracts to a greater extent than small-scale farmers in 1996. Table 3 shows that at least 70 percent of those large-scale farmers who received transition payments purchased crop insurance, at least 66 percent used forward contracts, and at least 34 percent engaged in hedging in 1996. However, the minimum extent of usage was even greater among farmers who had more than $500,000 in sales and received transition payments--at least 73 percent purchased crop insurance, at least 78 percent used forward contracts, and at least 50 percent engaged in hedging in 1996. As table 4 shows, among all U.S. farmers, a greater percentage of those whose primary crop was corn, wheat, or cotton purchased crop insurance and engaged in forward contracting than did farmers who grew other field crops or raised livestock in 1996. Among farmers who primarily grew corn, wheat, and cotton, at least 54 percent purchased crop insurance and at least 50 percent used forward contracts. In contrast, among farmers who primarily raised other field crops, 43 percent at most purchased crop insurance and 45 percent at most used forward contracts. In addition, hedging was used by at least 35 percent of cotton farmers, which was a higher percentage than for farmers who grew other field crops in 1996. However, available data were insufficient to determine whether corn and wheat farmers engaged in hedging with futures or options contracts to a greater extent than did farmers who primarily raised other crops or livestock. 
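The "at least" figures above are lower bounds derived from confidence intervals around the survey estimates (see app. II). A minimal sketch of how such a bound can be computed for a survey proportion, using a simple normal approximation; the Economic Research Service's actual variance estimation for its complex sample design would differ:

```python
import math

def proportion_lower_bound(p_hat, n, z=1.96):
    """Lower end of an approximate 95% confidence interval for a
    proportion, using the normal approximation. A complex survey
    design would need design-adjusted standard errors instead."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    return max(0.0, p_hat - z * se)

# Hypothetical illustration: if 60 percent of a subsample of 400
# respondents reported using forward contracts, the reportable bound is
print(round(proportion_lower_bound(0.60, 400), 3))  # 0.552, i.e., "at least 55 percent"
```

Reporting the lower bound, as the tables here do, guards against overstating usage when the point estimate carries sampling error.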
Table 5 shows that, among corn farmers who received transition payments, at least 54 percent purchased crop insurance, at least 61 percent used forward contracts, and at least 31 percent engaged in hedging in 1996. Among wheat farmers who received transition payments, at least 81 percent purchased crop insurance, at least 46 percent used forward contracts, and at least 15 percent engaged in hedging. Among cotton farmers who received transition payments, at least 88 percent purchased crop insurance, at least 59 percent used forward contracts, and at least 25 percent engaged in hedging. To prepare farmers for managing their risks, USDA has focused primarily on developing regional or state partnerships of government, university, and private organizations to foster a risk management educational program. The university partners developed and implemented a series of regional and local risk management conferences targeted initially at groups that influence farmers--bankers, crop insurance agents, grain elevator operators, and agricultural educators. USDA expects that these individuals will provide farmers with specific information for using risk management tools as the program continues. During fiscal year 1998, USDA also awarded 17 grants for risk management education projects, provided funding to land grant universities to promote additional risk management education efforts, and funded the development of an electronic risk management education library. In fiscal year 1998, USDA obligated $5 million of RMA’s $10 million for crop insurance research to RMA’s risk management education initiatives--amounting to about $2.50 per U.S. farmer. These funds were the predominant source of risk management education funding within USDA. In comparison, a CSREES official told us that CSREES typically obligates only about $100,000 per year, primarily for specific risk management education projects. 
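The per-farmer figure cited above follows from simple division, using the count of about 2 million U.S. farmers reported earlier from the 1996 survey:

```python
# Sketch of the per-farmer funding arithmetic: $5 million in education
# funds spread across roughly 2 million U.S. farmers (the farm count
# cited earlier in this report).
education_funds = 5_000_000   # dollars obligated from RMA research funds
farmers = 2_000_000           # approximate number of U.S. farmers
per_farmer = education_funds / farmers
print(f"${per_farmer:.2f} per farmer")  # $2.50 per farmer
```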
The official noted that land grant universities may also use a portion of their general CSREES education funding to support risk management education projects; however, the amount that universities spent in fiscal year 1998 is not known. For fiscal year 1999, USDA has allocated $1 million of RMA’s $3.5 million for crop insurance research to risk management education. In response to the 1996 farm bill’s requirement that it educate farmers about managing their production and marketing risks, USDA used a September 1997 national risk management education summit to initiate a series of 20 national and regional risk management education conferences. USDA’s conferences focused on developing partnerships with “third-party influencers” in an effort to leverage the available government funds to train those who are in a position to educate farmers on risk management tools. According to USDA’s director of risk management education, the training would enable third-party influencers to demonstrate to farmers how the various tools fit together in an overall risk management and marketing plan. These individuals interact frequently with farmers and are in a position to influence the risk management decisions farmers make. For example, land grant college or extension service educators provide various training and advisory services to farmers on both the production and business aspects of farming. Crop insurance agents meet with farmers several times during the year as the farmers decide on insurance coverage levels and provide the agents with information on acres planted and final crop production levels. The bank or farm credit services loan officers meet with farmers to discuss business plans and arrange for operating loans. Commodity brokers interact with farmers who choose to engage in hedging with futures or options. Farmers interact with grain elevator operators when they sell their crops on either a cash or forward contract basis. 
According to RMA, the conferences helped participants to gain information and knowledge about areas outside their own expertise. For example, commodity brokers learned more about crop insurance, and crop insurance agents learned more about the futures market. As of December 1998, USDA’s major conferences had reached a relatively small percentage of the target groups’ members. Table 6 shows that 335 (2 percent) of about 15,000 crop insurance agents in the United States had attended a USDA-sponsored risk management conference. Similarly, only 251 bankers and 96 grain elevator operators had attended the conferences, although there are about 3,200 agricultural banks and about 10,000 grain elevators in the United States. About 20 percent of the conference attendees were USDA or other government agency employees, rather than members of the groups influencing farmers. Conference speakers generally presented broad, overview information about a number of farm management areas without providing detailed information addressing specific problems in any single area. According to RMA officials, providing overview information was appropriate because it enabled participants to appreciate how their specialty area interacts with other areas for the benefit of farmers. USDA also expanded the scope of the conferences to discuss more than the two risk areas that the 1996 farm bill had identified--producing and marketing agricultural commodities. Sections of the conferences also addressed tools for reducing financial risks, legal risks, and human resource risks, in addition to tools for reducing production and marketing risks. RMA officials noted that financial, legal, and human resource risks are also significant concerns for farmers. RMA officials consider the risk management conferences to be a first step in developing regional and state partnerships with USDA, universities, and private organizations to provide risk management education to farmers. 
USDA has designated five land grant university educators as regional coordinators of its risk management education program. (App. III identifies, for each region, the coordinator’s university affiliation, the associated RMA regional service offices, and the states covered.) The regional coordinators are responsible for (1) working with private sector partners, including bankers, crop insurance company representatives, and farmer organizations, to develop regional and local conferences, meetings and other training efforts and (2) serving as a focal point for providing information about the risk management education opportunities in each region. State and local educational activities, training sessions, and events sponsored by these partnerships have begun to reach additional farmers and individuals who influence farmers’ decisions. In fiscal year 1998, USDA spent $1.5 million to support the risk management conferences and initiate regional partnerships, including about $300,000 for the conferences, $250,000 for publications and materials, $133,000 for the regional coordinating offices, and $45,000 for an evaluation project. USDA also spent about $350,000 for special outreach projects designed to enhance the risk management skills of small and minority producers in areas described as underserved by traditional risk management tools and $50,000 to sponsor a Future Farmers of America essay contest on risk management. In addition to sponsoring conferences and developing regional partnerships, USDA awarded a series of risk management education and research grants totaling $3 million. In February 1998, USDA issued a request for proposals in the Federal Register. Subsequently, a peer review team, working under the risk management education steering committee, evaluated 107 proposals requesting over $19 million. In June 1998, USDA awarded 17 risk management education grants, ranging from $19,172 to $250,000, and averaging about $178,000. 
USDA awarded 12 grants to land grant colleges and universities, 3 to other educational entities, 1 to a crop insurance industry organization, and 1 to a grain elevator industry organization. Most of the grants included additional public and private sector partners who agreed to participate in the projects with the primary grantees. With expected project completion dates ranging from the summer of 1999 through the fall of 2001, the projects are currently ongoing, and thus, in many cases, the training phase has not begun. The grant projects target diverse audiences--ranging from farmers with limited resources, farmers growing specific commodities in individual states or regions, and dairy farmers to crop insurance agents and grain elevator operators across the country--and were for diverse purposes. For example, the grantees focused on different geographic coverages: seven planned national coverage, four targeted regional audiences, and six directed their efforts in a single state. Similarly, some of the grantees focused on particular groups: four targeted limited resource or minority farmers, one focused on the risk management needs of citrus farmers, and one focused on dairy farmers. Typically, the projects focused on training, including a curriculum development phase, a "train the trainer" phase, and a series of seminars or workshops. However, two grants provided for research about farmers’ use of and need for risk management tools. (App. IV provides information about the grantees, grant amount, and objectives for each of the 17 grants.) As a third element of its risk management education initiative in fiscal year 1998, USDA provided $362,000, divided among 96 land grant colleges and universities, to promote and augment their risk management education programs. 
According to USDA, these funds enabled the cooperative extension system to reach farmers during the winter of 1998-99 with a substantial risk management curriculum, including (1) regional video teleconferences, (2) small producer workshops at the local level, and (3) fact sheets, teaching guides, and classroom visual aids adapted to agricultural conditions in a particular state. In the fourth part of its response to the legislative mandate, USDA entered into a $200,000 contract with the University of Minnesota to develop an Internet website that provides an electronic library of risk management education materials. As of January 1999, the website contained over 700 risk management publications, presentations, decision aids, and other materials either resident on the site or linked to it. This information is useful to farmers as well as to the groups that influence them. On average, about 60 individuals per day made use of the website in January 1999. We provided the U.S. Department of Agriculture with a draft of this report for review and comment. We met with Agriculture officials, including the Administrator of the Risk Management Agency, who stated that the agency agreed with the report and that the report was balanced and accurate. However, the Department believed that the report should (1) provide more detailed information on how the $5 million for risk management education initiatives was spent, (2) discuss the Risk Management Agency’s regional and local risk management conferences in the context of its broader effort to establish public and private partnerships, and (3) discuss the Risk Management Agency’s efforts to provide risk management education through land grant universities as a separate initiative. 
We revised the report to more fully identify the various education initiatives that the Risk Management Agency has funded, explain that one of the purposes of the agency’s conferences was to foster public-private partnerships, and identify the support for the outreach efforts of land grant universities as a separate initiative. In addition, the Department provided comments to improve the report’s technical accuracy, which we incorporated as appropriate. To determine the extent to which various groups of farmers have used risk management tools, we obtained national agricultural survey data from USDA's Agricultural Resource Management Study (Phase 3) for 1996-- formerly called the Farm Costs and Returns Survey. The 1996 survey, based on a statistical sample, provides the most current, comprehensive data on farmers’ use of risk management tools. About 7,300 farmers responded to the risk management questions. The 1997 study did not include specific questions about risk management strategies because it was designed to accommodate questions required by the 1997 agricultural census. USDA’s Economic Research Service, which recently published an analysis of the 1996 survey data, provided the statistical data for this report. To identify education programs and projects USDA has directed or initiated to prepare farmers for managing risk, we interviewed and obtained documentation from USDA headquarters and regional officials, as well as from regional risk management coordinators. To determine the groups or individuals who have participated in or been served by these programs, we interviewed and obtained documentation from cognizant USDA officials, academicians, and other private sector organizations involved in planning and carrying out risk management seminars and other educational and research efforts. We also interviewed representatives of farmer organizations about RMA’s approach. 
We performed our work from June 1998 through February 1999 in accordance with generally accepted government auditing standards. We did not, however, independently verify data obtained from USDA officials and documents. USDA's Agricultural Resource Management Study data are the only comprehensive data available that examine farmers’ use of risk management tools. We are sending copies of this report to Representative Larry Combest, Chairman, House Committee on Agriculture, and appropriate congressional committees. We are also sending copies to the Honorable Dan Glickman, the Secretary of Agriculture; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are Richard Cheston, Mary Kenney, Renee McGhee-Lenart, and Robert R. Seely, Jr. The following are brief explanations of the three risk management tools discussed in our report: Crop insurance: Protects participating farmers against the financial losses caused by events such as droughts, floods, hurricanes, and other natural disasters. Federal crop insurance offers farmers two primary types of insurance coverage. The first--called catastrophic insurance--provides protection against the extreme losses of crops for the payment of a $60 processing fee, whereas the second--called buyup insurance--provides protection against the more typical smaller losses of crops in exchange for a premium paid by the farmer. Forward contract: A cash market transaction in which two parties agree to buy or sell a commodity or asset under agreed-upon conditions. For example, a farmer or rancher agrees to sell, and a local grain elevator or packing plant agrees to buy, the commodity or livestock at a specific future time for an agreed-upon price or on the basis of an agreed-upon pricing mechanism.
With this agreement, a farmer locks in a final price for a commodity prior to harvest or slaughter. Hedging: The purchase or sale of a futures contract or an option on an organized exchange, such as the Chicago Board of Trade. A hedge is a temporary substitute for an intended subsequent transaction in the cash market to minimize the risk of an adverse price change. For example, corn farmers interested in locking in the sale price of all or part of their crops would sell corn futures as a temporary substitute for the cash market sale they intend to make at a later date. The sales transaction is carried out through a commodity broker. More specifically: Futures contract: An agreement for the purchase or sale of a standardized amount of a commodity, of standardized quality grades, during a specific month, on an organized exchange and subject to all terms and conditions included in the rules of that exchange. Option: The right, but not the obligation, to buy or sell a specified number of underlying futures contracts or a specified amount of a commodity, currency, index, or financial instrument at an agreed-upon price on or before a given future date. Other tools are also available to help farmers manage their risks. For a brief discussion of these tools, see “Risk Management: Farmers Sharpen Tools to Confront Business Risks,” Agricultural Outlook, March 1999. This appendix provides detailed information that we obtained from the U.S. Department of Agriculture’s (USDA) Economic Research Service concerning farmers’ use of risk management strategies. This information is based on the 1996 Agricultural Resource Management Study; about 7,300 farm operators responded to the risk management questions. Using the data the Service provided, we calculated confidence intervals. The Economic Research Service’s estimates and associated confidence intervals are presented in tables II.1 through II.12.
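The short hedge described above (a corn farmer selling futures against an intended later cash sale) can be sketched numerically. This is illustrative only: it ignores basis risk, brokerage commissions, and margin requirements, and the function name and prices are hypothetical, not drawn from the report.

```python
# A simplified short hedge for a corn farmer: sell futures now, buy them
# back when the crop is sold on the cash market. The futures gain or loss
# offsets the change in the cash price, leaving a roughly locked-in price.
# Illustrative only -- basis risk, commissions, and margin are ignored.

def net_sale_price(futures_sold_at, futures_bought_back_at, cash_price):
    """Price per bushel the hedged farmer effectively realizes."""
    futures_gain = futures_sold_at - futures_bought_back_at
    return cash_price + futures_gain

# Scenario 1: prices fall. Futures sold at $2.50 are bought back at $2.00
# when the cash price is also $2.00; the $0.50 futures gain restores the
# farmer to the $2.50 level locked in earlier.
falling = net_sale_price(2.50, 2.00, 2.00)

# Scenario 2: prices rise. The futures position loses $0.50, but the
# higher cash price leaves the farmer at the same locked-in level,
# forgoing the windfall in exchange for certainty.
rising = net_sale_price(2.50, 3.00, 3.00)
```

Both scenarios net to the same price, which is the point of the hedge: the farmer trades away upside to remove downside.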
[Tables II.1 through II.12 are not reproduced here. The tables present the Economic Research Service’s estimates, with associated confidence intervals, of the number of farmers who used each risk management tool, the value of those farmers’ agricultural sales (in millions of dollars), and the percentage of farmers who used each tool. Recoverable table titles include Table II.6, “Percentage of Farmers Who Used Each Risk Management Tool, by Principal Commodity, 1996”; Table II.7, “Number of Farmers Who Received Transition Payments and the Value of Their Agricultural Sales, by Principal Commodity, 1996”; and Table II.8, “Percentage of Farmers Who Used Each Risk Management Tool Among Those Who Received Transition Payments, by Principal Commodity, 1996.”

Notes accompanying the tables state that (1) a limited-resource farm is defined as one whose operator has household income under $20,000, farm assets under $150,000, and gross farm sales under $100,000; (2) other operator categories are defined by the operator’s primary occupation being retired or “other” (neither farming nor retired), or by the farm being operated by nonfamily corporations, cooperatives, or hired managers; (3) confidence interval calculations are not exact because of the small sample size or other characteristics of the sample results; and (4) USDA is required to protect the privacy of respondents by withholding data if it receives too few responses in a particular category.

The regional tables group the states as follows: Kansas, Nebraska, North Dakota, and South Dakota; Illinois, Indiana, Iowa, Missouri, and Ohio; Michigan, Minnesota, and Wisconsin; Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming; California, Oregon, and Washington; Kentucky, North Carolina, Tennessee, Virginia, and West Virginia; Alabama, Florida, Georgia, and South Carolina; Arkansas, Louisiana, and Mississippi; Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont; and Oklahoma and Texas.]
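The confidence intervals in tables II.1 through II.12 are survey-based. As a minimal sketch of the underlying idea, the interval for an estimated proportion can be approximated with the normal formula for a simple random sample. The Agricultural Resource Management Study uses a complex weighted design, so its published intervals are derived differently; the function name and the example numbers below are ours, for illustration only.

```python
# Approximate 95-percent confidence interval for a sample proportion,
# assuming a simple random sample and the normal approximation.
# The actual ARMS survey uses a complex weighted design; this is a
# simplified illustration of what an interval such as "42 percent,
# plus or minus about a point" means.
import math

def proportion_ci_95(p_hat, n):
    """Return (low, high) bounds of an approximate 95% CI for a proportion."""
    half_width = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# For example, an estimate that 42 percent of farmers used at least one
# risk management tool, from roughly 7,300 survey respondents:
low, high = proportion_ci_95(0.42, 7300)   # roughly 0.41 to 0.43
```

The large sample keeps the interval narrow; the tables' wider intervals for some categories reflect the much smaller subsamples within a typology, commodity, or region.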
Integrated Risk Management Education ($248,461) Grantee: South Central Technical College (North Mankato, Minnesota) The objective of this project is to develop an integrated risk management education curriculum and deliver it via educational programs for farmers in Minnesota, North Dakota, and South Dakota. The project will develop local educational teams of agricultural professionals. Understanding Farmer Risk Management Decision Making and Educational Needs ($243,388) Grantee: Mississippi State University The objective of this project is to develop the knowledge base to guide the design and implementation of effective risk management programs for agricultural producers. The project will identify the risk management objectives of diverse agricultural producers, investigate perceptions and understanding of risk management tools and strategies, examine the factors influencing choices of risk management strategy, and study how information and analysis influence producers’ perceptions and risk management choices. Risk Management Education With Focus on Producers and Lender Stakeholders ($250,000) Grantee: Pennsylvania State University The objective of this project is to help farmers and lenders manage risks and expand the understanding of risk management with a focus on farmer liquidity constraints. The project will develop and distribute a risk management curriculum to farmers, provide training and workshops, improve risk management financial expertise with workshop applications tailored to lenders, and use computers and telecommunications in risk management education. Managing Risks and Profits for the National Grain Industry: A Whole-Farm Approach ($72,180) Grantee: Ohio State University Extension Service The objective of this project is to create and deliver information and analytical tools to help grain farmers and agribusinesses manage their risks and profits for entire farms.
The project will create and revise risk management programs for whole-farm assessment, analyze profit levels and cash-flow risks, create a risk management center at Iowa State University, measure the risk tolerance of farm operators, and analyze the effectiveness of innovative information delivery systems. National Program for Integrated Dairy Risk Management Education and Research ($129,600) Grantee: Ohio State University The objective of this project is to focus public and private expertise on generating understandable, useful, and results-oriented knowledge and tools for the dairy industry. The project will develop a risk management educational curriculum for dairy producers, conduct symposia and regional training workshops, develop relevant computer software, and distribute information electronically. Optimal Grain Marketing: Integrated Approach to Balance Risks and Revenues ($232,800) Grantee: National Grain and Feed Foundation The objective of this project is to develop information on commonly available risk management tools coupled with an assessment of how such tools can be expected to perform. The project will reach 500 elevator operators and 20,000 farmer customers with a standardized methodology for evaluating new products, with an emphasis on the use of cash contracts. Agricultural Risk Management Education for Small and Socially Disadvantaged Farmers ($229,808) Grantee: Virginia State University Cooperative Extension Service The objective of this project is to create risk management educational materials and help socially disadvantaged and limited-resource farmers in Virginia, Maryland, Delaware, and North Carolina understand how to manage risk. This project will nurture a partnership between the private crop insurance industry and certain land-grant colleges in the four states, providing a model for similar efforts elsewhere. 
The project will also integrate risk management education into outreach, training, and technical assistance programs for small-scale farmers. Delivery of Agricultural Risk Management Education to Extension Officers and Small-Scale Farmers ($150,000) Grantee: Alcorn State University The objective of this project is to develop and implement risk management education for students, extension agents, small-scale farmers, limited-resource cooperatives, industry groups, and community-based organizations within 28 Mississippi counties. It will help small-scale farmers limit their exposure to marketing, financial, and legal risks. Georgia Agricultural Risk Management Education Program ($250,000) Grantee: Georgia Department of Education The objective of this project is to train producers and agribusinesses in risk management. The project will train young farmers to provide risk management assistance and provide instructional material and technology to increase managerial skills in agricultural operations. It will provide risk management training for minority, limited-resource farmers, and migrant workers in 134 Georgia counties and establish a certified risk management program for farm workers. Pacific Northwest Risk Management Education Project ($236,339) Grantee: Washington State University The objective of this project is to help Pacific Northwest cereal grain producers improve and apply risk management skills. The project will develop a research-based educational curriculum to increase understanding of risk management tools and integrate areas of risk management in a decision-making process for small grain producers. The project will deliver a producer-oriented risk management program to more than 1,000 grain producers.
Risk Management Research and Education for the Florida Citrus Industry ($19,172) Grantee: University of Florida Cooperative Extension Service The objective of this project is to develop appropriate risk management tools and strategies for citrus growers in 32 southern Florida counties. This project will help growers to understand their increased exposure to risk and to use risk management tools and strategies. Risk Management Education: A Risk-Management Club Approach ($150,000) Grantee: Kansas State University The objective of this project is to extend applied risk management information to agricultural producers and agricultural businesses in Kansas. The project will establish local risk management clubs and survey club members to determine risk perceptions, risk management skill levels, and educational needs. It will plan and conduct educational meetings, and carry out follow-up evaluations to measure the effectiveness of the risk management club approach. Leveraging Risk Management Education Using Crop Insurance Agents ($166,500) Grantee: National Crop Insurance Services The objective of this project is to broaden the understanding of risk management principles among more than 15,000 crop insurance agents nationwide. The project will train crop insurance agents in risk management and foster a partnership involving extension specialists, crop insurance agents, and socially disadvantaged and limited-resource farmers. The project will begin a conference series on risk management modeled after one in North Dakota. Economic Performance and Producer Use of Market Advisory Service Products ($250,000) Grantee: University of Illinois Cooperative Extension Service The objective of this project is to provide producers of corn, soybeans, and wheat with an objective, comprehensive evaluation of the economic performance of crop market advisory services. 
It will describe subscribers’ use of market advisory services, current risk management practices, and the educational needs of crop producers. Comprehensive Risk and Business Planning: A Case Plan Approach ($106,841) Grantee: University of Nebraska The objective of this project is to help producers and others in risk management consulting and educational efforts understand comprehensive business planning. Participants will learn to prepare business plans for each commodity to address various situations. The project will encourage producer groups to develop comprehensive risk management and business plans, and will create and maintain an online forum on risk and financial management. Develop AgRisk 2000 ($206,150) Grantee: University of Illinois Cooperative Extension Service The objective of this project is to develop and provide a comprehensive risk management tool that can be used by farmers, lenders, and service providers to evaluate pre-harvest risk management strategies. The project is targeted at producers located in the Corn Belt, Wheat Belt, Delta Region, and Southern States. Risk Management Education for Limited-Resource Latino Family Farmers in California's Central Coast ($85,000) Grantee: Association for Community Based Education The objective of this project is to improve the risk management skills of limited-resource Latino family farmers in California's central coast. The project will improve the farmers' capacity to understand the risk associated with their business, analyze risks and use information in problem-solving and decision-making, and incorporate risk management education into a small-farm production and management curriculum. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) efforts to educate farmers about risk management, focusing on: (1) the extent of farmers' use of risk management tools; and (2) educational programs and projects USDA has directed or initiated to prepare farmers for managing risks and determining the groups or individuals who have participated in or been served by these programs.
GAO noted that: (1) in 1996, about 42 percent of the nation's 2 million farmers used one or more risk management tools to limit potential income losses resulting from falling market prices or production failures, according to USDA estimates; (2) the use of these tools varied by farmers' level of sales and primary commodity (crop or livestock); (3) the use of crop insurance and forward contracts to reduce risk was more prevalent among farmers: (a) with at least $100,000 in annual sales of agricultural products than among those with annual sales under $100,000; and (b) whose primary crops were corn, wheat, and cotton than among those who primarily grew other crops; (4) of those farmers who received USDA transition payments and had sales of at least $100,000, at least 70 percent purchased crop insurance, at least 66 percent used forward contracts, and at least 34 percent engaged in hedging in 1996; (5) in fiscal year 1998, USDA obligated $5 million for four educational initiatives to prepare farmers for managing risk; (6) to develop government and private sector partnerships to foster risk management education, USDA sponsored a series of risk management conferences targeted at bankers, agricultural educators, crop insurance agents, commodity brokers, and grain elevator operators; (7) however, these initial conferences reached only a relatively small percentage of these target groups' members; (8) USDA intends to use partnerships with private-sector organizations to further expand its educational outreach activities; (9) USDA awarded 17 risk management education and research grants that are primarily designed to develop risk management education curriculums for training such diverse groups as farmers with less than $20,000 in annual income, farmers who grow specific crops in individual states or regions, crop insurance agents, and grain elevator operators across the country; (10) the expected completion dates for these projects range from the summer of 1999 through
the fall of 2001; (11) USDA provided funding to supplement land grant universities' risk management education efforts; and (12) USDA contracted with the University of Minnesota to develop an Internet library that, as of January 1999, contained over 700 risk management publications and other education materials for farmers.
The Howard M. Metzenbaum Multiethnic Placement Act of 1994 is one of several recent congressional initiatives to address concerns that children remain in foster care too long. As originally enacted, the law provided that the placement of children in foster or adoptive homes could not be denied or delayed solely because of the race, color, or national origin of the child or of the prospective foster or adoptive parents. However, the act expressly permitted consideration of the racial, ethnic, or cultural background of the child and the capacity of prospective parents to meet the child’s needs in these areas when making placement decisions—if such a consideration was one of a number of factors used to determine the best interests of a child. Furthermore, it required states to undertake efforts to recruit foster and adoptive families that reflect the racial and ethnic diversity of children in need of care. The 1996 amendment clarified that race, color, or national origin may be considered only in rare circumstances when making placement decisions. As amended, the act states that placement cannot be denied or delayed because of race, color, or national origin. Furthermore, the amendment removed language that allowed routine consideration of these factors in assessing both the best interests of the child and the capacity of prospective foster or adoptive parents to meet the needs of a child. An agency making a placement decision that uses race, color, or national origin would need to prove to the courts that the decision was justified by a compelling government interest and necessary to the accomplishment of a legitimate state purpose—in this case, the best interests of a child. Thus, under the law, the “best interests of a child” is defined on a narrow, case-specific basis, whereas child welfare agencies have historically assumed that same-race placements are in the best interests of all children.
The amendment also added an enforcement provision that penalizes states that violate the amended act. The penalties range from 2 percent to 5 percent of the federal title IV-E funds the state would have received, depending upon whether the violation is the first or a subsequent one in the fiscal year. HHS estimates that the maximum penalty for a state with a large foster care population could be as high as $10 million in one year. Any agency, private or public, is subject to the provisions of the amended act if it receives federal funds. Agencies that receive funds indirectly, as a subrecipient of another agency, must also comply with the act. Such funds include but are not limited to foster care funds for programs under title IV-E of the Social Security Act, block grant funds, and discretionary grants. Before placements can be made, a child welfare agency must have an available pool of prospective foster and adoptive parents. In order to become foster or adoptive parents in California, applicants undergo a process that requires them to open all aspects of their home and personal life to scrutiny. Typically, these prospective parents attend an orientation and are fingerprinted and interviewed. They then attend mandatory training that can last up to 10 weeks. If they meet the minimum qualifications—such as a background free from certain types of criminal convictions—their personal life is then reviewed in detail by caseworkers. This review is called a homestudy. According to one county, 20 percent or fewer applicants reach this milestone. A homestudy addresses the financial situation, current and previous relationships, and life experiences of the applicant. It also addresses the abilities and desires of the applicant to parent certain types of children—including children of particular races—and other issues.
Only when the homestudy process is completed, a written report of its findings approved by a child welfare agency, and the home found to meet safety standards is an applicant approved as a foster or adoptive parent. Caseworkers may then consider whether a prospective foster or adoptive parent would be an appropriate caregiver for a particular foster child. Social work practice uses the best interests of the child as its guiding principle in placement decisions. Caseworkers exercise professional judgment to balance the many factors that historically have been included when defining that principle. When considering what is in the best interests of the child, both physical and emotional well-being factors such as the safety, security, stability, nurturance, and permanence for the child are taken into consideration. In social work practice, the need for security and stability has included maintaining cultural heritage. The caseworker’s placement decision may also be affected by the administrative procedures used in an agency, the size of the pool of potential foster and adoptive parents, and, in some cases, individual caseworkers’ beliefs. An agency may have a centralized system for providing caseworkers with information on available homes, or it may be left to the caseworker to seek out an available foster home. Depending on the size of the pool of potential foster or adoptive parents and the needs of the child, a caseworker may have few or many homes to consider when making a placement decision. In any case, good casework practice includes making individualized, needs-based placements reflecting the best interests of a child. While the thrust of the act, as amended, is toward race-blind foster care and adoption placement decisions, other federal policies that guide placement decisions inherently tend toward placing children with parents of the same race. 
The Indian Child Welfare Act of 1978 grants Native American tribes exclusive jurisdiction over specific Native American child welfare issues. The Multiethnic Placement Act does not affect the application of tribal jurisdiction. Section 505 of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 amended section 471(a) of the Social Security Act to require states to consider giving priority to relatives of foster children when making placement decisions. Some states, such as California, require that caseworkers first try to place a child with relatives—known as kinship caregivers—before considering other types of placement. Consequently, the Multiethnic Placement Act affects about one-half of the California foster care caseload—those foster and adoptive children who are not under tribal jurisdiction or cared for by relatives. HHS, the state of California, and foster care and adoption agencies in the two counties we reviewed took actions to inform agencies and caseworkers about the passage of the 1994 act. HHS also provided technical assistance to states, including working with states to ensure that state laws were consistent with the act. California changed state law and regulations, and the two counties we reviewed also changed policies to conform to the new law. In addition, the two counties provided training on the act to caseworkers responsible for making placement decisions. HHS recognized the significance of the change in casework practice that the 1994 law would require of child welfare agencies by restricting the use of race in placement decisions. In response, HHS launched a major effort to provide policy guidance and technical assistance. The underpinning for HHS’ actions was coordination among its units that do not customarily issue joint policies—such as the Children’s Bureau and the Office for Civil Rights—to ensure that the agency provided consistent guidance. 
These two units have the responsibility within HHS for implementing the act. The Children’s Bureau administers programs of federal financial assistance to child welfare agencies and has responsibility for enforcing compliance with the laws authorizing that assistance. The Office for Civil Rights has the responsibility for enforcing compliance with civil rights laws. HHS officials told us that this internal coordination was also essential because the agency itself needed to undergo cultural changes. For example, in order to provide joint guidance, officials in the Office for Civil Rights needed to understand a social work perspective on the role of race in making placement decisions, and officials in the Children’s Bureau needed to understand civil rights principles in the context of their programs. Officials told us that they also notified agency grantees of the act and reviewed selected documents to see that they were consistent with it. Within 6 weeks of enactment of the new law, HHS issued a memorandum to states that summarized the act and provided its text. About 5 months later—and 6 months before the act went into effect—HHS issued its policy guidance. (See app. III for the text of the guidance.) The guidance, jointly issued by the Children’s Bureau and the Office for Civil Rights, was based on legal principles in title VI of the Civil Rights Act of 1964. The guidance introduced key legal concepts and identified certain illegal practices, such as the use of a time period during which a search would occur only for foster or adoptive parents of the same race as the foster child. Some states believed that HHS’ guidance regarding the use of race in placement decisions was more restrictive than provided for in the act. However, HHS maintained that its guidance accurately reflected the statutory and constitutional civil rights principles involved. 
To assist states in understanding what they must do to comply with the act, officials from the Children’s Bureau and the Office for Civil Rights jointly provided training to state officials and discussed the new law with state child welfare directors in at least 10 states. In addition, HHS contracted with a National Resource Center for a monograph on the new law; the monograph was released at the time the act went into effect and provided additional guidance for states’ use when implementing the act. Finally, HHS made other information and resources available to states from its contracted Resource Centers, including assistance to individual states. To ensure that state laws were consistent with the act, the Office for Civil Rights reviewed each state’s statutes, regulations, and policies. It then worked to initiate corrective action with states whose laws did not conform. The review found that the statutes, rules, or policies of 28 states and the District of Columbia did not conform. All of them completed changes to comply with the 1994 law. Furthermore, as part of its ongoing efforts to determine whether agency policies and caseworker actions comply with civil rights law, including the act, the Office for Civil Rights continued to investigate complaints of discrimination that were filed with the agency. Past complaints have included, for example, charges brought by foster parents who were not allowed to adopt a child who had been in their care, allegedly because the child was of a different race than the foster parents. Implementation of the 1994 act required changes to law and regulations at the state level and to policies at the county level. The state of California began its implementation efforts in August 1995 by issuing an informational memorandum to alert counties to the act before it went into effect.
In addition, state officials began a collaborative effort with an association of county child welfare officials to devise an implementation strategy. The state also began the process of amending its state law to comply with the federal statute. When amended, the state law eliminated a discriminatory requirement that same-race placements be sought for 90 days before transracial placements could be made. The state also revised its adoption regulations after the state law was passed. State officials told us that it was not necessary to revise the foster care regulations because they were already consistent with the act. Although the change in state law eliminated the requirement to seek same-race placements, that provision had not previously been included in the foster care regulations. In addition, state officials believe that the act focused primarily on adoption issues. Thus, adoption regulations required revision, whereas foster care regulations did not. In the counties we reviewed, one county finished revision of its foster care and adoption policies in February 1996. The other county issued a memorandum to its staff in January 1996 to alert them to the new law. However, that county has not formally revised its foster care or adoption policies in over 20 years, according to one county official. The state and counties planned training on the 1994 law, but only the counties actually conducted any. The state planned to roll out training but suspended it when the act was amended in August 1996. State officials told us that they needed to revise the training to reflect the amendment. The two counties, however, developed their own training programs by relying on information they obtained from the county child welfare association. In both counties, supervisors in the adoption unit took the lead in developing and presenting one-time training sessions to foster care and adoption caseworkers.
Most, if not all, foster care and adoption caseworkers in the two counties received training. Both counties also incorporated training on the 1994 act into their curriculums for new caseworkers. Following amendment of the act, HHS was slower to revise its policy guidance and provided less technical assistance to states than it did after the passage of the 1994 act. While California informed its counties of the change in federal law, it did not do so until 3 months after HHS issued its policy guidance on the amended act. Although HHS did not repeat its technical assistance effort to assist states in understanding the amended law, the state and counties we reviewed provided some training on the amended act to staff. HHS did not notify states of the change in the law until 3 months after its passage and did not issue policy guidance on the amendment until 6 months after the notification. (See app. IV for the text of the guidance.) As was the case with the policy guidance on the original act, HHS’ revised guidance was issued jointly by the Children’s Bureau and the Office for Civil Rights. The policy guidance noted changes in the language of the law, such as the elimination of the provision that explicitly permitted race to be considered as one of a number of factors. The guidance also described the penalties for violating the amended act and emphasized civil rights principles and key legal concepts that were included in the earlier guidance on the original act. The new guidance expressed HHS’ view that the amended act was consistent with the constitutional and civil rights principles that HHS used in preparing its original guidance. However, it was not until May 1998, when we submitted a set of questions based on concerns that county officials and caseworkers raised with us, that HHS issued guidance answering practical questions about changes in social work practice needed to make casework consistent with the amended act. (See app. 
V for a list of the questions and answers.) The guidance on social work practice issues clarified, for example, that public agencies cannot use race to differentiate between otherwise acceptable foster placements even if such a consideration does not delay or deny a child’s placement. The agency did not repeat the joint outreach and training to state officials that it provided for the 1994 act. While the technical assistance provided by the Resource Centers is ongoing, the monograph on the act has not yet been updated to reflect the amendment. The Office for Civil Rights took several actions to ensure that state actions were consistent with the amended act. It addressed case-by-case complaints of violations and, in 1997, began reviews in selected locations. Officials told us that another comprehensive review of state statutes was not necessary because they would work with states on a case-by-case basis. In addition, officials explored the use of the Adoption and Foster Care Analysis and Reporting System (AFCARS) to monitor foster care and adoption placements. HHS officials who work with AFCARS confirmed that neither the historical data needed to identify race-related placement patterns that may have existed before the 1994 act’s effective date nor the current information on most states’ foster children, including California’s, was complete enough to determine whether placement decisions included use of race-based criteria. Passage of the amendment in 1996 again required changes in state law, regulations, and policy. A bill was introduced in the California legislature in February 1998 to make California state law consistent with the federal amendment. The bill originally contained language to delete a nonconforming provision in state law that explicitly allows consideration of race as one of a number of factors in a placement decision. However, state officials told us the bill has been stalled in the legislative process and its passage is uncertain.
Although federal law takes precedence over state law when such situations arise, an HHS Office for Civil Rights official told us that HHS encourages states to pass conforming legislation. Furthermore, state officials told us that state regulations on adoption and foster care placement cannot be changed until this bill becomes law. Therefore, California regulations continue to reflect only the 1994 law. In September 1997, the state notified its counties of the amendment to the act. Although counties can change their own policies without state actions, in the two counties we visited, only one has begun that process: in that county, the adoption unit has begun to update its regulations, but the foster care unit has not done so. Despite the lack of a change in state law, the state resumed its training activities in February 1998, when it offered its first training seminar on the amended act. A limited number of county workers in the southern portion of the state attended that seminar, which included 3 hours of training. The state held two additional training sessions in the state and plans to include training on the amended act at two other seminars. To date, the state has targeted the training to licensing and recruitment staff—who work with potential foster and adoptive parents—and not to caseworkers or supervisors who place children in foster and adoptive homes. But it is these latter staff who are most directly responsible for placement decisions and thus for complying with the amended act’s provisions. Finally, one of the two counties we visited is now developing written training material to reflect the 1996 amendment and has provided formal training on it to some workers. The other county charged its supervisors with training their staff one-on-one. Officials at all levels of government face a diverse set of challenges as they continue to implement the amended act. 
Major issues that remain include changing caseworkers’ and practitioners’ beliefs about the importance of race-based placement decisions, developing a shared understanding at all levels of government about allowable placement practices, and developing an effective federal compliance monitoring system. The belief that race or cultural heritage is central to a child’s best interests when making a placement is so inherent in social work theory and practice that a policy statement of the National Association of Social Workers still reflects this tenet, despite changes in the federal law. Matching the race of a child and parent in foster care placements and public agency adoptions was customary and required in many areas for the last 20 years. The practice was based on the belief that children who are removed from their homes will adapt to their changed circumstances more successfully if they resemble their foster or adoptive families and if they maintain ties to their cultural heritage. In this context, the children’s needs were often considered more compelling than the rights of adults to foster or adopt children. One state official made this point directly, stating that her purpose is to find families for children, not children for prospective parents. Officials’ and caseworkers’ personal acceptance of the value of the act and the 1996 amendment varies. Some told us that they welcomed the removal of routine race-matching from the child welfare definition of best interests of a child and from placement decisions. Those who held this belief said the act and the 1996 amendment made placement decisions easier. Others spoke of the need for children—particularly minority children—always to be placed in homes that will support a child’s racial identity. For those individuals, that meant a home with same-race parents.
Furthermore, some who value the inclusion of race in placement decisions told us that they do not believe that the past use of race in the decision-making process delayed or denied placements for children. State program officials in California are struggling to understand the amended act in the context of casework practice issues. They are waiting for the HHS Children’s Bureau or the federal National Resource Centers to assist them in making the necessary changes in day-to-day casework practices. In particular, caseworkers and attorneys define what constitutes a child’s best interests differently, which makes applying the act and the amendment to casework practice difficult. State officials characterized the federal policy guidance as “too legalistic.” Furthermore, although officials from the Office for Civil Rights have provided training to state officials and continue to be available to conduct training, these state officials do not consider Office for Civil Rights officials capable of providing the desired guidance on how to conduct casework practice consistent with the amended act; as a result, state officials are hesitant to request such guidance from the Office for Civil Rights. The officials in the two counties we visited said their implementation efforts were hampered by the lack of guidance and information available to them from federal and state sources. The questions on casework practice that we submitted to HHS arose in the course of our discussions with county officials and caseworkers. County officials stressed that they began their implementation efforts with little federal and state technical assistance to help them understand the implications of the act for making foster care and adoption placement decisions; they relied instead on an association of county child welfare officials to obtain the information they needed.
Despite the counties’ efforts to independently obtain information to proceed with implementation, documents we reviewed in both counties reflected a lack of understanding of the provisions of the amended act. For example, in one county, a draft informational document that was being prepared to inform caseworkers about the amended act included permission for caseworkers to consider the ethnic background of a child as one of a number of factors in a placement decision, even though the 1996 amendment removed similar wording from federal law. In addition, while the caseworkers we interviewed were aware that the act and the 1996 amendment do not allow denial or delay of placements related to race, color, or national origin, some caseworkers were unsure how and when they are allowed to consider such factors in making placement decisions. The need for clear guidance on practical casework issues was demonstrated in a state-sponsored training session we attended in February 1998. The training consisted of presentations from four panelists: an attorney from the HHS Office for Civil Rights, an attorney from a National Resource Center, and two representatives from private agencies that recruit minority foster and adoptive parents for the state of California. While the panelists’ presentations noted that placements could not be denied or delayed for race-based reasons, they offered contradictory views of permissible activities under the law. For example, the panelists were asked if race could be used to choose a placement when two available families are equally suitable to meet the needs of a child but one family is of the same race as the child. The attorney from the Office for Civil Rights advised that race could not be used as the determining factor in that example, whereas the attorney from the Resource Center said that a case could be made for considering race in that circumstance. The state has since modified the training session to provide a more consistent presentation. 
However, the paucity of practical guidance contributes to continued uncertainty about allowable actions under the amended act. For example, although the act and the 1996 amendment apply equally to foster and adoption placements, some state and county officials told us that they believe it applies primarily to adoption placements. Federal officials will need to seek new ways to identify appropriate data and documentation that will allow them to effectively determine whether placement decisions conform to the provisions of the amended act. Federal AFCARS information is the primary source of federal administrative data about foster care and adoption. It allows HHS to perform research on and evaluate state foster care and adoption programs, and it assists HHS in targeting technical assistance efforts, among other uses. However, AFCARS data are not sufficient to determine placement patterns related to race that may have existed before the 1994 act’s effective date. Our examination of AFCARS indicated that the future use of this database for monitoring changes in placement patterns directly related to the amended act is unlikely. For example, the database lacks sufficient information on the racial identity of foster and adoptive children and their foster parents to conduct the type of detailed analysis of foster care and adoption patterns that would likely be needed to identify discriminatory racial patterns. Analysis of any administrative data will be hampered by difficulties in interpreting the results. Data showing a change in the percentage of same-race placements would not, alone, indicate whether the amended act was effective in restricting race-based placement practices. For example, an increase in the percentage of same-race placements for black foster children could indicate that the amended act is not being followed. 
Conversely, the same increase could mean that the amended act is being followed but more black foster and adoptive parents are available to care for children because of successful recruitment efforts. If relevant information on changes in the pool of foster and adoptive parents is not available for analysis—as is the case with AFCARS data—then it would not be possible to rule out the success of recruitment efforts as a contributor to an increase in same-race placements. While case files are another source of information about placement decisions, and such files are used in one type of review periodically performed by HHS, those files may provide little documentation to assist in determining whether placement decisions are consistent with the amended act’s restrictions on the use of race-based factors. In the two counties we visited, the processes caseworkers described for making placement decisions generally lacked a provision for documenting the factors considered, the placement options available, or the reason a particular placement was chosen. Our review of a very limited number of case files in one county, and our experience reading case files for other foster care studies, confirmed that it is unlikely that the content of placement decisions can be reconstructed from the case files. The Multiethnic Placement Act, as amended, has been difficult for agencies to implement. Successful implementation requires changing state laws, policies, and regulations; organizational and personal beliefs in the value of race as a significant factor in making foster and adoptive placements; and casework practices so that they incorporate civil rights principles into the definition of a child’s best interests. The federal and state agencies we reviewed began the administrative portion of this task immediately after enactment in 1994. But that prompt action was not sustained after the act was amended.
Furthermore, our discussions with California state officials, and our observation of state-sponsored training sessions, suggest that federal policy guidance was not sufficiently practice-oriented to allow caseworkers to understand how to apply the law to the placement decisions they make. Because foster care and adoption placement decisions are largely dependent upon the actions of individual caseworkers, their willingness to accept a redefinition of what is in the best interests of a child is critical to the successful implementation of this legislation. While some caseworkers welcomed the new law, others frankly discussed with us their concerns about eliminating almost all racial considerations from placement decisions. HHS and the state of California face the challenge of better explaining to practitioners how to integrate social work and legal perspectives on the role of race in making decisions that are in a child’s best interests. Because these perspectives are not compatible, tension between them is inevitable. Without a resolution to that tension, full implementation of the amended act may be elusive. We provided HHS, the state of California, and the two counties in California that we reviewed with the opportunity to comment on a draft of this report. We received comments from HHS, the state of California, and San Diego County. In commenting on a draft of the report, HHS expanded on two topics addressed in the report: technical assistance, including training; and monitoring for compliance with the act and its amendment. In discussing technical assistance, HHS reiterated its implementation efforts as described in our report, provided information on related actions it has taken in states other than California, and noted that it expects to publish the updated monograph on the amended act in the fall of 1998.
In commenting on the challenge of developing a compliance monitoring system, HHS described its pilot efforts to integrate monitoring of compliance with the amended act into its overall monitoring of child welfare outcomes and noted that it expects to publish a notice of its proposed monitoring processes in the Federal Register in October 1998. We agree that an integrated approach to compliance monitoring of child welfare issues could be an effective one. However, because we have not seen HHS’ proposal, we cannot assess whether the proposed monitoring will be sufficient to ensure that foster care and adoption placements are consistent with the requirements of the amended act. In this regard, HHS agreed that AFCARS data have limited utility in tracking state compliance with the amended act. HHS also made technical comments, which we incorporated where appropriate. The full text of HHS’ comments is contained in appendix VI. The state of California and San Diego County provided technical comments, which we incorporated where appropriate. As agreed with your office, we will make no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Health and Human Services and program officials in California. We will also make copies available to others on request. Please contact me on (202) 512-7215 if you or your staff have any questions. Other GAO contacts and staff acknowledgments are listed in appendix VII. In addition to those named above, Patricia Elston led the federal fieldwork and coauthored the draft, and Anndrea Ewertsen led the California fieldwork and coauthored the draft.
Pursuant to a congressional request, GAO provided information on the implementation of the Multiethnic Placement Act of 1994, as amended, at the federal level and in states with large and ethnically diverse foster care caseloads, focusing on: (1) efforts by federal, state, and local agencies to implement the 1994 act in the areas of assistance; (2) efforts by federal, state, and local agencies in these same areas to implement the 1996 amendment to the act; and (3) the challenges all levels of government face to change placement practices.
GAO noted that: (1) the Department of Health and Human Services (HHS) and California initiated collaborative, multipronged efforts to inform agencies and caseworkers about the Multiethnic Placement Act of 1994; (2) HHS program officials recognized that the act requires child welfare agencies to undergo a historic change in how foster care and adoption placement decisions are made by limiting the use of race as a factor; (3) within 6 weeks of the act's passage, HHS took the first step in a comprehensive approach to implementation that involved issuing policy guidance and providing technical assistance; (4) some states believed that HHS' policy was more restrictive regarding the use of race in placement decisions than provided for in the act; (5) after enactment of the 1996 amendment, HHS did not update its policy guidance for 9 months, and it has done little to address casework practice issues; (6) California has yet to conform its state laws and regulations to the amended act; (7) the state provided training to some county staff, but the training was not targeted toward staff who have primary responsibility for placing children in foster or adoptive homes; (8) both counties have provided some training to caseworkers on the 1996 amendment, either through formal training sessions or one-on-one training by supervisors, however, only one county has begun to revise its policies; (9) changing long-standing social work practices, translating legal principles into practical advice for caseworkers, and developing compliance monitoring systems are among the challenges remaining for officials at all levels of government in changing placement decisionmaking; (10) the implementation of this amended act predominantly relies on the understanding and willingness of caseworkers to eliminate race from the placement decisions they make; (11) while agency officials and caseworkers understand that this legislation prohibits them from delaying or denying placements on the basis of race, 
not all believe that eliminating race will result in placements that are in the best interests of children; (12) state and local officials and caseworkers demonstrated lingering confusion about allowable actions under the law; (13) the state training session GAO attended on the amended act showed that neither the state nor HHS has provided clear guidance to caseworkers to apply the law to casework practice; and (14) federal efforts to determine whether placement decisions are consistent with the amended act's restrictions on the use of race-based factors will be hampered by difficulties in identifying data that are complete and sufficient.
The Government Performance and Results Act (GPRA) is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. Since federal agencies began implementing GPRA, they have provided new and valuable information on their plans, goals, and strategies. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. VA’s mission reflects the nation’s historic commitment to care for veterans, their families, and their survivors. VA administers a variety of programs, including one of the world’s largest health care systems. VA estimates that, in fiscal year 2000, it spent about $42 billion—more than 80 percent of its total budget—to provide health care services to 3.6 million veterans and to pay disability compensation and pensions to over 3.2 million veterans and their families and survivors.
This section discusses our analysis of VA’s performance in achieving its selected key outcomes and the strategies VA has in place, including strategic human capital management and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which VA provided assurance that the performance information it is reporting is credible. Overall, VA reported making good progress toward achieving its key outcome of providing quality health care to veterans at a reasonable cost to the government in fiscal year 2000. For example, VA reported that its average cost per patient was 2 percent less than last year. VA also reported that performance improved for most of its key measures compared to last year’s performance. However, VA reported a decline in performance for two key measures (see table 1). VA’s performance report, in general, demonstrated progress toward achieving its key performance goals. The key goals—the goals VA’s senior management consider most important—show how well VA is doing in providing quality health care to veterans at a reasonable cost. For each key measure, VA provided a discussion of the extent to which it met its fiscal year 2000 goal. In addition, VA provided baseline and performance data, where available, to show the extent to which performance has changed over several fiscal years. For most of the key goals that it did not achieve, VA explained why. Also, VA provided supplementary information to show that the performance deficiency was either not significant or that VA’s performance improved in fiscal year 2000. For example, VA reported that while it did not meet its patient satisfaction goals, its performance on other patient satisfaction surveys showed that VA patients were more satisfied than patients of private-sector health care providers.
Also, VA noted that the differences between its planned and actual performance on the patient satisfaction measures were not significant, because they were within the margin of error for its annual patient satisfaction survey. VA could have done a better job, however, of explaining in its performance report why some key performance goals were not met. For example, VA did not explain why it did not meet its goal to have at least 75 percent of patients with scheduled appointments see a provider within 20 minutes of their scheduled appointment time. It provided a partial explanation of why it did not obtain at least 3.7 percent of its medical care funding from alternative revenue streams; VA’s performance plan, however, did cite factors contributing to the decline in collections. For example, the plan noted that more veterans are enrolling in managed care organizations from which VA cannot typically collect because it is not a participating provider. In addition, VA’s performance report included two key health care performance measures that VA has not yet quantified. These measures, based in part on previous GAO reports and recommendations, are for the percentages of patients who are able to obtain initial appointments within 30 days for primary or specialty care. Also, VA is in the process of improving its ability to collect the necessary data to measure its performance. It plans to use fiscal year 2001 data as its baseline for setting future annual performance goals. VA’s performance report shows VA’s continuing efforts to address deficiencies in the quality of its performance data. For most—though not all—of its key health care performance measures, VA identified the sources of its performance data and how data quality is assured. VA’s data quality initiatives include, for example, hiring a full-time Data Quality Coordinator and revising coding procedures and training to help improve the collection of clinical data.
Also, due in part to the IG's recommendations, VA implemented edit checks of its system data to improve the quality of the data used to report the number of unique VA patients. In addition to health care performance data, VA also needs quality financial data. VA received an unqualified opinion on its fiscal year 2000 financial audit report. However, VA continues to experience problems with its financial management systems, including information security and integrated financial management systems weaknesses. Further, VA is unable to accumulate cost data at the activity level. Reliable cost information is needed for VA to assess its operating performance. VA's performance report generally provides clear and reasonable descriptions of its strategies for correcting performance deficiencies and improving future performance on its key performance measures, even in those areas where VA met its fiscal year 2000 performance goal. For example, while VA met its goal for the Chronic Disease Care Index, which is a significant quality indicator, it provided strategies to continue to improve its performance in the future, including initiatives to improve patient safety and provide clinical training to medical staff. VA also identified numerous strategies and initiatives for improving performance in areas where performance goals were not met, such as enhancing provider/patient communications, expanding access to VA health care through increased use of community-based outpatient clinics and short-term contracts with non-VA specialists, expanding the use of clinical guidelines, and educating patients and staff on prevention programs. VA identified human capital strategies to improve patient access and appointment timeliness. For example, VA plans to hire additional clinical staff to improve access and appointment timeliness and to add specialists to its primary care teams to provide veterans with a greater variety of services, even at some community-based clinics. 
VA's discussions included some information technology strategies regarding health care. For example, VA is integrating telemedicine technologies into ambulatory care delivery systems to increase patient access and the efficiency of health care delivery. VA noted that its facilities are equipped with compatible video-conferencing technology for facilitating geographically remote clinical consultations and patient examinations. VA reported making little progress toward achieving its key outcome of processing compensation and pension benefit claims timely and accurately in fiscal year 2000. Although VA did not meet any of its fiscal year 2000 key performance goals for this outcome, VA reported some improvement in the time required to resolve appeals of claims. However, VA reported that performance declined from fiscal year 1999 to 2000 with respect to the other key measures (see table 2). In its performance report, VA provided a clear discussion of the extent to which it met each of its key performance goals for fiscal year 2000. In addition, VA provided baseline and performance data, where available, to show its performance over several fiscal years. In the discussion of its performance, VA noted that it expected timeliness to worsen in fiscal year 2001 because of the effect additional legislative and regulatory requirements will likely have on claims-processing time. For two of the key goals, VA explained why the goals were not achieved. Some of the reasons VA cited for the shortfall in claims-processing timeliness and/or the national accuracy rate included VA (1) underestimating how long it would take to realize the positive impact of initiatives such as increased staffing, improved quality reviews, and training directed at specific deficiencies; (2) using a more rigorous quality review system than in the past; and (3) having to address complex regulatory changes affecting the manner in which claims are processed. 
Based, in part, on previous GAO reports and recommendations on claims processing, VA is strengthening its system for reviewing claims accuracy—the Statistical Technical Accuracy Review—by collecting more specific data on deficiencies concerning incorrect decisions in those regional offices that have accuracy problems. In addition, VA is evaluating and disseminating information on regional office practices that hold promise for improving performance nationwide, according to VA officials. While VA explained why it did not achieve its accuracy and timeliness goals for disability rating-related claims, it did not explain why it did not meet the timeliness goal for appeals resolution. VA noted improvement in its appeals resolution timeliness although it did not meet its established goal. VA's performance report provides increased assurance that its performance information is credible. For example, VA is conducting independent reviews of a sample of claims to assess accuracy rates and weekly assessments of transactions to identify questionable timeliness data from regional offices. VA provided clear and reasonable discussions of strategies, including information technology initiatives, for improving future performance on key claims-processing goals. For example, VA is rewriting claims-processing manuals in an easy-to-understand format to enable employees to find information quickly. In addition, VA has implemented the Veterans On-Line APPlications (VONAPP) system, which allows veterans to electronically complete and submit applications for compensation, pension, and other benefits. However, we recently testified that VONAPP faces potential security vulnerabilities as a result of weaknesses in general support systems and operating subsystems access controls that affect the department's overall computer operation. 
Also, VA is developing the Veterans Service Network's Compensation and Pension Benefits Replacement System, which is expected to provide greater access to claimant information through a state-of-the-art automated environment. We testified that this project has suffered from numerous problems and schedule delays, which threaten the overall success of the initiative. In addition, VA is piloting, testing, or enhancing the operational capability of (1) the Compensation and Pension Record Interchange to provide enhanced accessibility to VHA records, (2) the Personnel Information Exchange System to allow for electronic exchange of military personnel records with DOD, and (3) the Virtual VA to create a work environment for electronic claims processing. VA's performance report discusses human capital strategies for dealing with the fact that one-fourth of its claims-processing staff will become eligible to retire over the next 5 years. VA's succession planning strategy includes recruiting new staff, redirecting staff from other offices, and providing training. VA hired over 450 new claims-processing staff during fiscal year 2000. In addition, VA plans to redirect 200 existing staff to claims-processing positions and hire nearly 250 new staff during fiscal year 2001. Although VA identified human capital strategies for hiring, redirecting, and training staff, its performance plan does not identify performance goals and measures that are linked to the planned program improvements. VA is continuing to develop computer-assisted training modules and other materials on claims processing under the Training and Performance Support System to train the large wave of new hires and current employees who will replace prospective retirees. VA reported making good progress toward achieving its key outcome of assisting disabled veterans in acquiring and maintaining suitable employment. For the second year in a row, VA reported exceeding its key performance goal for this outcome. 
VA reported that 65 percent of the veterans who exited the VR&E program returned to work in fiscal year 2000—more than its goal of 60 percent. Also, VA reported its performance improved by 12 percentage points over its fiscal year 1999 performance. VA's performance report clearly explains the initiatives it believes were responsible for exceeding the goal. For example, VA refocused the program to make obtaining suitable employment the primary goal, improved the assessment of veterans' work skills transferable to the civilian labor market, and increased the number of placements in suitable jobs. To improve the credibility of VR&E's performance information, VA continues to have regional office staff regularly review a sample of cases for quality and VA headquarters staff evaluate data for validity and reliability. VA provides reasonable and clear discussions of strategies to continue to place veterans in suitable employment. As part of these strategies, VA is changing the skill mix of its staff from vocational rehabilitation specialists to employment specialists, and from counseling psychologists to vocational rehabilitation counselors. In addition, VA has established a Blue Ribbon Panel to review the program's policies and practices and evaluate them against the best practices of other organizations. Although VA is responsible for VR&E, it partners with the Department of Labor's (DOL) Veterans' Employment and Training Service (VETS), which also helps veterans obtain training and employment. VA conducts joint training with DOL for VETS-funded state and local training and job placement staff. We have reported that VETS does not have clear goals and strategies for targeting veterans for employment assistance. We have made several recommendations to improve VETS, including that DOL clearly define the program's target populations so that staff know where to place their priorities. 
For the selected key outcomes, this section describes improvements or remaining weaknesses in VA's (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. It also discusses the degree to which VA's fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, the Inspectors General, and others. VA made improvements to its performance report. For example, VA improved its discussion of major management challenges by adding a section describing its efforts to address the challenges identified by GAO and VA's IG. For each management challenge it identified, the Office of Inspector General described the challenge and identified recommendations that VA has, and has not, implemented. For example, regarding inappropriate benefit payments, the IG noted that VA has implemented its recommendation to enter into a matching agreement with the Social Security Administration for prison records. However, the IG noted that VA has not yet implemented recommendations to identify and adjust the benefits of incarcerated veterans and dependents, recover overpayments to veterans who have been released from prison, and establish a method to ensure that regional offices properly adjust benefits for incarcerated veterans and dependents in a timely manner. Another improvement was VA's reporting, for the first time, of obligations by strategic goal. In addition, VA's fiscal year 2000 performance report continues to provide reasonable discussions of its (1) progress in meeting key performance goals, (2) strategies for improving performance in the future, and (3) efforts to improve the quality of performance data. We discussed these items previously under the key outcomes. 
Finally, VA provided a clearer picture of compensation and pension claims-processing timeliness in its fiscal year 2000 performance report than in last year's report. Although VA continued to report the combined performance of compensation and pension, in this year's report VA also presented the performance data separately for each. VA made several improvements to its fiscal year 2002 performance plan. For example, VA has identified additional key measures it believes are important to assessing how well it is meeting the needs of veterans and their families. These include additional measures of patient safety, health care cost-effectiveness, and customer satisfaction with VA services. In general, VA continues to provide adequate discussions of strategies for improving future performance and to update performance goals based on past performance. Also, VA provides additional information on (1) costs associated with meeting strategic goals and objectives, (2) efforts to improve data quality, and (3) ways to address major management challenges. VA's fiscal year 2002 performance plan represents a significant change in the way VA measures its performance toward achieving its key outcome of providing quality health care to veterans at a reasonable cost. Starting with fiscal year 2001, VA will no longer have key measures for the percentage increase in the number of unique patients; the percentage decrease in per-patient costs; the decrease in the percentage of health care funding from alternative revenue streams; and the percentage of medical residents trained in primary care. VA is adding several new key performance measures to better assess progress toward achieving this outcome. For example, VA has designated the following as key measures: A measure related to patient safety—the percentage of root-cause analyses not correctly completed within 45 days of an adverse patient event. 
This is a quality measure, based on VA's system for continuously improving patient safety at its medical facilities. When medical errors occur, VA medical staffs are required to prepare root-cause analyses to identify the reasons for these errors. This information, in turn, can be used to identify corrective actions. Two indexes of overall VA medical care, which include elements of quality, patient access, customer satisfaction, and cost. According to VA, these measures represent more sophisticated ways to measure the efficiency of its medical care than the former key measure of cost per patient, because they measure not just efficiency in providing care, but efficiency in providing high-quality and accessible care that meets patients' needs. These indexes include six other key goals in the fiscal year 2002 performance plan—the revised Chronic Disease Care and Prevention Indexes; the three appointment timeliness measures; and the inpatient and outpatient customer satisfaction measures—plus per-patient costs. VA reported that, in general, it makes changes to key measures (1) when actual performance has met or exceeded original strategic goals, (2) when further performance improvements are unlikely or unreasonable, (3) to ensure that measures are consistent with its strategic plan, and (4) when it develops better ways to measure its performance. VA continues to provide clear and reasonable discussions of strategies for improving performance and continues to revise its performance goals based on past performance. As previously discussed for each key outcome, VA provided strategies for how it expects to achieve its goals. VA described additional strategies in its plan that do not appear in the performance report. For example, VA will expand its initiative to process claims from active-duty service members awaiting discharge from military service. 
In addition, VA adjusted performance goals based on its fiscal year 2000 performance as well as external factors, such as new duty-to-assist legislation. For example, VA increased its fiscal year 2001 timeliness goal for processing disability rating-related claims from 142 days to 202 days and established a goal of 273 days for fiscal year 2002. VA also revised its fiscal year 2001 claims-processing accuracy goal from 85 percent to 72 percent and established a goal of 75 percent for fiscal year 2002. In its fiscal year 2002 performance plan, VA provides additional information on the estimated costs of meeting its fiscal year 2002 performance goals. The fiscal year 2001 performance plan included VA's estimates of obligations needed to meet each of its strategic goals. VA's fiscal year 2002 performance plan also provided estimated obligations by strategic objective. Because each of VA's four main strategic goals covers multiple objectives related to different VA programs, presenting cost data by objective provides a clearer linkage of funding to the achievement of performance goals. Meanwhile, VA continues to work with the Office of Management and Budget (OMB) on a plan to restructure its budget accounts, so VA's budget presentations to the Congress can better link proposed funding with specific levels of performance. VA's fiscal year 2002 performance plan includes a more detailed discussion of its efforts to improve the quality of its performance data. For example, the Veterans Health Administration identifies in more detail its initiatives to improve the quality of the patient care data developed by its health care facilities. It has initiatives to improve the quality of coding at facilities to ensure that the care provided to veterans is being correctly recorded. The Veterans Benefits Administration also provided more detailed information on its data quality efforts. 
It created a Data Management Office to work with its program offices to identify strategies and initiatives to address the collection, processing, and storage of quality data. In the fiscal year 2002 performance plan, VA restructured its discussion of major management challenges to mirror the challenges identified by GAO in our January 2001 Performance and Accountability Series report on VA and the challenges identified by VA's IG in November 2000. For each of these challenges, VA provided information on the nature of the challenge and the status of its efforts to resolve it. However, VA could have provided more specific discussions of its plans to address its major management challenges. For example, in its discussion of the challenges we identified for VA's health care program, VA generally restated findings from our January 2001 report to describe its current status and future plans. VA addressed all six of the major management challenges identified by GAO, and generally described goals or actions that VA is taking or plans to take in response to them. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. VA has established strategies for achieving strategic goals and objectives for human capital management and information security. VA has established a performance goal and identified milestones for implementing certain strategies to address information security. However, VA has not identified performance goals and measures for human capital management linked to achieving programmatic results. In addition, GAO has identified four major management challenges facing VA. We found that VA's performance report discussed the agency's progress in resolving all of its challenges. 
Of the six major management challenges identified by GAO, its performance plan had (1) goals and measures that were directly related to four of the challenges, (2) goals and measures that were indirectly related to one of the challenges, and (3) no goals and measures related to one of the challenges, although it discussed strategies to address it. Appendix I provides information on how VA addressed these challenges. As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of VA's operations and programs, GAO's identification of best practices concerning performance planning and reporting, and our observations on VA's other GPRA-related efforts. We also discussed our review with agency officials in the Office of the Assistant Secretary for Management and with the VA Office of Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and generally reflect the outcomes for all of VA's programs or activities. The major management challenges confronting VA, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 performance and accountability series and high-risk update and were identified by VA's IG in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of VA's performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. 
VA generally agrees with the information presented in our report. However, VA was concerned that our report suggested that the Department's performance plan was inadequate because, in some cases, it does not have performance goals and measures linked to each of the major management challenges contained in appendix I. For example, VA cited our statement that it has not identified performance goals and measures for human capital management linked to achieving programmatic results. VA believes that it is not necessary to develop and track quantifiable performance goals and measures for management challenges that are not strategic in nature. In these cases, VA believes that it is appropriate and sufficient to have a mitigation plan that includes milestones for completing remedial actions. (App. II contains VA's written comments.) As we reported, VA's performance plan identified actions for resolving each of its major management challenges, even when quantifiable goals and measures were not included. However, OMB Circular No. A-11 states, “Performance goals for management problems should be included in the annual plan, particularly for problems whose resolution is mission-critical….” In particular, according to OMB guidance, the annual plan should include a performance goal(s) covering the major human resources strategies, such as recruitment, retention, and skill development and training. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7101. 
Key contributors to this report were Shelia Drake, Paul Wright, Walter Gembacz, Greg Whitney, John Borrelli, Valerie Melvin, J. Michael Resser, Mary J. Dorsey, Alana Stanfield, Steve Morris, and Bonnie McEwan. The following table identifies the major management challenges confronting the Department of Veterans Affairs (VA), which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the challenges identified by our office and/or VA's Inspector General (IG). The second column discusses the progress, as discussed in its fiscal year 2000 performance report, that VA made in resolving its challenges. The third column discusses the extent to which VA's fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and VA's IG identified. We found that VA's performance report discussed the agency's progress in resolving all its challenges. Of the 16 major management challenges, its performance plan had (1) goals and measures that were directly related to 7 of the challenges, (2) goals and measures that were indirectly related to 2 of the challenges, and (3) no goals and measures related to 7 of the challenges, although it discussed strategies to address them. Major management challenge (GAO-designated governmentwide high risk): Strategic human capital management. GAO has identified shortcomings at multiple agencies involving key elements of modern human capital management, including strategic human capital planning and organizational alignment; leadership continuity and succession planning; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures. Progress in resolving this challenge, as discussed in the fiscal year 2000 performance report: In its report, VA recognizes that a comprehensive workforce planning initiative is essential for VA to remain a provider of quality services to America's veterans. 
An anticipated upswing in retirements, rapid changes in technology, an increasingly diverse labor and beneficiary pool, and different expectations of younger workers are forces that strongly suggest the need for new recruitment and retention practices to meet program goals. VA states it has established a workforce planning process and is in the beginning stages of developing and implementing a workforce forecasting system. Applicable goals and measures in the fiscal year 2002 performance plan: VA has a strategic goal, strategic objectives, and strategies to address human capital. However, they are not directly linked to program performance. The plan identifies improved workforce planning and enhancing accountability for performance as initiatives that will permit the agency to deliver “world-class” service to veterans and their families. VA has developed a workforce planning model, secured VA senior leadership approval of the model, and worked with its administrations to pilot the model. VA's performance report noted that its succession planning strategy includes recruiting new staff, redirecting staff from other offices, and providing training. For example, VA hired over 450 new claims-processing staff during fiscal year 2000. In addition, VA plans to redirect 200 existing staff to claims-processing positions and hire nearly 250 new staff during fiscal year 2001. Major management challenge (GAO-designated governmentwide high risk): Information security. Our January 2001 high-risk update noted that agencies' and governmentwide efforts to strengthen information security have gained momentum and expanded. Nevertheless, recent audits continue to show federal computer systems are riddled with weaknesses that make them highly vulnerable to computer-based attacks and place a broad range of critical operations and assets at risk of fraud, misuse, and disruption. 
Progress in resolving this challenge, as discussed in the fiscal year 2000 performance report: VA has acknowledged the security weaknesses in its systems and data and reported information security controls as a material weakness in its Federal Managers' Financial Integrity Act report for 2000. To address the department's information security control issues, VA noted in its performance report that it had established a centrally managed agency-wide security program. In addition, the department issued a revised information security plan in October 2000 that identified a number of security enhancements that were being accelerated to improve agency-wide information security. These included enhancements to (1) security awareness, (2) risk assessments, (3) security policies, (4) security officer training, and (5) system certification. Applicable goals and measures in the fiscal year 2002 performance plan: Yes. VA has developed corrective action plans to address the information security weaknesses. These plans were in various stages of implementation. VA's performance plan noted that VA established a performance indicator to measure progress in implementing its information security program. The department targeted this program to be 20 percent complete by fiscal year 2001 and 80 percent complete by fiscal year 2002. However, this measurement does not assess the effectiveness of VA's security, which would be a more effective measure of program success. As we have previously reported, the VA information security management plan generally includes the key elements of an effective security management program. However, the success of VA's efforts to improve the department's computer security will depend largely on adequate program resources and commitment throughout the department. 
The Chief Information Officers Council, in coordination with the National Institute of Standards and Technology and the Office of Management and Budget, has developed a framework for agencies to use in determining the current status of information systems controls and, where necessary, to establish a target for improvement. VA could use this framework as a means for measuring progress in improving its information security program. For several other major management challenges, progress is discussed under the key outcomes (and the comparison of performance plans) earlier in this report, and the fiscal year 2002 plan includes applicable goals and measures; for example, VA's plan has measures for alternative revenues and for conducting studies to assess and realign its health care system. Major management challenge: VA's budget systems need to be aligned. Budget: Not addressed in the report, and the fiscal year 2002 plan contains no applicable goals and measures. Performance-based budgeting: VA and OMB staff jointly developed a proposal to restructure VA's budget accounts to facilitate charging each program's budget account for all of the significant resources used to operate and produce its outcomes and outputs. VA is continuing to work with major stakeholders on implementation issues. Major management challenge: Technology to help serve veterans needs improvement, and financial management weaknesses remain despite an unqualified audit opinion. Information technology (IT): VA implemented a capital investment process that the department uses to select, control, and evaluate IT investments. The department reviews IT projects that exceed planned expenditures by 10 percent to determine whether to change the scope of funding or terminate the project. 
The report did not address progress VA made in developing a departmentwide IT architecture, business process reengineering, or the need to obtain a full-time chief information officer. Financial management: VA described a number of information security enhancements, as described under the Information Security challenge. Applicable goals and measures in the fiscal year 2002 performance plan: Information technology: VA has many initiatives planned or in progress. For example, VA is taking steps to develop an architecture that will promote departmentwide interoperability and data sharing. VA stated that it has completed the technical component of this architecture and is in the process of developing the logical component. In addition, VA stated that efforts are underway to improve its capital investment process in response to GAO recommendations. Also, VA stated that it is reevaluating its previous decision to leave business process reengineering at the administration level. However, VA provided little information on its effort to obtain a full-time chief information officer or when one would be appointed. Financial management: VA has developed corrective action plans to address information security weaknesses, which are in various stages of implementation. For the resource allocation challenges, progress is discussed under outcomes in the report: VA has initiated changes to its resource allocation method to correct resource and infrastructure imbalances, has given VA managers authority to reduce physician levels in overstaffed specialties, and is implementing a cost-based data system to provide more useful performance measurement information on resources and clinical and administrative workloads. Applicable goals and measures: Yes for one challenge (discussed under outcomes in the report); none for the other. Resource allocation continues to be a major public policy issue. VA management is addressing staffing and other resource allocation disparities as part of various initiatives to restructure its health care system. VA is implementing IG recommendations regarding Decision Support System standardization. 
Progress in resolving major management challenge as discussed in the fiscal year 2000 performance report Claims processing and appeals processing discussed under outcomes in the report. Applicable goals and measures in the fiscal year 2002 performance Yes for claims processing and appeals processing: Discussed under outcomes in the report. VA has implemented VA’s IG recommendations aimed at ensuring VHA performs complete medical examinations. VA is also evaluating the use of contract examinations. VA has several initiatives in various stages of implementation that address inappropriate benefit payments. For example, VA asked the IG to identify internal control weaknesses that might facilitate or contribute to fraud in the compensation and pension program. The IG found vulnerabilities involving numerous technical, procedural, and policy issues; VA has agreed to initiate corrective actions. VA reports that it has completed audits of the quality of data used to compute three of the current key performance measures including, among others, rating-related claims-processing timeliness. Audits are underway regarding the measures for the Prevention Index and the Chronic Disease Care Index. VA reports taking corrective actions on deficiencies identified. However, VA notes that it continues to find significant problems with data input and weaknesses in information security, which limit VA’s confidence in the quality of the data. None for timeliness and quality of medical examinations. However, VA has established standards of performance for reducing the number of incomplete examination for its field offices. None. The plan discusses initiatives to identify and adjust payments to, among others, veterans who receive dual compensation, underreport income, or are incarcerated or deceased. For example, VA is starting or continuing a variety of computer matches with other agencies’ records to identify inappropriate payments. 
However, VA still has been unable to offset disability compensation against military reserve pay for all persons who receive both payments. Procedures established between DOD and VA have not been effective or fully implemented. DOD is having difficulties obtaining the accurate data from military services that VA needs to carry out the offsets. None. VA began taking action to correct deficiencies in its data. Management officials continue to refine procedures for compiling performance data. Performance data are receiving greater scrutiny within the department, and procedures are being developed to enhance data validation. Progress in resolving major management challenge as discussed in the fiscal year 2000 performance report VA reported that it has developed corrective action plans for the information security control weaknesses, with corrective actions to be completed by 2002. While VA established a system to track the resolution of security weaknesses identified, as we have previously reported,the department does not have a process to ensure that corrective actions were effective. Applicable goals and measures in the fiscal year 2002 performance Yes. Discussed under Information Security challenge. VA consolidated financial statements: Material internal control weaknesses exist related to information security, housing credit assistance, and fund balances with Treasury reconciliations. Also discussed under Information Security challenge. VA resolved two of the three weaknesses. However, the information security weakness remains unresolved. For additional actions, see Information Security above. None. The report acknowledges the debt management weakness. VA has initiated actions, such as a one- time review of all open/active cases, to correct fraud and abuse but the report did not identify the extent to which improvements have been made. 
VA does not have goals or measures directly applicable to resolving material weaknesses reported in its financial statement audit report. However, the plan has a performance measure that indirectly addresses the information security material weakness. VA has developed corrective action plans for the information security and control issues and expects to complete corrective actions in 2002. None. VA identified actions that it expects will result in a significant improvement in collections, such as installing computer software to facilitate referral of debt to the Department of Treasury Offset Program. None. VA recently completed a one-time review of all open/active cases. VA identified 255 cases as potentially fraudulent. VA is implementing other programs to prevent or identify fraud in the future, such as identifying workers compensation claimants who are also receiving VA compensation and pension benefits to prevent dual payments. Progress in resolving major management challenge as discussed in the fiscal year 2000 performance report The report identified savings of $13 million in fiscal year 2000 attributed to aggressive use of the governmentwide purchase card. However, the IG identified significant vulnerabilities regarding its use, including circumventing competition requirements and payment of excessive prices. Applicable goals and measures in the fiscal year 2002 performance None. VA is conducting business reviews of all acquisition and materiel management functions at VA facilities to resolve problems in this area. | This report reviews the Department of Veterans Affairs (VA) fiscal year 2000 performance report and fiscal year 2002 performance plan required by the Government Performance and Results Act of 1993 to assess VA's process in achieving selected key outcomes that are important to its mission. VA reported making mixed progress towards achieving its key outcomes. 
For example, VA reported that it made good progress in providing high-quality care to patients, but it did not achieve its goal of processing veterans' benefits claims in a timely manner. GAO found that VA made several improvements to its fiscal year 2000 performance report and 2002 performance plan. These improvements resulted in clearer discussions of VA's management challenges and additional performance measures for assessing program achievement. Furthermore, VA addressed all six of the major management challenges previously identified by GAO, and generally described goals or actions that VA is taking or plans to take in response to them. VA has established strategies for achieving strategic goals and objectives for two of these challenges: human capital management and information security. VA has established a performance goal and identified milestones for implementing certain strategies to address information security. However, VA has not identified performance goals and measures for human capital management linked to achieving programmatic results.
Several federal laws and policies—predominantly the Federal Information Security Modernization Act of 2014 and its predecessor, the Federal Information Security Management Act of 2002 (both referred to as FISMA)—provide a framework for protecting federal information and IT assets. The purpose of both laws is to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The laws establish responsibilities for implementing the framework and assign those responsibilities to specific officials and agencies: The Director of the Office of Management and Budget (OMB) is responsible for developing and overseeing implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. Since 2003, OMB has issued policies and guidance to agencies on many information security issues, including providing annual instructions to agencies and inspectors general for reporting on the effectiveness of agency security programs. More recently, OMB issued the Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government in October 2015, which aims to strengthen federal civilian cybersecurity by (1) identifying and protecting high-value information and assets, (2) detecting and responding to cyber incidents in a timely manner, (3) recovering rapidly from incidents when they occur and accelerating the adoption of lessons learned, (4) recruiting and retaining a highly qualified cybersecurity workforce, and (5) efficiently acquiring and deploying existing and emerging technology. OMB also recently updated its Circular A-130 on managing federal information resources to address the protection and management of federal information resources, including personally identifiable information (PII).
The head of each federal agency has overall responsibility for providing appropriate information security protections for the agency’s information and information systems, including those collected, maintained, operated or used by others on the agency’s behalf. In addition, the head of each agency is required to ensure that senior agency officials provide information security for the information and systems supporting the operations and assets under their control, and the agency chief information officer (CIO) is delegated the authority to ensure compliance with the law’s requirements. The assignment of information security responsibilities to senior agency officials is noteworthy because it reinforces the concept that information security is a business function as well as an IT function. Each agency is also required to develop, document, and implement an agency-wide information security program that involves an ongoing cycle of activity including (1) assessing risks, (2) developing and implementing risk-based policies and procedures for cost-effectively reducing information security risk to an acceptable level, (3) providing awareness training to personnel and specialized training to those with significant security responsibilities, (4) testing and evaluating effectiveness of security controls, (5) remedying known weaknesses, and (6) detecting, reporting, and responding to security incidents. As discussed later, our work has shown that agencies have not fully or effectively implemented these programs and activities on a consistent basis. FISMA requires the National Institute of Standards and Technology (NIST) to develop information security standards and guidelines for agencies. 
To this end, NIST has developed and published federal information processing standards that require agencies to categorize their information and information systems according to the impact or magnitude of harm that could result if they are compromised and specify minimum security requirements for federal information and information systems. NIST has also issued numerous special publications that provide detailed guidelines to agencies for securing their information and information systems. The 2014 FISMA also established the Department of Homeland Security's (DHS) oversight responsibilities, including (1) assisting OMB with oversight and monitoring of agencies' information security programs, (2) operating the federal information security incident center, and (3) providing agencies with operational and technical assistance. Other cybersecurity-related laws were recently enacted, including the following: The National Cybersecurity Protection Act of 2014 codifies the role of DHS's National Cybersecurity and Communications Integration Center as the federal civilian interface for sharing information about cybersecurity risks, incidents, analysis, and warnings for federal and non-federal entities, including owners and operators of systems supporting critical infrastructure. The Cybersecurity Enhancement Act of 2014, among other things, authorizes NIST to facilitate and support the development of voluntary standards to reduce cyber risks to critical infrastructure and, in coordination with OMB, to develop and encourage a strategy for the adoption of cloud computing services by the federal government.
The Cybersecurity Act of 2015, among other things, sets forth authority for enhancing the sharing of cybersecurity-related information among federal and non-federal entities, gives DHS's National Cybersecurity and Communications Integration Center responsibility for implementing these mechanisms, requires DHS to make intrusion detection and prevention capabilities available to any federal agency, and calls for agencies to assess their cyber-related workforce. Our work has identified the need for improvements in the federal government's approach to cybersecurity. While the administration and agencies have acted to improve the protections over their information and information systems, additional actions are needed. Federal agencies need to effectively implement risk-based entity-wide information security programs consistently over time. Since FISMA was enacted in 2002, agencies have been challenged to fully and effectively develop, document, and implement agency-wide programs to secure the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency or contractor. For example, in fiscal year 2015, 19 of the 24 major federal agencies covered by the Chief Financial Officers Act of 1990 reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting. In addition, inspectors general at 22 of the 24 agencies cited information security as a major management challenge for their agency. The following actions will assist agencies in implementing their information security programs. Enhance capabilities to effectively identify cyber threats to agency systems and information. A key activity for assessing cybersecurity risk and selecting appropriate mitigating controls is the identification of cyber threats to computer networks, systems, and information.
In 2016, we reported on several factors that agencies identified as impairing their ability to identify these threats to a great or moderate extent. The impairments included an inability to recruit and retain personnel with the appropriate skills, rapidly changing threats, continuous changes in technology, and a lack of government-wide information-sharing mechanisms. Addressing these impairments will enhance the ability of agencies to identify the threats to their systems and information and be in a better position to select and implement appropriate countermeasures. Implement sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices. We routinely determine that agencies do not enable key information security capabilities of their operating systems, applications, workstations, servers, and network devices. Agencies were not always aware of the insecure settings that introduced risk to the computing environment. Establishing strong configuration standards and implementing sustainable processes for monitoring and enabling configuration settings will strengthen the security posture of federal agencies. Patch vulnerable systems and replace unsupported software. Federal agencies consistently fail to apply critical security patches in a timely manner on their systems, sometimes years after the patch is available. We also consistently identify instances where agencies use software that is no longer supported by their vendors. These shortcomings often place agency systems and information at significant risk of compromise since many successful cyberattacks exploit known vulnerabilities associated with software products. Using vendor-supported and patched software will help to reduce this risk. Develop comprehensive security test and evaluation procedures and conduct examinations on a regular and recurring basis. 
The information security assessments performed for agency systems were sometimes based on interviews and document reviews, limited in scope, and did not identify many of the security vulnerabilities that our examinations identified. Conducting in-depth security evaluations that examine the effectiveness of security processes and technical controls is essential for effectively identifying system vulnerabilities that place agency systems and information at risk. Strengthen oversight of contractors providing IT services. As demonstrated by the Office of Personnel Management data breach of 2015, cyber attackers can sometimes gain entrée to agency systems and information through the agency’s contractors or business partners. Accordingly, agencies need to ensure that their contractors and partners are adequately protecting the agency’s information and systems. In August 2014, we reported that five of six selected agencies were inconsistent in overseeing the execution and review of security assessments that were intended to determine the effectiveness of contractor implementation of security controls, resulting in security lapses. In 2016, agency chief information security officers (CISO) we surveyed reported that they were challenged to a large or moderate extent in overseeing their IT contractors and receiving security data from the contractors, thereby diminishing the CISOs’ ability to assess how well agency information maintained by the contractors is protected. Effectively overseeing and reviewing the security controls implemented by contractors and other parties is essential to ensuring that the organization’s information is properly safeguarded. The federal government needs to improve its cyber incident detection, response, and mitigation capabilities. 
Even agencies or organizations with strong security can fall victim to information security incidents due to previously unknown vulnerabilities that are exploited by attackers to intrude into an agency’s information systems. Accordingly, agencies need to have effective mechanisms for detecting, responding to, and recovering from such incidents. The following actions will assist the federal government in building its capabilities for detecting, responding to, and recovering from security incidents. DHS needs to expand capabilities, improve planning, and support wider adoption of its government-wide intrusion detection and prevention system. In January 2016, we reported that DHS’s National Cybersecurity Protection System (NCPS) had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. In addition, adoption of these capabilities at federal agencies was limited. Expanding NCPS’s capabilities for detecting and preventing malicious traffic, defining requirements for future capabilities, and developing network routing guidance would increase assurance of the system’s effectiveness in detecting and preventing computer intrusions and support wider adoption by agencies. Improve cyber incident response practices at federal agencies. In April 2014 we reported that 24 major federal agencies did not consistently demonstrate that they had effectively responded to cyber incidents. For example, agencies did not determine the impact of incidents or take actions to prevent their recurrence. By developing complete policies, plans, and procedures for responding to incidents and effectively overseeing response activities, agencies will have increased assurance that they will effectively respond to cyber incidents. Update federal guidance on reporting data breaches and develop consistent responses to breaches of personally identifiable information (PII). 
As we reported in December 2013, eight selected agencies did not consistently implement policies and procedures for responding to breaches of PII. For example, none of the agencies documented the evaluation of incidents and lessons learned. In addition, OMB’s guidance to agencies to report each PII-related incident—even those with inherently low risk to the individuals affected—within 1 hour of discovery may cause agencies to expend resources to meet reporting requirements that provide little value and divert time and attention from responding to breaches. Updating guidance and consistently implementing breach response practices will improve the effectiveness of government-wide and agency-level data breach response programs. The federal government needs to expand its cyber workforce planning and training efforts. Ensuring that the government has a sufficient number of cybersecurity professionals with the right skills and that its overall workforce is aware of information security responsibilities remains an ongoing challenge. These actions can help meet this challenge: Enhance efforts for recruiting and retaining a qualified cybersecurity workforce. This has been a long-standing dilemma for the federal government. In 2012, agency chief information officers and experts we surveyed cited weaknesses in education, awareness, and workforce planning as a root cause in hindering improvements in the nation’s cybersecurity posture. Several experts also noted that the cybersecurity workforce was inadequate, both in numbers and training. They cited challenges such as the lack of role-based qualification standards and difficulties in retaining cyber professionals. In 2016, agency CISOs we surveyed reported that difficulties related to having sufficient staff; recruiting, hiring, and retaining security personnel; and ensuring security personnel have appropriate skills and expertise pose challenges to their abilities to carry out their responsibilities effectively. 
Improve cybersecurity workforce planning activities at federal agencies. In November 2011, we reported that only five of eight selected agencies had developed workforce plans that addressed cybersecurity. Further, agencies reported challenges with filling cybersecurity positions, and only three of the eight had a department-wide training program for their cybersecurity workforce. In summary, federal law and policy set forth a framework for addressing cybersecurity risks to federal systems. However, implementation of this framework has been inconsistent, and additional action is needed. Specifically, agencies need to address control deficiencies and fully implement organization-wide information security programs, cyber incident response and mitigation efforts need to be improved across the government, and establishing and maintaining a qualified cybersecurity workforce needs to be a priority. Chairman Donilon, Vice Chair Palmisano, and distinguished members of the Commission, this concludes my prepared statement. I would be happy to answer any questions you have. If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other staff members who contributed to this statement include Larry Crosland and Michael Gilmore (assistant directors), Chris Businsky, Franklin Jackson, Kenneth A. Johnson, Lee McCracken, Scott Pettis, and Adam Vodraska. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
| The dependence of federal agencies on computerized information systems and electronic data makes them potentially vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation's safety, prosperity, and well-being. Because of the significance of these risks and long-standing challenges in effectively implementing information security protections, GAO has designated federal information security as a government-wide high-risk area since 1997. In 2003 this area was expanded to include computerized systems supporting the nation's critical infrastructure, and again in February 2015 to include protecting the privacy of personally identifiable information collected, maintained, and shared by both federal and nonfederal entities. GAO was asked to provide a statement on laws and policies shaping the federal IT security landscape and actions needed for addressing long-standing challenges to improving the nation's cybersecurity posture. In preparing this statement, GAO relied on previously published work. Over the past several years, GAO has made about 2,500 recommendations to federal agencies to enhance their information security programs and controls. As of September 16, 2016, about 1,000 have not been implemented. Cyber incidents affecting federal agencies have continued to grow, increasing about 1,300 percent from fiscal year 2006 to fiscal year 2015. Several laws and policies establish a framework for the federal government's information security and assign implementation and oversight responsibilities to key federal entities, including the Office of Management and Budget, executive branch agencies, and the Department of Homeland Security (DHS). However, implementation of this framework has been inconsistent, and additional actions are needed: Effectively implement risk-based information security programs. Agencies have been challenged to fully and effectively establish and implement information security programs. 
They need to enhance capabilities to identify cyber threats, implement sustainable processes for securely configuring their computer assets, patch vulnerable systems and replace unsupported software, ensure comprehensive testing and evaluation of their security on a regular basis, and strengthen oversight of IT contractors. Improve capabilities for detecting, responding to, and mitigating cyber incidents. Even with strong security, organizations can continue to be victimized by attacks exploiting previously unknown vulnerabilities. To address this, DHS needs to expand the capabilities and adoption of its intrusion detection and prevention system, and agencies need to improve their practices for responding to cyber incidents and data breaches. Expand cyber workforce and training efforts. Ensuring that the government has a sufficient cybersecurity workforce with the right skills and training remains an ongoing challenge. Government-wide efforts are needed to better recruit and retain a qualified cybersecurity workforce and to improve workforce planning activities at agencies. |
The Army National Guard and the Army Reserve are composed primarily of citizen soldiers who serve in the military on a part-time basis, balancing the demands of civilian careers with their military service. Collectively, these part-time forces make up more than 50 percent of the Army's total force, and these soldiers at any time may be called upon to meet a full spectrum of defense requirements and operations around the globe. In addition to these part-time forces, both the Army National Guard and the Army Reserve use full-time personnel for duties that can include pay processing, personnel actions, preparing and monitoring training schedules, and other tasks that cannot be effectively executed through the use of part-time personnel. While similarly composed of mostly part-time forces, the Army National Guard and the Army Reserve have distinct missions and organizational structures. Army National Guard: The Army National Guard has a dual role as both a state and a federal force and may be called upon to provide trained and equipped units to (1) defend the 54 states and territories of the United States, and (2) respond to overseas combat missions, counterdrug efforts, reconstruction missions, and other operations as needed. When mobilized for a federal mission, the Army National Guard is under the command and control of the President. When they are not conducting a federal mission, Army National Guard units are under the control of the governors for state responsibilities. In addition, Army National Guard forces can be mobilized under Title 32 of the United States Code for certain federally funded, domestic missions conducted under the command of the governors. Past missions included providing security at the nation's airports in the immediate aftermath of the September 11 terrorist attacks and assisting the Gulf Coast in the aftermath of Hurricane Katrina.
The Chief of the National Guard Bureau is responsible for creating and implementing policy and guidance so that National Guard servicemembers meet the overarching standards set by DOD. In addition, the Chief of the National Guard Bureau is responsible for ensuring that Army National Guard soldiers are accessible, capable, and ready to protect the homeland and to provide combat resources to the Army. Army Reserve: The Army Reserve is a federal force that is organized primarily to provide operational support to combat forces. The Office of the Chief of the Army Reserve and the U.S. Army Reserve Command are commanded by the same Lieutenant General who, by law, is a member of the Headquarters, Department of the Army Staff. The Chief of the Army Reserve is generally responsible for advising the Secretary of the Army and the Chief of Staff of the Army on all issues related to the Army Reserve Command. In response to statutory requirements, in 2005 DOD established its sexual assault prevention and response program to promote the prevention of sexual assault, encourage increased reporting of such incidents, and improve victim response capabilities. DOD’s Sexual Assault Prevention and Response Office (SAPRO) serves as the department’s single point of authority, accountability, and oversight for its sexual assault prevention and response program. SAPRO provides the military services with guidance and technical support, and develops programs, policies, and training standards for the prevention and reporting of, and response to, sexual assault. Other responsibilities include overseeing the department’s collection and maintenance of data on reported sexual assault incidents involving servicemembers; establishing mechanisms to measure the effectiveness of the department’s program; and preparing the department’s mandated annual reports to Congress on sexual assaults involving servicemembers. 
DOD’s program allows servicemembers to make a restricted or unrestricted report of sexual assault. Specifically, DOD’s restricted reporting option is designed to allow sexual assault victims to confidentially disclose an alleged sexual assault to selected individuals without initiating an official investigation and to receive medical and mental health care. In cases where a victim elects restricted reporting, first responders may not disclose confidential communications to law enforcement or command authorities unless certain exceptions apply. Improper disclosure of confidential communications and medical information may result in discipline pursuant to the Uniform Code of Military Justice or other adverse personnel actions. In contrast, DOD’s unrestricted reporting option triggers an investigation by a military criminal investigative organization. Each military service provides specific guidance on sexual assault, as well as standard operating and reporting procedures for responding to alleged sexual assault incidents. In contrast to the other services, the Department of the Army is the only military service to combine its efforts to prevent and respond to incidents of sexual assault as well as sexual harassment. Specifically, based on the view that sexual harassment is a potential precursor to sexual assault, the Secretary of the Army directed that Army efforts to address sexual assault and sexual harassment be restructured and integrated. Pursuant to this direction, the Army in 2009 established what is currently known as its Sexual Harassment/Assault Response and Prevention (SHARP) program. The Army National Guard and the Army Reserve have primarily used DOD and Department of the Army policies to implement their respective sexual assault prevention and response programs. 
The Army’s sexual assault prevention and response guidance is currently a chapter in the Army’s general personnel regulation, and Department of the Army SHARP officials told us that they are in the process of updating this regulation to consolidate guidance currently contained in several different Army policies, directives, and other related documents. In addition, the officials said that they are developing a new Army regulation that will be focused solely on the SHARP program, with separate chapters for the Army National Guard and the Army Reserve. The officials expect this new guidance to be issued in May 2017.

Under DOD’s and the Army’s current sexual assault prevention and response policies, members of the Army National Guard and the Army Reserve who are sexually assaulted while in certain duty statuses are subject to the same provisions and are eligible for the same services provided to active-duty servicemembers. For example, members of the Army National Guard and the Army Reserve who are sexually assaulted while performing active-duty service (as defined by Section 101(d)(3) of Title 10, U.S. Code) or inactive-duty training are generally eligible for DOD-provided medical treatment and counseling for injuries and illness incurred from a sexual assault. Members of the Army National Guard and the Army Reserve who report a sexual assault that occurred prior to or outside of active-duty service or inactive-duty training are eligible for some benefits, but not the full range of services. For example, all reserve-component members, regardless of their duty status at the time of the assault, may file a restricted or unrestricted report; are eligible for timely access to advocacy services from a SARC and a VA and appropriate nonmedical referrals, if requested; and have access to Special Victims Counsel.
However, reserve-component members who report a sexual assault that occurred prior to or outside of active-duty service or inactive-duty training are not eligible for medical treatment provided or paid for by DOD. Detailed data on sexual assault incidents involving members of the Army National Guard and Army Reserve appear in appendix IV.

Various offices and personnel within DOD and the Department of the Army play a role in preventing and responding to sexual assault incidents.

Under Secretary of Defense for Personnel and Readiness: The Under Secretary of Defense for Personnel and Readiness is responsible for developing the overall policy and guidance for the department’s sexual assault prevention and response program, except for criminal investigative policy matters assigned to the Judge Advocates General of the military departments, the Staff Judge Advocate to the Commandant of the Marine Corps, and the DOD Inspector General, and for legal processes in the Uniform Code of Military Justice. The Under Secretary of Defense for Personnel and Readiness oversees SAPRO.

Assistant Secretary of Defense for Health Affairs: The Assistant Secretary of Defense for Health Affairs is generally responsible for advising the Under Secretary of Defense for Personnel and Readiness on DOD’s sexual assault health-care policies, clinical practice guidelines, and related procedures and standards of DOD health-care programs for sexual assault victims.

Army Deputy Chief of Staff, Army G-1 (Personnel): The Deputy Chief of Staff, Army G-1, is generally responsible for Army-wide policies and the overall implementation, evaluation, and assessment of the sexual assault prevention and response program.
Director, Department of the Army’s SHARP Office: The Director is responsible for program-management functions that include coordinating policy development and training requirements; ensuring that periodic program evaluations and assessments are conducted; and collecting, recording, and maintaining data on sexual assault cases.

Director of the Army National Guard and Chief of the Army Reserve: The heads of the Army reserve components are generally responsible for developing, implementing, and monitoring sexual assault prevention and response policies and programs in their respective components.

Sexual Assault Response Coordinators (SARC): SARCs, who may be military or civilian employees at the discretion of the military services, serve as the single point of contact for coordinating appropriate and responsive care for sexual assault victims at an installation or within a geographic area. SARCs oversee sexual assault awareness, prevention, and response training; coordinate medical treatment, including emergency care, for victims of sexual assault; and track the services provided to a victim of sexual assault from the initial report through final disposition and resolution.

Victim Advocates (VA): VAs report directly to the SARC when performing victim advocacy duties and may provide nonclinical crisis intervention, referral, and ongoing nonclinical support to adult sexual assault victims, such as providing information on available options and resources and liaison assistance with other organizations and agencies on victim care matters.

Other Sexual Assault Responders: DOD’s instruction identifies other responders, including judge advocates, medical and mental health providers, criminal investigative personnel, law-enforcement personnel, and chaplains, and specifies that commanders, supervisors, and managers at all levels are responsible for the effective implementation of both the policy and the program.
The Army National Guard and the Army Reserve have implemented sexual assault prevention and response programs but face challenges in areas such as staffing, budget management, and investigation timeliness that may hinder program implementation. Specifically, the Army National Guard and the Army Reserve have staffed their sexual assault prevention and response programs, but the number, distribution, and types of personnel assigned to these positions have produced challenges that may limit the responsiveness of SARCs and VAs. Further, limited oversight of budget development and execution may also impede effective program implementation in the Army National Guard and the Army Reserve. Finally, the authority to investigate sexual assault cases involving reserve-component members varies depending on duty status and location, and the timeliness of investigations of some cases involving Army National Guard soldiers has posed a challenge.

The Army National Guard and the Army Reserve have provided full-time and collateral-duty staff for their sexual assault prevention and response programs; however, their approach to the number and distribution of personnel assigned to the full-time positions and the low fill rates for the collateral-duty positions may hinder their ability to achieve program objectives. Further, the use of military technicians to fill full-time positions poses challenges for program implementation due to their dual-status role, a prohibition from performing civilian duties while serving in a military capacity, and a requirement to provide compensatory time for after-hours work. However, National Guard, Army Reserve, and Department of the Army leadership have not evaluated how the existing mix and types of full-time and collateral-duty staff affect program implementation or their ability to achieve program objectives.
The National Guard and the Army Reserve have staffed their sexual assault prevention and response programs with a mix of full-time and collateral-duty personnel, but their staffing approach has produced sizeable workload disparities among full-time program personnel, and the collateral-duty positions have not been fully filled. Army Regulation 600-20 specifies that the Chief of the National Guard Bureau and the Chief of the U.S. Army Reserve will establish requisite staff positions within their organizations and make resources available to adequately implement program requirements, among other things. The National Defense Authorization Act for Fiscal Year 2012 directed that at least one full-time SARC and one full-time VA be assigned to each brigade or equivalent unit level of the armed forces. The National Guard and the Army Reserve have applied different interpretations of this requirement in their assignment of full-time SARCs and VAs to manage sexual assault prevention and response efforts in their respective components. Specifically, the National Guard authorized one full-time SARC and one full-time VA for the Joint Force Headquarters in each of the 54 states and territories, for a total of 108 full-time personnel. The Army Reserve assigned its full-time personnel so that they are co-located with its major commands, but it interpreted the statutory sizing construct as having different applicability to SARCs and VAs and did not establish both full-time positions at every location. Instead, the Army Reserve assigned a full-time SARC to each of its 35 major commands, whereas only 13 of these locations also have a full-time VA. While such decisions fall within their designated authorities, we found that the National Guard’s and the Army Reserve’s current approaches to staffing pose several challenges to program implementation.
The National Guard’s decision to allot the same number of full-time staff to each state and territory has produced varying levels of responsibility among individuals hired for the same position. Specifically, Rhode Island has just over 1,000 square miles of land area and an Army National Guard population of about 2,000 soldiers and is assigned the same number of staff—one full-time SARC and one full-time VA—as Texas, which has more than 260,000 square miles of land area and a Guard population of about 18,600 soldiers. Figure 1 identifies the size of the Army National Guard population served in each state and territory to further illustrate the varying magnitude of responsibilities among full-time SARCs and VAs serving in the Army National Guard. According to Army National Guard officials, each state and territory operates independently, which limits the National Guard’s ability to shift or realign SARC and VA positions from one state to another. Further, officials stated that there were no additional personnel authorized to meet the statutory requirement to establish SARC and VA positions, and individual states have reallocated existing full-time support authorizations to fill these positions. Similar imbalances exist among the full-time program personnel assigned to the Army Reserve’s major commands. For example, the 807th Medical Command has one full-time SARC who is responsible for more than 9,000 soldiers assigned to units located in 16 states, whereas the 81st Regional Support Command has one full-time SARC who is responsible for fewer than 300 soldiers located in 4 states. Figure 2 lists major commands and their program staff according to the size of the population served to further illustrate the varying magnitude of responsibilities for full-time SARCs and VAs serving in the Army Reserve.
Officials we interviewed in the Department of the Army, the Army National Guard, and the Army Reserve stated that workload disparities are mitigated by using collateral-duty personnel, who perform SARC and VA functions as a secondary responsibility to their primary military occupation. However, Army National Guard and Army Reserve officials explained that the part-time nature of reserve-component service provides members limited time to complete the responsibilities of their primary occupation, much less collateral duties. Further, we found that the actual number of collateral-duty SARCs and VAs available to assist full-time staff was less than the number authorized. Specifically, data provided as of October 2016 showed that the Army National Guard had assigned 237 collateral-duty SARCs, or 89 percent of the 266 positions authorized, and had assigned 1,388 collateral-duty VAs, or 78 percent of the 1,790 positions that had been authorized. In our survey of full-time SARCs and VAs, 23 out of 68 Army National Guard respondents (34 percent) similarly reported that there were too few screened and credentialed collateral-duty SARCs for their current workload, and 30 out of 68 (44 percent) said that they had too few screened and credentialed collateral-duty VAs for their current workload. Army Reserve officials said that they currently do not have a process for tracking the total number of collateral-duty SARCs and VAs in the Army Reserve, or how many of those positions are filled or vacant. Our visits to selected installations suggested that a substantial gap exists between the actual and authorized number of collateral-duty SARCs and VAs. For example, we visited one of the Army Reserve’s major commands and were told that it had 7 trained collateral-duty VAs out of an authorization for more than 200.
During a visit to another major command, an official told us it has 136 collateral-duty VAs out of an authorization for 328 and that the command often relies on and uses another military service’s SARCs to mitigate the effect of the shortage. In our survey of full-time SARCs and VAs, 16 out of 27 Army Reserve respondents (59 percent) said that they had too few screened and credentialed collateral-duty SARCs for their current workload, and 21 out of 27 (78 percent) said that they had too few screened and credentialed collateral-duty VAs for their current workload.

The National Guard’s and the Army Reserve’s heavy reliance on dual-status military technicians to fill the full-time SARC and VA positions in their components also poses challenges for program implementation in three areas: (1) their dual-status role, (2) a prohibition from performing civilian duties while serving in a military capacity, and (3) a law mandating compensatory time for after-hours work.

Dual-status role: The majority of military technicians are designated as “dual-status,” which requires that they maintain membership in a reserve component as a condition of their employment. As of October 2016, about 70 percent of full-time SARC and VA positions in the National Guard were filled by dual-status technicians, and Army Reserve officials told us that all of the full-time SARC and VA positions in the Army Reserve were filled by dual-status technicians. However, aspects of the military technician occupation limit the ability of those serving in these positions to effectively execute the role and responsibilities of a SARC or VA. For example, we identified instances in which technicians would serve as a SARC or VA in their civilian capacity and then would serve as part of the unit command team when in their military capacity.
This is problematic because, according to DOD policy, only selected individuals, including SARCs and VAs, are authorized to receive a restricted or confidential report of sexual assault from a victim without disclosing the name of the victim to the chain of command. In our survey of full-time SARCs and VAs, 10 out of 61 Army National Guard respondents (16 percent) and 6 out of 24 Army Reserve respondents (25 percent) indicated that they served as part of the unit command team when on military duty.

Prohibition from performing civilian duties while serving in a military capacity: Under DOD policy, military technicians are prohibited from performing their civilian duties while serving in a military capacity unless the duties for both roles are identical. However, due to the way some program personnel have interpreted the 24/7 response capability requirement, this policy may conflict with another DOD policy that designates SARCs and VAs as the single point of contact for ensuring that a 24-hours-a-day, 7-days-a-week victim response capability exists. Although Army Headquarters and Army National Guard officials told us that the DOD Safe Helpline was designed to provide this 24/7 response capability, some of the Army National Guard and Army Reserve SARCs and VAs we interviewed interpret DOD’s requirement for a 24/7 victim response capability to mean that they are on call at all times regardless of their civilian or military duty status, especially since they are the only individuals in their respective units who have been assigned these responsibilities. In our survey, 52 out of 68 Army National Guard respondents (76 percent) and 19 out of 27 Army Reserve respondents (70 percent) said that they continued to perform their SARC or VA duties during drill weekends, while the remaining respondents noted that they identified other individuals to perform those duties.
Compensatory time for after-hours work: A law mandating that military technicians receive compensatory time for after-hours work and general expectations about what constitutes a “typical” work day have raised concerns related to the requirement that SARCs and VAs have around-the-clock availability. During a site visit, one Guard SARC told us that the SARC could easily claim compensatory time each week but did not feel right claiming the hours. Another Army Reserve SARC we interviewed told us that the command had denied the SARC’s request for compensatory time despite the SARC having spent hours beyond the normal workday on the phone assisting victims. In our survey, 65 out of 68 Army National Guard respondents (96 percent) and 100 percent of Army Reserve respondents said that they accept calls about sexual assault incidents “at any time.” In May 2016, the Director of the Army National Guard issued guidance to its personnel directing that the DOD Safe Helpline be used after regular duty hours. However, Guard officials could not provide any information about the extent to which this guidance has affected the amount of after-hours work performed by its SARCs and VAs.

The National Defense Authorization Act for Fiscal Year 2016 directed the Secretary of Defense to convert no fewer than 20 percent of dual-status military technician positions identified as general administration, clerical, finance, and office service occupations as of January 1, 2017, to civilian positions. Department of the Army, Army National Guard, and Army Reserve officials acknowledged that using civilians to fill full-time SARC and VA positions would help to mitigate the challenges posed by using military technicians.
Army Reserve officials said that they had proposed converting their full-time SARC and VA positions to civilian positions; as of September 2016, they told us that they would be converting their 35 full-time SARC positions to civilian positions, but that conversion of their full-time VA positions had not been approved at that time. Army National Guard officials stated that they were still considering whether they would convert any of their full-time SARC and VA positions.

Staffing-related issues, such as those previously identified, have persisted in part because National Guard and Army Reserve leadership have not evaluated how their use of full-time and collateral-duty staff affects program implementation, and Department of the Army leadership has not evaluated how staff utilization affects the ability of its active and reserve components to achieve program objectives. To help agencies run efficient and effective operations, Standards for Internal Control in the Federal Government state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. Further, the standards emphasize that management should periodically evaluate the organizational structure to ensure that it meets the entity’s objectives and has adapted to any new objectives for the entity, such as a new law or regulation.

With additional authorizations unlikely, the combined number of full-time SARCs and VAs in the active and reserve components, coupled with the general proximity of some active-duty and reserve forces, presents a possible opportunity to leverage the collective capabilities of the Army’s active and reserve components. For example, some officials with the Department of the Army, the Army National Guard, and the Army Reserve supported the idea of sharing the Army’s active and reserve-component resources to establish a regional support structure for the program.
In addition, a SHARP official from Army Forces Command stated that staffing program personnel on a regional basis would align well with the regional staffing approach that the Department of the Army has successfully used to provide other victim services, such as its special victims counsel and criminal investigators. However, we also met with Army National Guard and Army Reserve officials who were opposed to the concept of a regional approach. For example, some commanders in the Army National Guard and Army Reserve expressed concerns about who would be responsible for supervising SHARP personnel and ensuring accountability for performance and response times, while the chief of staff of a Reserve command commented that the idea needed further analysis. Officials with the Department of the Army said that active-component SHARP personnel may provide support to Army National Guard and Army Reserve personnel and that sexual assault victims may seek assistance from any SARCs or VAs, regardless of their duty status or service affiliation. However, Army National Guard and Army Reserve officials said that if a reserve-component soldier goes to an active-duty SARC for assistance, that SARC will often contact a Guard or Reserve SARC to provide help to the victim.

After the August 2015 SHARP Program Improvement Forum, the Department of the Army SHARP Program Office directed its staff to comprehensively evaluate the structure used to staff full-time SHARP program personnel. In September 2016, officials from the Department of the Army stated that they plan to consider the full-time staffing structure employed by the National Guard and the Army Reserve in this evaluation.
However, the only documentation the Army provided about its planned evaluation was a draft SHARP Campaign Plan, which referred to assessing the current manning levels and caseload rate and stated that the assessment would consider staffing requirements, whether full-time staff should be civilian or military, and staff turnover. Neither Army officials nor the draft Campaign Plan provided details about what the assessment would evaluate, such as the allocations of full-time and collateral-duty personnel and the use of military technicians versus civilians for the positions, or when the assessment would be conducted or completed. Without an evaluation that assesses how staffing levels, staff allocation and utilization, and the types of positions used for full-time and collateral-duty staff affect the achievement of SHARP program objectives across all Army components, the Department of the Army may be missing opportunities to achieve efficiencies within current authorization levels and will not have the information necessary to comprehensively develop and support future resource requests.

The Army National Guard has developed budget guidance but has not effectively communicated this guidance to its full-time program staff, and the Army Reserve has not developed or distributed such guidance to the full-time program staff at its subordinate commands. Further, while the Department of the Army’s SHARP office has taken some steps to improve oversight of how SHARP program funds are used, its efforts are focused on the general execution of funds and do not provide visibility over the National Guard’s and the Army Reserve’s use of program funds at the state and command level. SARCs in the Army National Guard and the Army Reserve serve as program managers who oversee implementation and execution of the SHARP program, which National Guard and Army Reserve officials told us includes the responsibility for annually submitting budgets and accounting for program expenditures.
Standards for Internal Control in the Federal Government state that management should internally communicate the necessary quality information to achieve the program’s objectives. Further, management must clearly communicate authorizations to ensure that only valid transactions to use or commit resources are initiated or entered into. For fiscal year 2015, the Army National Guard issued funding guidance that specifically identified the types of training and other materials that could be purchased with SHARP funds. However, this guidance was not communicated or disseminated in its entirety to National Guard SHARP personnel by the National Guard SHARP program office. Instead, in 2015, the Army National Guard’s SHARP office distributed a summary of the guidance to its SARCs in the form of a two-page e-mail that consisted of a general solicitation of funding requests; a paragraph that described marketing, outreach, and administrative resources that could be purchased with program funds; and another paragraph that addressed personnel training expenditures. A copy of the actual guidance was not provided. As a result, Army National Guard SARCs and VAs expressed uncertainty in response to our survey about what would qualify as an authorized use of program funds. Specifically, 19 out of 30 Army National Guard respondents (63 percent) who provided supplemental written responses to our survey question about whether any additional funding guidance would be useful indicated a desire for additional guidance about how to spend their funds, such as whether spending on things such as conferences, training, and promotional items was permissible.
Without communication and dissemination of the National Guard Bureau’s funding guidance by the National Guard SHARP program office, SHARP personnel in the Army National Guard may not have the necessary information to develop their budgets and to help ensure the efficient and effective use of program funds.

We also found that the Army Reserve Command SHARP program office has not developed or communicated budget guidance for SARCs to use in preparing annual budget requests or submitting expenses for approval. For example, SARCs and other officials at the Army Reserve installations that we visited told us that they did not receive budget guidance in 2015 from the Army Reserve Command’s SHARP office. Specifically, Army G-1 officials we spoke with during a site visit to an Army Reserve major subordinate command stated that they receive no funding guidance for the SHARP program from Army Reserve Command and that, from their perspective, there is no formal budget process. Similarly, during our visit to another Army Reserve major command, the full-time SARC told us that the SARC had called the Army Reserve Command’s SHARP office three times to ask for budget guidance and that the guidance the individual finally received was unclear and consisted of a spreadsheet and a due date. Another full-time SARC at a different Army Reserve major command told us that the SARC’s command would not allow program funds to be spent without a specific authorization and that the absence of funding guidance from Army Reserve Command meant that the SARC did not have the information needed to fill out these authorizations and was thus limited in his or her ability to purchase promotional items and plan activities. The concerns expressed during our site visits were further corroborated by responses to our survey, with 15 out of 27 Army Reserve respondents (56 percent) indicating that they had never received any guidance from Army Reserve Command about how to spend SHARP program funds.
Officials from the Army Reserve Command’s SHARP office said they have provided budget guidance many times and that they were unaware of a unit that asked for, but did not receive, requested guidance. Specifically, officials stated that budget training was presented by its Budget Integration Office at the annual SHARP training held in March 2015, and then again at the training provided in September 2016. However, while the March 2015 training provided an overview of the funding process and preparing a budget, the materials from that training that we reviewed did not provide any information or other guidance about what specific items should be included in annual budget requests or what would qualify as an authorized use of program funds. Without the Army Reserve Command SHARP office developing and communicating guidance that provides clear information about what to include in budget requests, SHARP personnel in the Army Reserve will not have the necessary information to develop their budgets and to help ensure the efficient and effective use of program funds.

In addition to issues with budget guidance, we found that the Department of the Army’s SHARP office has limited visibility over the use of SHARP program funds by the states and territories in the National Guard and by the Army Reserve commands. According to Army Regulation 600-20, the Director of the Army’s Sexual Assault Prevention and Response Program is responsible for the program’s management functions. Moreover, Standards for Internal Control in the Federal Government state that management should design control activities to provide, among other things, accountability for resources, which can be done by periodically comparing resources with the recorded accountability to help reduce the risk of errors, fraud, misuse, or unauthorized alteration; and should ensure that only valid transactions to use or commit resources are initiated or entered into.
Officials from the Department of the Army’s SHARP office stated that they oversee the general execution of the Army National Guard’s and the Army Reserve’s SHARP program budgets. These officials also stated that in fiscal year 2016, they began conducting midyear reviews to provide additional oversight of SHARP funding execution in the Army National Guard and the Army Reserve by comparing their spending plans to the Army’s long-term budget plan, also known as the Program Objective Memorandum. However, they said that these reviews are focused on the general execution of program funds by the Army National Guard and the Army Reserve and do not provide the Department of the Army’s SHARP office with visibility over expenditures at the Army National Guard state or Army Reserve major command level. As a result, the Army SHARP program office does not know the extent to which SHARP program funds provided to the states and commands are actually being spent on the SHARP program, or the extent to which SHARP funds may have been moved by commanders to other areas of need. To help address this concern, Army National Guard SHARP officials said that in fiscal year 2016, they started requesting monthly execution reports from the states to provide additional visibility over how SHARP funds are being used. Similarly, officials from the Army Reserve SHARP program office said that every month they review the difference between the commands’ funding allocations and execution and will contact commands to discuss any deviations as needed. While these are positive steps that may facilitate increased oversight, this level of information had not yet been included in the scope of the midyear reviews conducted by the Department of the Army.
Until the scope of the midyear reviews is expanded to facilitate increased oversight of specific SHARP program expenditures in the Army National Guard and the Army Reserve at the state and command level, the Department of the Army will be limited in its ability to make informed budget decisions and to help ensure the appropriate use of program funds.

The organization responsible for investigating a sexual assault incident involving a member of the Army reserve components varies depending on the circumstances of the situation, and the timeliness of some investigations involving Army National Guard members can pose a challenge. Investigative authority over a sexual assault involving a member of the reserve component is determined by the victim’s or the accused’s duty status and location at the time of the incident and may be assigned to a military criminal investigative organization, a civilian law-enforcement organization, or, in certain incidents involving an Army National Guard member, the National Guard’s Office of Complex Administrative Investigations (OCI). Table 1 summarizes some factors that generally determine which organization has the authority to investigate sexual assault incidents that involve a member of the Army National Guard or the Army Reserve. While there are general guidelines for determining investigative authority, various circumstances may affect the extent to which such incidents are investigated. For example, Army National Guard and Army Reserve officials explained that the specific crimes enumerated under the category of “sexual assault” in the Uniform Code of Military Justice in some cases differ from what is classified as a sexual assault under state and local laws.
Further, the National Guard Bureau has reported that the military’s definition of sexual assault may be more stringent than state statutes, resulting in reports that may not be fully investigated by civilian law-enforcement organizations or in situations where civilian authorities have declined to prosecute. The National Guard Bureau has also reported that the lack of a unifying code of military justice applicable to all states is a particular challenge, because there can be considerable variance among the different state codes of military justice as well as the state criminal statutes that may be applicable to members of the Army National Guard. While these variations are consistent with existing law, Army National Guard and Army Reserve officials stated that this can send a mixed message to soldiers, because reserve-component members involved in a sexual assault incident may not receive the criminal investigation that their active-duty counterparts routinely receive for the same or a similar offense. To help address the instances when Guard members are not subject to Uniform Code of Military Justice jurisdiction, in 2012 the National Guard Bureau established its Office of Complex Administrative Investigations to conduct administrative investigations of sexual assault incidents involving Guard members whose cases civilian law-enforcement organizations had declined to investigate criminally. In 2014, OCI’s mission was refined to include the investigation of sexual assault cases that occur within the states but were not investigated by a military criminal investigative organization due to a lack of jurisdiction, or when it is determined that the civilian law enforcement agency with jurisdiction did not process a case sufficiently.
OCI officials stated that they use DOD’s definition of sexual assault to conduct their investigations and substantiate or unsubstantiate an allegation of sexual assault based on the evidence collected; they then refer the case to the subject’s commander for appropriate action. While OCI helps to fill a gap in investigating sexual assault cases involving National Guard members, timely investigations are a challenge that may affect the extent to which OCI is used to conduct investigations. According to National Guard guidance, OCI investigations should typically be completed in 3 weeks. However, OCI investigation data show that of the 79 investigations it conducted in fiscal year 2015, 57 percent (45 cases) took 6 to 9 months from the time a case was referred until the investigation was completed, and 39 percent (31 cases) took 3 to 6 months to complete. In contrast, we analyzed timeliness data on investigations conducted by CID—the Army’s military criminal investigative organization—that were recorded in DOD’s sexual assault incident database and found that, for sexual assault cases investigated in fiscal year 2015, 81 percent of the cases with Army National Guard victims (68 out of 84) and 48 percent of the cases with Army Reserve victims (45 out of 93) were completed within 3 months of receiving the request for an investigation. According to OCI and CID officials, a timely investigation is important because it becomes increasingly difficult to gather useful evidence as more time passes between the incident and the investigation. During one of our site visits, we met with the state’s Adjutant General, who called OCI’s investigation delays “unconscionable” and said that, as a result, the state prefers to work with civilian law-enforcement organizations instead. However, the Adjutant General acknowledged that civilian law-enforcement organizations may decline to investigate, in which case an OCI investigation is the only option.
Figure 3 shows more detailed information about OCI investigation time frames compared to CID investigation time frames for fiscal years 2013 through 2015. While OCI officials did not comment on the reasonableness of the typical length of an investigation, they did express concern with the timeliness of OCI’s investigations, and explained that the lengthy investigations are the result of OCI not having enough full-time personnel to meet the current demand for investigations. To help alleviate this issue, OCI officials told us that OCI received additional funding from the National Guard Bureau to increase the number of full-time trained investigators from 12 in fiscal year 2015 to 18 in fiscal year 2016. While the number of investigators has increased, OCI officials stated that they are only authorized to place investigators on short-term active-duty orders for 1 year, which has resulted in constant turnover and has further exacerbated delays because new personnel have to be trained each year. Army National Guard and OCI officials also stated that a 2014 manpower study conducted by the Army Manpower Analysis Agency validated OCI’s need for five Army civilian investigator positions, which would provide greater continuity in the office since those positions could be filled with personnel who could serve longer than the 1-year limit to which the military personnel have been subject. However, the officials added that funding was not approved to fill the civilian positions. Since the 2014 study, OCI’s caseload has increased. Specifically, referrals of cases to OCI for investigation have consistently increased since the office was established—starting with 3 referrals in fiscal year 2012, rising to 20 referrals in 2013 and 35 referrals in 2014, and more than doubling to 80 referrals in 2015. Based on the caseload increase since 2014, one senior OCI official estimated that 22 investigators are needed to help with the current backlog of cases.
However, the official added that future funding for OCI staffing is in question largely because the Department of the Army did not validate OCI’s requirement for any personnel in its most recent long-term budget plan, the Program Objective Memorandum for 2017–2021. To help agencies run efficient and effective operations, Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. Further, the standards state that management should periodically evaluate the organizational structure to ensure that it meets the entity’s objectives. In addition, the Department of the Army’s personnel regulation specifies that the Chief of the National Guard Bureau will establish the requisite staff positions and make resources available to adequately implement program requirements, among other things. However, the Army and the National Guard Bureau have not reassessed OCI’s resources and timeliness since 2014 to take into account OCI’s growing caseload and to determine how to improve the timeliness of sexual assault investigations in light of the increased number of requests for investigations conducted by OCI. Until the Army and the National Guard Bureau reassess OCI’s resources and timeliness to determine how to conduct sexual assault investigations more quickly and to identify the resources needed to improve the timeliness of investigations, Army National Guard victims may continue to experience lengthy investigations, and gathering usable evidence will become increasingly difficult.

The availability of medical and mental health services paid for or provided by DOD varies based on a National Guard or Reserve victim’s duty status at the time of an assault.
In addition, sexual assault victims serving in the Army National Guard or in the Army Reserve must go through a process to determine whether they are eligible for any follow-up or long-term medical and mental health care related to the assault that is provided by or paid for by DOD; the Army National Guard established an expedited process to make this determination, but the Army Reserve has not, which can delay a soldier’s access to services. Immediate emergency medical care is available at DOD or civilian health-care facilities—free of charge—to victims of sexual assault serving in the Army National Guard and the Army Reserve, regardless of duty status at the time of the assault. However, their eligibility for follow-up or long-term medical and mental health-care services that are paid for or provided by DOD varies based on the victim’s duty status at the time of the assault. Under DOD guidance, members of the reserve components, whether they file a restricted or unrestricted report, shall have access to medical treatment and counseling for injuries and illness incurred from a sexual assault inflicted upon a servicemember when serving in an “eligible duty status,” including active service and inactive duty training. For example, a member of the Army National Guard or the Army Reserve who is sexually assaulted while in an active-duty status, such as during the 2-week annual training period, would be eligible for treatment at a military medical treatment facility or for care that is paid for by DOD. For reserve-component members who are sexually assaulted while serving in an “ineligible duty status,” DOD guidance specifies that they may receive advocacy services from a SARC or VA employed by the department, appropriate non-medical referrals, and a forensic medical exam at no cost, in accordance with statutory requirements.
However, members who are assaulted while serving in an “ineligible duty status” are not eligible for medical or mental health-care services that are paid for or provided by DOD. For example, a member of the Army National Guard or the Army Reserve who is sexually assaulted while in a civilian status or an Army National Guard member who is assaulted while in a state active-duty status would not be eligible for follow-up or long-term treatment at a military medical treatment facility or for care that is paid for by DOD. Rather, officials with the Army National Guard and the Army Reserve told us that under these circumstances, a sexual assault victim would be referred to state and local community resources to receive care. In addition to duty status, there are other factors that may affect a reserve-component member’s ability to obtain medical or mental health-care services following a sexual assault, such as the availability of care in rural areas, the quality of care, and the affordability of care. Availability of care in rural areas: As noted above, state and local community medical and mental health resources are the primary treatment options for reserve-component members who are sexually assaulted while serving in an ineligible duty status, and are also options for care paid for by DOD if a reserve-component member was assaulted in an eligible duty status. However, the availability of such resources for sexual assault victims can vary depending on where the victim lives. Officials from the Army National Guard, the Army Reserve, and SAPRO told us that some reserve-component members who live in rural or remote areas may have difficulty finding available resources. For example, an Army National Guard SARC told us that soldiers who live in very rural areas of the state might have a 3- to 4-hour drive to reach a medical facility that can conduct a sexual assault forensic examination.
In our survey of full-time SARCs and VAs in the Army National Guard and the Army Reserve, respondents reported varying degrees of challenges in finding geographically accessible medical and mental health care for sexual assault victims in their state or command. Specifically, 10 out of 66 Army National Guard respondents (15 percent) and 10 out of 26 Army Reserve respondents (39 percent) reported that it was extremely or moderately challenging to find geographically accessible medical care. Further, 13 out of 66 Army National Guard respondents (20 percent) and 13 out of 26 Army Reserve respondents (50 percent) reported that it was extremely or moderately challenging to find geographically accessible mental health care. In March 2016, we identified similar challenges in our report on the availability of certified sexual assault forensic examiners—noting that there were few or in some cases no examiners available in rural areas, as well as a limited availability of examiners in some urban areas. Quality of care: Some Army National Guard and Army Reserve survey respondents highlighted concerns with the quality of medical and mental health care services available to sexual assault victims in their state or command. Specifically, 11 out of 66 Army National Guard respondents (17 percent) and 2 out of 27 Army Reserve respondents (7 percent) reported that they were aware of victim complaints about the quality of medical care; and 12 out of 66 Army National Guard respondents (18 percent) and 3 out of 26 Army Reserve respondents (11 percent) were aware of victim complaints about the quality of mental health care. SAPRO officials told us that they have explored trying to address the availability and quality of care by using telemedicine, where a care provider communicates electronically with the victim through a computer or tablet, which would expand the choice of potential providers. 
However, they said that state licensure laws prohibit DOD from delivering telemedicine to a servicemember when he or she is in a civilian status, or is not eligible for care in a federal facility. Affordability of care: As previously noted, DOD does not pay for or provide medical or mental health care to reserve-component soldiers who are sexually assaulted while serving in an ineligible duty status, which Army National Guard and Army Reserve officials stated raises concerns about the affordability of care for these soldiers, particularly for those who do not have health insurance. To help address concerns about the affordability of care, Army National Guard and Army Reserve officials developed a proposal for a pilot program through the Army Family Action Plan process that would provide its members with vouchers for treatment for duty-limiting mental health conditions regardless of duty status. According to Army National Guard officials, these vouchers could potentially be used for counseling related to a sexual assault, as well as other issues such as suicide prevention or substance abuse. The officials said that in April 2016, the Army Family Action Plan tabled this proposal until fall 2016, due in part to concerns that it could conflict with insurance requirements under the Affordable Care Act. However, Army National Guard and Army Reserve officials explained that even if a reserve member has health insurance, the out-of-pocket or copayment expenses can still be significant enough to inhibit a sexual assault victim from getting care. Army National Guard officials and members of the Joint Psychological Staff met in July 2016 with leaders of the Psychological Health Multi-Disciplinary Working Group to discuss issues such as funding for the pilot program and voucher eligibility. The group decided to present the pilot proposal at the October 2016 Reserve Psychological Health Council Meeting to further develop support from other military services. 
They said that they also plan to pursue the pilot through a federal legislative change proposal for fiscal year 2019. The Army’s process for determining a reserve-component soldier’s eligibility for follow-up or long-term medical and mental health care can negatively affect Army Reserve soldiers’ access to care because the process in the Army Reserve is lengthy, and currently does not enable Army Reserve victims to receive medical care if they choose to keep a sexual assault incident confidential. The Army National Guard has established an expedited process to determine whether sexual assault victims are eligible for medical and mental health care that is provided by or paid for by DOD, but the Army Reserve has not, potentially resulting in a delay to a soldier’s access to services. Specifically, DOD’s instruction specifies that National Guard and Army Reserve personnel who are assaulted while serving in an eligible duty status, such as during inactive duty training, are eligible for medical and mental health services that are either provided or funded by DOD. However, before DOD will pay for or provide any follow-up or long-term medical and mental health care for sexual assault victims serving in the Army National Guard or the Army Reserve, DOD requires that a line-of-duty determination be made to establish whether the incident was service-connected (i.e., occurred in the line of duty). According to Army Reserve officials, the line-of-duty determination process can be lengthy and consequently can delay a soldier’s access to certain long-term mental and medical health-care services. DOD’s sexual assault prevention and response instruction requires the commander of the Army Reserve Command and the Director of the Army National Guard to designate individuals to process line-of-duty determinations for victims of sexual assault. 
The instruction also provides that line-of-duty determination requests for sexual assault cases that meet certain criteria must be decided within 30 days from the date of the request. To meet this requirement, given the lengthy nature of the determination process, the National Guard established an expedited line-of-duty investigation process for sexual assault victims, which Army National Guard officials said generally enables them to make a determination within 72 hours of when a request is made. Army National Guard officials explained that for this process, the Army National Guard developed an automated, secure web-based system that is accessible to the Joint Force Headquarters SARC. The SARC inputs all line-of-duty determination data into this system, and the determination is then reviewed and approved as appropriate by the Army National Guard SHARP Office. However, the Army Reserve does not have an expedited version of the line-of-duty determination process. Army Reserve officials told us that they have no documented average or baseline time frame for getting a line-of-duty determination approved, and that the time frame varies based on the commander, the local facilities, and the needs of the victim, among other things. They explained that the determinations depend heavily on the commander’s knowledge of the process, but acknowledged that the Army Reserve SHARP program office should do a better job of ensuring that these requests are expedited. In July 2015, we reported on the time it can take to complete the determination process—noting that more than three-fourths of all Army Reserve line-of-duty investigations, including those not related to sexual assault incidents, were overdue. We found that for the Army Reserve, 82 percent of formal investigations took longer than the required 75 days, and 80 percent of informal investigations took longer than the required 40 days.
In our survey, 15 out of 27 Army Reserve respondents (56 percent) indicated that completing the line-of-duty process was “extremely or moderately challenging,” and 11 out of 27 Army Reserve respondents (41 percent) reported that ensuring victims get care while waiting for a line-of-duty determination was “extremely or moderately challenging.” Figure 4 further details survey responses from Army Reserve SARCs and VAs on challenges associated with the line-of-duty determination process. During a site visit to an Army Reserve command, we met with Army medical personnel who told us that because of the lengthy line-of-duty process, it was possible that a victim in the Army Reserve who received care at a DOD facility may be billed for his or her care before a determination of duty status is made. Reserve officials also stated that the primary issue associated with the line-of-duty process is that medical costs for victims are not paid up front by DOD. As a result, victims will either have to pay out of pocket, use their civilian health insurance if they have any, or let the bills go into collection while they are waiting for the line-of-duty determination to be approved. As of September 2016, Army Reserve officials said that they plan to include an expedited process for line-of-duty determinations for Army Reserve sexual assault victims in the Army Reserve chapter of the new Army SHARP regulation that is currently being drafted. However, Army Reserve officials did not elaborate on the details of the planned process or provide any documentation about how this process would be implemented for the Army Reserve. In addition, they told us that they continue to coordinate with Human Resources Command, the Department of the Army SHARP office, and the Department of the Army G1 medical policy office to consider methods to process line-of-duty requests for Army Reserve victims that would allow the same access to care and benefits that an active component victim receives.
Without an expedited line-of-duty determination process in the Army Reserve that provides for more timely decisions, along with a method for tracking the length of time to make the determinations so that officials have visibility over the extent to which they are meeting the required time frames, sexual assault victims in the Army Reserve may continue to have to pay for their care up front, even if an assault occurred during an eligible duty status, or else face delayed access to care provided or paid for by DOD. In addition to challenges posed by the length of the determination process, Army Reserve victims who choose to keep the incident confidential by making a restricted report have not been able to receive medical or mental health care provided or paid for by DOD. The National Defense Authorization Act for Fiscal Year 2012 provides that for restricted reports, a member of the armed forces who is a victim of sexual assault may elect to confidentially disclose the details of the assault and to receive medical treatment, among other services specified in the law. In addition, DOD guidance states that line-of-duty determinations may be made without identifying the victim to the command or DOD law-enforcement organizations to enable the victim to access medical care and psychological counseling. However, Army Reserve officials explained that if an Army Reserve victim wants to file a restricted report but also wants to receive medical care covered by DOD, his or her command would need some knowledge of the case to approve the line-of-duty determination. This is because the Army’s regulation on line-of-duty determinations requires that a formal investigation be conducted by a commissioned or warrant officer who is senior in grade to the soldier being investigated, and that an informal investigation be conducted by the unit commander.
Further, the Army regulation provides that a general or special court-martial convening authority for the soldier is still the final approving authority for either a formal or informal line-of-duty determination. In April 2016, DOD issued an updated version of its line-of-duty instruction, which states that line-of-duty determinations for restricted reporting of sexual assault cases require modified procedures in accordance with DOD Instruction 6495.02. The Department of the Army’s regulation has not been updated to align with DOD’s revised instruction; however, Army Reserve officials said that they were provided with new guidance in September 2016 that allowed limited health-care benefits to be provided to reserve-component victims with a restricted line-of-duty determination. However, Army Reserve officials could not provide a copy of this guidance, and did not elaborate on how these benefits would be provided to Reserve members or what the restricted benefits would include. Without a modified line-of-duty determination process that enables soldiers to both file a confidential or restricted report and receive medical or mental health care paid for or provided by DOD if an assault occurred while they were in an eligible duty status, sexual assault victims in the Army Reserve may continue to have to pay for their care if they choose to file a confidential or restricted report, contrary to the provisions in the 2012 Act.

The Army National Guard and the Army Reserve have implemented sexual assault prevention and response programs, but challenges with staffing, budget management, investigation timeliness, and eligibility determinations for care provided or paid for by DOD may hinder program implementation over the long term if they are not addressed.
Specifically, the Army has not evaluated the use of program staff by its active and reserve components, thus limiting its ability to discern, for example, how workload disparities affect responsiveness to victims and its capacity to address such issues within current resource levels. Further, Army National Guard and Army Reserve staff will not be able to plan for and use program funds without the necessary budget guidance, and Army leadership will not be able to effectively oversee and account for program funds without greater visibility of program expenditures at the state and command level. Finally, the length of OCI investigations in the Army National Guard and care eligibility determinations in the Army Reserve may unnecessarily limit a reserve-component member’s access to the full range of services generally available to victims of sexual assault in the military.

We recommend that the Secretary of Defense take the following six actions.

To help ensure that program staff are being used in an effective and efficient manner, and to facilitate the consideration and identification of total force solutions for staffing sexual assault prevention and response and SHARP programs throughout the Department of the Army, direct the Secretary of the Army, in coordination with the Chiefs of the National Guard Bureau and the Army Reserve, to conduct an evaluation of staffing approaches used to administer the sexual assault prevention and response program, and consider opportunities to leverage resources across all Army components. This evaluation should include an assessment of the number and allocation of full-time and collateral-duty personnel, the fill rates for program positions, and the types of positions used.
To help ensure that Army National Guard and Army Reserve program staff have the necessary information to develop their budgets and to help ensure the efficient and effective use of program funds, direct the Secretary of the Army to (1) direct the Army National Guard SHARP Program Office to communicate and disseminate its guidance on budget development and execution for the SHARP program to all full-time SHARP program personnel; (2) direct the Army Reserve SHARP Program Office to develop clear guidance on budget development and execution for the SHARP program and disseminate this guidance to its full-time SHARP program personnel; and (3) direct the Director of the Army SHARP Program Office to expand the scope of the midyear review to include monitoring and providing oversight of SHARP program expenditures at the Army National Guard state and Army Reserve command level.

To help ensure that sexual assault crimes involving Army National Guard members are investigated in a timely manner, with a full investigation of the offense regardless of the reserve component or duty status of the victim, direct the Chief of the National Guard Bureau, in collaboration with the secretaries of the military departments as appropriate, to reassess the Office of Complex Administrative Investigations’ (OCI) timeliness and resources to determine how to improve the timeliness of processing sexual assault investigations involving members of the Army National Guard, and identify the resources needed to improve the timeliness of these investigations.
To help ensure that victims of sexual assault in the Army Reserve have timely access to medical and mental health-care services without having to pay for their care up front, if they are eligible for care paid for or provided by DOD, direct the Secretary of the Army to direct the Chief of the Army Reserve to develop and implement an expedited line-of-duty determination process for Army Reserve sexual assault victims, along with a method for tracking the length of time to make the determinations. When developing this process, the Chief should ensure that it allows soldiers who wish to file a confidential or restricted report to go through the determination process without disclosing their circumstances to the chain of command.

We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with three recommendations, partially concurred with two recommendations, and did not concur with one recommendation. DOD also provided technical comments, which we incorporated as appropriate. DOD’s comments are summarized below and reprinted in their entirety in appendix V. DOD concurred with our recommendation to conduct an evaluation of staffing approaches used to administer the sexual assault prevention and response program. In addition, DOD responded to our three budget-related recommendations as a group, concurring with two and partially concurring with one. Specifically, DOD stated that it agreed with our recommendation for the Army National Guard SHARP Program Office to communicate and disseminate its guidance on budget development and execution for the SHARP program to all full-time SHARP program personnel. DOD also agreed with our recommendation that the Army Reserve SHARP Program Office develop clear guidance on budget development and execution for the SHARP program and disseminate this guidance to its full-time SHARP program personnel.
However, DOD partially concurred with our recommendation for the Army SHARP Program Office to expand the scope of its midyear review to include monitoring and providing oversight of SHARP program expenditures at the Army National Guard state and Army Reserve command level. Specifically, DOD agreed that the Army SHARP Program Office can provide additional oversight of expenditures through the addition of compliance inspections in the SHARP Organization Inspection Plan, but disagreed that it be done by expanding its midyear review—stating that such a change seemed excessive and would indicate a lack of trust in the ability of its organizations to manage and properly execute their resources. Instead, DOD stated that the Army Headquarters SHARP Program Office recommends that program managers in the Army National Guard and the Army Reserve continue to monitor individual transactions at the command level. We disagree that further monitoring would be excessive or that it would indicate a lack of trust in the components’ ability to manage and execute their resources; instead, we see this as a step that will enable the Army SHARP program office to fully execute the program management functions that it has been assigned. For example, our report credits the Army SHARP program office with overseeing the general execution of program funds by the Army National Guard and the Army Reserve. However, our report also notes that this level of monitoring does not constitute the type of control activity that is necessary to help reduce the risk of errors, fraud, and misuse. Additionally, this level of monitoring does not help to ensure that financial resources are committed to valid and appropriate efforts in support of the SHARP program. As such, we continue to believe that our recommendation for the Army SHARP program office to expand the scope of its midyear review is valid.
Furthermore, DOD did not concur with our recommendation for the Secretary of the Army, in collaboration with the Chief of the National Guard Bureau, to reassess the Office of Complex Administrative Investigations’ (OCI) timeliness and resources to determine how to improve the timeliness of processing sexual assault investigations involving members of the Army National Guard, and identify the resources needed to improve the timeliness of these investigations. In its written comments, DOD stated that OCI is a National Guard Bureau organization and that the administrative investigations it conducts are outside the limited scope of authority the Secretary of the Army may exercise over the Army National Guard. As such, DOD suggested that the recommendation be redirected to have the Secretary of Defense direct the Chief of the National Guard Bureau to perform this task in collaboration, as necessary, with the Secretary of the Army and the Secretary of the Air Force. Further, DOD stated that the Chief of the National Guard Bureau is prepared to direct the National Guard Bureau Joint Staff, the Army National Guard, and the Air National Guard to analyze OCI’s current caseload and requirements, coordinate with the Department of the Army to formally document the OCI civilian and military staffing requirements needed to conduct investigations in a timely manner, and recommend procedures to make OCI a program of record with appropriate funding and personnel levels. We agree with DOD’s suggestion to redirect the recommendation to the Chief of the National Guard Bureau, and we have incorporated this change in our report, as appropriate. Furthermore, we are encouraged by the actions that DOD stated the Chief of the National Guard Bureau is prepared to take, and believe that, if implemented, they would meet the intent of our recommendation.
Finally, DOD partially concurred with our recommendation to develop and implement an expedited line-of-duty determination process for Army Reserve sexual assault victims, along with a method for tracking the length of time to make the determinations. DOD stated that it agrees the Army Reserve should develop and implement an expedited line-of-duty determination process, but added that doing so would not correct or mitigate the challenges of funding behavioral health care for Army Reserve soldiers, particularly those who require coverage for trauma experienced in a non-duty/non-paid status. DOD further stated that in response to this issue, a recommendation has been forwarded to the Secretary of Defense to consider directing a study into the feasibility of funding behavioral health care services for servicemembers who experience sexual assault while in a non-duty status. We recognize that an expedited line-of-duty determination process will not address challenges that reserve soldiers may encounter if an assault occurred in a non-duty status. We are encouraged by this additional action, and believe that, along with implementing an expedited line-of-duty determination process, additional efforts to try to overcome the impediments to health care for reserve members who are sexually assaulted while not in an eligible duty status could have a positive effect on the readiness of the force. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, the Chief of the National Guard Bureau, and the Chief of the Army Reserve. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix VI.

To obtain perspectives on issues regarding the Army’s sexual assault prevention and response (SAPR) program, we conducted a web-based survey of all full-time sexual assault response coordinators (SARC) and victim advocates (VA) in the Army Reserve (see app. III for the full scope and methodology for the survey). Below are the questions from the survey and the results for the closed-ended questions. The responses to the open-ended survey questions are not reprinted to help preserve the confidentiality of the respondents. Of the 46 Army Reserve full-time SARCs and VAs who received the survey, 27 completed the survey, for a response rate of 59 percent. Some survey questions were not answered by all respondents; those instances are noted below for the applicable survey questions.

[Response options: Dual status military technician; Non-dual status military technician; Active Guard or Reserve (AGR); Active Duty for Operational Support (ADOS); Other (please specify). The table showing the number of respondents reporting each frequency of sexual assault reports is not reproduced.]

SECTION B: SARC and VA Responsibilities

7. When do you accept calls about sexual assault incidents? (Check all that apply.)

Individual responses are not included in order to preserve confidentiality of respondents.

13. In what way(s), if any, would being a military technician affect the performance of SARC or VA responsibilities?

Individual responses are not included in order to preserve confidentiality of respondents.

14. In addition to your work with sexual assault victims, do you think you spend too much, about the right amount, or too little time on the following SARC or VA activities? (Check one response on each row.)
[Activities listed for question 14: Providing annual sexual assault unit refresher training; Providing oversight of or assistance to collateral duty SARCs or VAs; Attending sexual assault related training to maintain credentials; Entering information into DSAID; Other SHARP program administrative management tasks; Working with SHARP and SAPR counterparts in other commands; Working with civilian and community-based sexual assault victim assistance organizations; Traveling time within your command to perform any of your SARC/VA duties; Visiting units within your command]

SECTION C: Program Management and Collaboration

15. Do you record case information that is typically entered in DSAID in any additional formal or informal system (e.g., Excel spreadsheet, etc.) other than DSAID?

Individual responses are not included in order to preserve confidentiality of respondents.

16. Does your command have too many, the right amount, or too few full-time SARCs or VAs for your current workload? (Check one response on each row.)

Individual responses are not included in order to preserve confidentiality of respondents.

18. Do you collaborate with the SAPR personnel outside of your command or outside the Army Reserve?

Individual responses are not included in order to preserve confidentiality of respondents.

SECTION D: Reserve Component Service

19. Do you attend inactive duty training (IDT) while serving in the Army Reserve (or other Reserve or Guard service)?

[Response options: I continue to perform SARC or VA responsibilities; Another guard or reservist performs SARC or VA responsibilities; An active duty service member performs SARC or VA responsibilities; A collateral duty SARC or VAC performs SARC or VA responsibilities; Victims are redirected to DOD Safe Helpline; Other (please specify below)]

"try and send to higher command SARC as I have no full time VA at this time."

See responses under Question 21.
[Challenges listed: Identifying personnel who are willing to take on SARC or VA responsibilities as a collateral duty; Having commanders complete the paperwork and conduct interviews; Completing background suitability screening process; Communicating with Reserve members regarding appointment process or continuing education training status; Completing 80-hour training course; Meeting continuing education training requirements; Meeting the timeframes required for quarterly DOD credentialing boards; Funding for orders and per diem for 80-hour SHARP training courses; Other (please specify below)]

What are the other challenges?

Individual responses are not included in order to preserve confidentiality of respondents.

23. Do you have too many, the right amount, or too few collateral duty SARCs or VAs for your current workload?

Individual responses are not included in order to preserve confidentiality of respondents.

28. Do you and/or units in your command maintain a community resource list of providers (e.g., local rape crisis centers, hospitals and other medical facilities, law enforcement, mental health resources, etc.) for your command's region or area of responsibility?

Individual responses are not included in order to preserve confidentiality of respondents.

32. In your experience, how many sexual assault victims in your command ever had difficulty obtaining a SAFE exam?

Individual responses are not included in order to preserve confidentiality of respondents.

34. In general, how much of the Department of the Army training content do you use in the annual refresher SHARP training given to units in your command?
[Response options: I only use the training content provided by the Army; I use some of the Army training content, but supplement the content with some Reserve-specific information or other content; I do not use the Army training content and instead develop my own Reserve-specific training content; Other (please specify below)]

Individual responses are not included in order to preserve confidentiality of respondents.

35. How much of a challenge are the following aspects of the annual unit refresher SHARP training for the units in your command?

36. What suggestions, if any, do you have to improve the annual unit refresher training for Reserve members in your command?

Individual responses are not included in order to preserve confidentiality of respondents.

SECTION H: Challenges Related to Sexual Assault in the Army Reserve

37. How much of a challenge is it for you and your collateral duty personnel to find geographically accessible medical and mental health care for sexual assault victims in your command's region or area of responsibility?

39. IF YES TO QUESTION 38, what was the nature of the complaints you have heard about the quality of medical and mental health care for sexual assault victims in your command's region or area of responsibility?

Individual responses are not included in order to preserve confidentiality of respondents.

40. How much of a challenge are the following aspects of assisting a sexual assault victim?

41. In your experience, has combining sexual assault and sexual harassment into one Army SHARP program resulted in confusion for soldiers?

Individual responses are not included in order to preserve confidentiality of respondents.

43. How likely, if at all, is local or state law enforcement to do the following?

44. How likely, if at all, is the Army Criminal Investigation Command (CID) to do the following?

Individual responses are not included in order to preserve confidentiality of respondents.

51.
Please provide any final thoughts you have on ways to improve the efficiency and effectiveness of the SHARP program in the Army Reserve, or on how to better address the problem of sexual assault in the Army Reserve, or in the Army or DOD overall.

Individual responses are not included in order to preserve confidentiality of respondents.

To obtain perspectives on issues regarding the Army’s sexual assault prevention and response (SAPR) program, we conducted a web-based survey of all full-time sexual assault response coordinators (SARC) and victim advocate coordinators (VAC) in the Army National Guard (see app. III for the full scope and methodology for the survey). Below are the questions from the survey and the results for the closed-ended questions. The responses to the open-ended survey questions are not reprinted to help preserve the confidentiality of the respondents. Of the 92 Army National Guard full-time SARCs and VACs who received the survey, 68 completed the survey, for a response rate of 74 percent. Some survey questions were not answered by all respondents; those instances are noted below for the applicable survey questions.

[Response options: Dual status military technician; Non-dual status military technician; Active Guard or Reserve (AGR); Active Duty for Operational Support (ADOS); Other (please specify)]

Individual responses are not included in order to preserve confidentiality of respondents.

13. In what way(s), if any, would being a military technician affect the performance of SARC or VA responsibilities?

Individual responses are not included in order to preserve confidentiality of respondents.

14. In addition to your work with sexual assault victims, do you think you spend too much, about the right amount, or too little time on the following SARC or VAC activities? (Check one response on each row.)

Individual responses are not included in order to preserve confidentiality of respondents.

16.
Does your command have too many, the right amount, or too few full-time JFHQ SARCs or VACs for your current workload? (Check one response on each row.)

No respondents answered “no” to question 17 or provided a response here.

18. Do you ever participate in the sexual assault prevention and response advisory council (SAPRAC) with other SARCs and VACs?

"I am the regional rep for my region, I work closely with SAPRAC on a weekly basis."

[Response options: I continue to perform SARC or VAC responsibilities; Another guard or reservist performs SARC or VAC responsibilities; An active duty service member performs SARC or VAC responsibilities; A collateral duty SARC or VAC performs SARC or VAC responsibilities; Victims are redirected to DOD Safe Helpline; Other (please specify below)]

"The SARC is on duty while I am IDT."

"I continue to perform my SARC responsibilities, however, I will appoint the closest VA to provide assistance."

"There is an mday SARC, but if I'm at drill and she is not in IDT status, I still respond and handle the issue."

See responses under Question 21.

[Challenges listed: Identifying personnel who are willing to take on SARC or VA responsibilities as a collateral duty; Having commanders complete the paperwork and conduct interviews; Completing background suitability screening process; Communicating with Reserve members regarding appointment process or continuing education training status; Completing 80-hour training course; Meeting continuing education training requirements; Meeting the timeframes required for quarterly DOD credentialing boards; Funding for orders and per diem for 80-hour SHARP training courses; Other (please specify below)]

What are the other challenges?

23. Do you have too many, the right amount, or too few collateral duty SARCs or VAs for your current workload?

Individual responses are not included in order to preserve confidentiality of respondents.

29.
Do you and/or units in your state maintain a community resource list of providers (e.g., local rape crisis centers, hospitals and other medical facilities, law enforcement, mental health resources, etc.) for your state?

29a. How are the providers included in the community resource list identified and updated?

Individual responses are not included in order to preserve confidentiality of respondents.

33. In your experience, how many sexual assault victims in your state ever had difficulty obtaining a SAFE exam?

[Number of respondents: 2; 4; 7; 25; 12; 15 (response categories not reproduced)]

Individual responses are not included in order to preserve confidentiality of respondents.

35. In general, how much of the Department of the Army training content do you use in the annual refresher SHARP training given to units in your state?

[Response options: I only use the training content provided by the Army; I use some of the Army training content, but supplement the content with some Guard-specific information or other content; I do not use the Army training content and instead develop my own Guard-specific training content; Other (please specify below)]

Individual responses are not included in order to preserve confidentiality of respondents.

36. How much of a challenge are the following aspects of the annual unit refresher SHARP training for the units in your state?

37. What suggestions, if any, do you have to improve the annual unit refresher training for Guard members in your state?

Individual responses are not included in order to preserve confidentiality of respondents.

SECTION H: Challenges Related to Sexual Assault in the Army National Guard

38. How much of a challenge is it for you and your collateral duty personnel to find geographically accessible medical and mental health care for sexual assault victims in your state?

40. IF YES TO QUESTION 39, what was the nature of the complaints you have heard about the quality of medical and mental health care for sexual assault victims in your state?
Individual responses are not included in order to preserve confidentiality of respondents.

41. How much of a challenge are the following aspects of assisting a sexual assault victim?

Individual responses are not included in order to preserve confidentiality of respondents.

44. How likely, if at all, is local or state law enforcement to do the following?

Individual responses are not included in order to preserve confidentiality of respondents.

52. Please provide any final thoughts you have on ways to improve the efficiency and effectiveness of the SHARP program in the National Guard, or on how to better address the problem of sexual assault in the National Guard, or in the Army or DOD overall.

Individual responses are not included in order to preserve confidentiality of respondents.

To assess the extent to which the Army National Guard and the Army Reserve face implementation challenges in their programs to prevent and respond to sexual assault involving their members (objective 1), we reviewed the Department of Defense’s (DOD), the Department of the Army’s, and the Army National Guard’s sexual assault prevention and response guidance. We also interviewed headquarters-level officials with the Department of the Army, the Army National Guard, and the U.S. Army Reserve, as well as officials from DOD’s Sexual Assault Prevention and Response Office (SAPRO), and asked about areas where they had identified or experienced implementation challenges. In reviewing the guidance and in our discussions with officials, we identified challenges related to department- and service-level program responsibilities pertaining to the assignment of program staff, budget development and execution, and investigations. We analyzed the guidance to assess the extent to which responsibilities for program development and implementation in the Army’s reserve components have been carried out in these areas.
In our interviews with officials, we also discussed the applicability of and efforts to implement this guidance in the Army’s reserve components, and whether the unique nature of reserve-component service poses any challenges to efforts to prevent and respond to sexual assault. We compared the testimonial evidence obtained during these interviews with relevant provisions in the guidance and documents obtained to assess whether any of these provisions, or the lack thereof, were contributing factors in the challenges identified. In addition, we interviewed officials during site visits to four selected installations—two for the U.S. Army Reserve and two for the Army National Guard—on the implementation of sexual assault prevention and response programs in their respective components. We selected the locations for our site visits based on a variety of factors, such as installations having a higher number of reported sexual assaults per total number of soldiers, as well as to include installations of varying size and geographic region. In addition, for the Army National Guard, we considered the number of reported sexual assault incidents that were referred to the Office of Complex Administrative Investigations. During these visits, we met with sexual assault response coordinators (SARC), victim advocates (VA), staff judge advocates, chaplains, medical and mental health personnel, commanders, and non-commissioned officers. At two of the site visit locations, we also met with special victims’ counsel and investigators located at these sites. While we did not employ a methodology that would allow us to generalize to the four installations as a whole, much less to all Army reserve component installations, the information we gathered at these four installations enabled us to obtain the perspectives of a sample of commanders, servicemembers, and other officials who implement and provide services or support to sexual assault victims.
In our discussions with noncommissioned officers at the site visits, we used a standard set of questions and asked to meet with a group of 3–10 full-time personnel, rather than putting part-time reservists on orders to come in and meet with us. We asked to meet with soldiers who worked in a mix of combat-arms and combat-support occupations, and in ranks that would be closest to or would have the most interaction with enlisted soldiers. By interviewing a small group of noncommissioned officers at each installation we visited, we were able to obtain a non-generalizable sample of servicemember perspectives on the Army’s sexual assault prevention and response, or SHARP, program and its response to sexual assaults. In addition to our site visits, we also interviewed officials in SAPRO and the Army Medical Command to obtain a more comprehensive understanding of the Army’s efforts to implement its sexual assault prevention and response program in its reserve components. To assess the assignment of SHARP program staff, we requested and obtained data from the Army’s reserve components on the number, geographical dispersion, and types of personnel used to staff key program positions; the authorized end strength, or total number of soldiers, for each state in the Army National Guard and for each Army Reserve major command; and the locations of all subordinate units for each Army Reserve major command. For the Army Reserve, we received a data file with data as of April 2016 that we tabulated using SAS in order to compile the summary-level information for the categories of interest, whereas for the Army National Guard we received the information as of May 2016 already tabulated for our tables.
For both sources of data, we assessed the reliability of the information for our reporting objectives by (1) reviewing the data for accuracy and completeness, (2) reviewing available documentation about the data collection and management, and (3) collecting information from knowledgeable agency officials during interviews and by having them complete a questionnaire about the data. We also compared the collection and use of data with relevant DOD guidance and with the Standards for Internal Control in the Federal Government, specifically the importance of using appropriate, accurate, complete, and accessible information to help management make informed decisions. We determined that the data provided by the Army Reserve Command and the Army National Guard were sufficiently reliable for reporting on the number and types of SHARP program staff, and the number and geographical dispersion of soldiers served by assigned SHARP program staff. We also compared these results with relevant DOD and Army guidance to assess the extent to which responsibilities for establishing staff positions needed to adequately implement program requirements have been met. In addition, we compared these efforts with the standards for internal control about the importance of establishing an organizational structure and assigning responsibilities that enable an agency or department to operate in an efficient and effective manner and to achieve its objectives. In addition, we reviewed relevant documents related to budget development and oversight, and identified any corresponding actions taken and compared them with standards for internal control about the importance of communicating quality information to make informed decisions, especially as it relates to the prioritization of and accountability for funds, and providing accountability for resources and ensuring that only valid transactions to use or commit resources are initiated or entered into. 
Regarding the length of investigations, we obtained data from the Defense Sexual Assault Incident Database (DSAID) from the Army and from the National Guard Office of Complex Administrative Investigations on the length of time it took to investigate sexual assault incidents involving Army reserve-component victims from fiscal years 2012 through 2015, which is the period that was available in the DSAID database at the time of our review. For cases reported to Department of the Army personnel, we received a data file that included the incident report date and the date of conclusion of the investigation for sexual assault incidents between 2012 and 2015, which enabled us to calculate the length of time it took to investigate sexual assault incidents and tabulate this information for our tables. For the National Guard, we received the information already tabulated for our tables. For both sources of data, we assessed the reliability of the information for our reporting objectives by (1) reviewing the data for accuracy and completeness, (2) reviewing available documentation about the data collection and management, and (3) collecting information from knowledgeable agency officials during interviews and by having them complete a questionnaire about the data. We also compared the collection and use of data with relevant DOD guidance and with standards for internal control in the federal government about the importance of using appropriate, accurate, complete, and accessible information to help management make informed decisions. We determined that the data provided by the Army and the Army National Guard were sufficiently reliable for describing the length of time it takes to investigate sexual assault incidents involving Army reserve-component victims.
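The elapsed-time calculation described above was performed in SAS on the actual data file; a minimal Python sketch of the same computation is below. The case IDs, field names, and dates are hypothetical stand-ins, not values from the DSAID extract.

```python
import csv
from datetime import date
from io import StringIO

# Hypothetical extract of the data file: one row per investigated incident,
# with the incident report date and the investigation's conclusion date.
raw = StringIO("""case_id,report_date,conclusion_date
A1,2013-02-01,2013-07-15
A2,2014-10-20,2015-03-02
""")

def days_to_complete(report_date: str, conclusion_date: str) -> int:
    """Length of the investigation in calendar days, from report to conclusion."""
    return (date.fromisoformat(conclusion_date) - date.fromisoformat(report_date)).days

durations = [days_to_complete(row["report_date"], row["conclusion_date"])
             for row in csv.DictReader(raw)]
print(durations)  # prints [164, 133]
```

From the per-case durations, summary figures such as the median or the share of investigations exceeding a given number of days can then be tabulated for reporting.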
However, a limitation of these data is that they only include cases with an Army Reserve or Army National Guard victim that were reported to an Army or National Guard SARC; cases that were reported to or handled by a SARC from another military service are not included in these data. To better understand and more comprehensively represent the experiences and perspectives of key program personnel across the Army’s reserve components, we administered and analyzed the results of two web-based surveys, one for each component, that solicited the perspectives on program guidance and implementation, among other things, from all full-time SARCs and VAs identified within the Army’s reserve components. We requested and were provided contact information for all full-time SARCs and VAs/VACs in the Army Reserve and Army National Guard from the components’ respective SHARP program offices. These lists included 52 Army Reserve full-time SARCs and VAs and 98 Army National Guard full-time SARCs and VACs. To develop our survey questions, we sought input from knowledgeable officials and reviewed relevant reports to identify themes and issues affecting sexual assault prevention and response efforts in the Army reserve components. Specifically, we interviewed officials from the SHARP program offices of the Army Headquarters, Army Reserve, and Army National Guard, and from the Army Medical Command for input on our survey development. We also reviewed DOD and Army reports and other research related to sexual assault and we reviewed our prior work related to DOD’s sexual assault prevention and response program. We also worked with GAO social-science survey specialists to develop our survey questionnaires, applying generally accepted survey design standards. 
Based on our review of information and consultation with knowledgeable officials during the development of the survey, we determined that slightly different variations of the survey were needed for the Army Reserve and the Army National Guard in order to tailor questions or response options to the specific components. We took steps in the development of the questionnaires, the data collection, and the data analysis to minimize any errors associated with conducting surveys, such as differences in how questions are interpreted, variations in respondents’ ability, knowledge, or awareness for answering a specific question, or how responses are entered in the survey form. In addition to seeking input from officials in the Army Reserve and Army National Guard SHARP program offices on our survey questions, we also pretested the content and format of the questionnaire. This pretesting helped us to determine whether (1) the survey questions and response options were clear and unbiased, (2) the terms used were accurate and precise, (3) respondents were able to provide the information we were seeking, and (4) the questions and response options were comprehensive. We chose the pretest subjects to include three SARCs from the Army Reserve and two Army National Guard SARCs. We conducted two pretests in person and three over the telephone. We made changes to the content and format of our final questionnaire after our discussion with the program office personnel, as well as after each of the first four pretests, based on the feedback we received. See appendix I for the Army Reserve survey questions and response tabulations for each closed question, and appendix II for the Army National Guard survey questions and response tabulations. We administered the questionnaire through a web-based application on a secure GAO server. First, we sent an e-mail announcement of the survey to 52 Army Reserve full-time SARCs and VAs and 98 Army National Guard full-time SARCs and VACs.
Each SARC and VA was given a unique password and username for completing the survey online. We sent up to three follow-up e-mail messages to those who had not yet responded, followed by an additional telephone outreach attempt for the remaining nonrespondents. The questionnaire was available online for approximately 6 weeks. Although our original survey was distributed to the 52 Army Reserve individuals and 98 Army National Guard individuals, we subsequently excluded from our recipient list 6 individuals from the Army Reserve and 6 individuals from the Army National Guard, because those individuals were unavailable during the period of survey administration for reasons such as deployment, maternity leave, extended sick leave, or no longer serving in the SARC or VA position. Of the remaining 46 Army Reserve full-time SARCs and VAs and 92 Army National Guard full-time SARCs and VAs, 27 Army Reserve SARCs and VAs, and 68 Army National Guard SARCs and VAs completed the survey, for response rates of 59 percent for the Army Reserve and 74 percent for the Army National Guard. We analyzed the electronic survey-response data set using SAS. We first reviewed the data for electronic processing errors or other inconsistencies in the data, and assessed the frequencies of item nonresponse. After minor data cleaning and additional formatting, the analysis included frequency distribution of responses to each question, cross-tabulations of specific questions, and reviewing the open-ended responses to identify themes and areas of concern raised by the respondents. To identify medical and mental health-care services available to members of the Army National Guard and the Army Reserve following a sexual assault (objective 2), we reviewed relevant provisions in DOD, Department of the Army, and Veterans Health Administration guidance pertaining to medical and mental health-care services available to those serving in the Army’s reserve components following a sexual assault. 
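The survey-administration arithmetic and closed-ended tabulation described above (performed in SAS for the actual analysis) reduce to a few lines. A sketch follows: the eligible-recipient and completion counts are the ones reported in this appendix, while the sample answers in the frequency distribution are purely illustrative.

```python
from collections import Counter

def response_rate(completed: int, eligible: int) -> int:
    """Survey response rate as a whole percentage of eligible recipients."""
    return round(100 * completed / eligible)

# Eligible recipients after excluding the 6 unavailable individuals per component.
reserve_rate = response_rate(27, 52 - 6)   # Army Reserve: 27 of 46 -> 59
guard_rate = response_rate(68, 98 - 6)     # Army National Guard: 68 of 92 -> 74
print(reserve_rate, guard_rate)  # prints 59 74

# Frequency distribution for one closed-ended item (answers are illustrative).
answers = ["Too few", "About right", "Too few", "About right", "Too many"]
frequencies = Counter(answers)
print(frequencies)  # most frequent responses listed first
```

Cross-tabulations of two closed-ended items follow the same pattern, counting pairs of answers instead of single answers.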
We interviewed officials from SAPRO; Army Medical Command; and the SHARP Program Offices for Army Headquarters, Army Reserve Command, and the Army National Guard about the medical and mental health services that are available to sexual assault victims serving in the Army’s reserve components and the extent to which the availability of such care may be affected by a member’s duty status. Similarly, during our visits to four selected locations, we met with behavioral health or medical officials to discuss the medical and mental health-care services that are available to reserve-component members who are sexually assaulted, including any care that can be obtained through the local community. We also discussed any potential barriers that may affect the availability of and access to such care by reserve-component members, such as the line-of-duty determination process. In addition, we discussed efforts to identify local medical and mental health providers. We also interviewed a military sexual trauma coordinator at a Veterans Affairs Medical Center to better understand the services and care that are available to reserve-component members through the Department of Veterans Affairs. To identify the number of Army Reserve major commands with units located in each state and territory, we requested and obtained data from the Army Reserve Command on the number and size of each Army Reserve major command’s subordinate units, and the locations of these subordinate units. We received a data file that we tabulated using SAS in order to compile the summary-level information for the categories of interest. We assessed the reliability of the information for our reporting objectives by (1) reviewing the data for accuracy and completeness, (2) reviewing available documentation about the data collection and management, and (3) collecting information from knowledgeable agency officials during interviews and by having them complete a questionnaire about the data. 
We also compared the collection and use of data with relevant DOD guidance and with standards for internal control in the federal government about the importance of using appropriate, accurate, complete, and accessible information to help management make informed decisions. We determined that the data provided by the Army Reserve Command were sufficiently reliable for reporting on the number of Army Reserve major commands that had units in each U.S. state and territory. In addition, to better understand and to more comprehensively represent the perspectives of key program personnel within the Army’s reserve components, we included questions on our web-based surveys administered to full-time SARCs and VAs identified within the Army’s reserve components regarding the medical and mental health-care services for sexual assault victims. We analyzed the results of the surveys to determine how the medical and mental health services available at different installations were identified, and the availability and use of community medical and mental health-care services and resources, and to gain a fuller understanding of the extent to which these services may vary by location. Regarding sexual assault incident data, we obtained data from the Defense Sexual Assault Incident Database (DSAID) from the Army and the Army National Guard on the number and type of reported sexual assault incidents involving Army reserve-component victims from fiscal years 2012 through 2015, which is the period available in the DSAID database. For cases reported to Department of the Army personnel, we received a data file that we tabulated using SAS in order to compile the summary-level information, whereas for cases reported to National Guard personnel, we received the information already tabulated for our tables. 
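The commands-per-state tabulation described above amounts to counting distinct major commands with at least one unit in each state. The actual tabulation was done in SAS on the Army Reserve unit-location file; the sketch below uses hypothetical field names and unit rows (the command labels are illustrative only).

```python
import csv
from io import StringIO

# Hypothetical extract of the unit-location data file: one row per subordinate
# unit, with its parent major command and the state where the unit is located.
raw = StringIO("""unit_id,major_command,state
U1,63rd RD,CA
U2,63rd RD,CA
U3,99th RD,PA
U4,88th RD,CA
""")

# Collect the set of distinct major commands represented in each state.
commands_by_state: dict[str, set[str]] = {}
for row in csv.DictReader(raw):
    commands_by_state.setdefault(row["state"], set()).add(row["major_command"])

# Number of major commands with units in each state, sorted by state.
counts = {state: len(cmds) for state, cmds in sorted(commands_by_state.items())}
print(counts)  # prints {'CA': 2, 'PA': 1}
```

Using a set per state means a command with many units in one state is still counted once, which matches the "number of major commands that had units in each state" measure.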
For both sources of data, we assessed the reliability of the information for our reporting objectives by (1) reviewing the data for accuracy and completeness, (2) reviewing available documentation about the data collection and management, and (3) collecting information from knowledgeable agency officials during interviews and by having them complete a questionnaire about the data. We also compared the collection and use of data with relevant DOD guidance and with the standards for internal control in the federal government about the importance of using appropriate, accurate, complete, and accessible information to help management make informed decisions. We determined that the DSAID data provided by the Army and the Army National Guard were sufficiently reliable for describing the reported number and types of incidents and sexual assault victims. However, a limitation of these data is that they include only cases involving an Army Reserve or Army National Guard victim that were reported to an Army or National Guard SARC; cases that were reported to or handled by a SARC from another military service are not included. We conducted this performance audit from July 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Our analysis of reported sexual assault data from DSAID for fiscal years 2012 through 2015 shows that the number of sexual assault reports in the Army National Guard increased each year over this period, and increased in the Army Reserve from fiscal years 2012 through 2014.
As we have previously reported, the precise number of sexual assaults involving servicemembers is not possible to determine, and studies suggest that sexual assaults are generally underreported in the United States. Both active and reserve component servicemembers may report an alleged sexual assault using either the unrestricted or restricted reporting options. An unrestricted report of an alleged sexual assault incident is provided to the chain of command or a law enforcement organization for investigation. A restricted report is a confidential report of an alleged sexual assault that can be made without initiating an investigation or notifying the chain of command. In both the Army National Guard and Army Reserve, the majority of reported incidents were made as unrestricted reports, as shown in figure 5. Data for Army National Guard victims are listed in two separate groups, depending on whether an incident was reported to an Army sexual assault response coordinator (SARC) or a National Guard SARC. For the purposes of documenting sexual assault incidents in DSAID, officials from the Department of Defense’s (DOD) Sexual Assault Prevention and Response Office (SAPRO) said that DOD considers the National Guard to be a separate military service. As a result, they told us that only National Guard officials have visibility over cases entered into DSAID by a National Guard SARC—even if the victim is a member of the Army Reserve—and Army Headquarters Sexual Harassment/Assault Response and Prevention (SHARP) program officials told us that they do not have visibility over any cases involving Army National Guard soldiers that were reported to a National Guard SARC. Similarly, Army National Guard officials told us that the Army National Guard’s program office does not have visibility over cases involving Army National Guard soldiers that were reported to an Army SARC (active duty or Army Reserve). 
These data could differ from the data that DOD reports to Congress, because the data included in those reports do not necessarily represent the service affiliation of the victim. According to DOD’s Annual Report to Congress for fiscal year 2014, beginning with that report, SAPRO has reported sexual assault case data from DSAID using the military service affiliation of the SARC handling the case; SAPRO officials confirmed that the service affiliation data in DSAID refer to the SARC who is handling the case. In past fiscal years, the service affiliation in the sexual assault incident data reported in the annual reports referred to the military service in which the victim served. The most commonly reported offenses from fiscal years 2012 through 2015 were rape and abusive sexual contact, as shown in figure 6. In DOD’s annual report to Congress, the Army’s enclosed fiscal year 2015 Sexual Assault Report noted that the Army views the high rate of reporting as an indicator of real progress in the Army SHARP program. The report stated that the Army believes that the increase in the number of reports of sexual assault reflected increased awareness and reporting, and did not result from an increase in the number of sexual assault incidents. It further noted that the unprecedented priority placed on sexual assault prevention and response by Army leaders appeared to have resulted in increased victim confidence. In addition to the contact named above, key contributors to this report were Kimberly A. Mayo, Assistant Director; Tracy A. Barnes; Herbert J. Bowsher; Renee S. Brown; Cynthia L. Grant; Amie M. Lesser; Amanda K. Miller; Richard S. Powelson; and Amber H. Sinclair. Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database. GAO-17-99. Washington, D.C.: January 10, 2017.
Sexual Assault: Actions Needed to Improve DOD’s Prevention Strategy and to Help Ensure It Is Effectively Implemented. GAO-16-61. Washington, D.C.: November 4, 2015. Military Personnel: Actions Needed to Address Sexual Assaults of Male Servicemembers. GAO-15-284. Washington, D.C.: March 19, 2015. Military Personnel: DOD Needs to Take Further Actions to Prevent Sexual Assault during Initial Military Training. GAO-14-806. Washington, D.C.: September 9, 2014. Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013. Military Personnel: Prior GAO Work on DOD’s Actions to Prevent and Respond to Sexual Assault in the Military. GAO-12-571R. Washington, D.C.: March 30, 2012. Preventing Sexual Harassment: DOD Needs Greater Leadership Commitment and an Oversight Framework. GAO-11-809. Washington, D.C.: September 21, 2011. Military Justice: Oversight and Better Collaboration Needed for Sexual Assault Investigations and Adjudications. GAO-11-579. Washington, D.C.: June 22, 2011. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Need to Be Further Strengthened. GAO-10-405T. Washington, D.C.: February 24, 2010. Military Personnel: Additional Actions Are Needed to Strengthen DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-10-215. Washington, D.C.: February 3, 2010. Military Personnel: Actions Needed to Strengthen Implementation and Oversight of DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1146T. Washington, D.C.: September 10, 2008. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Face Implementation and Oversight Challenges. GAO-08-924. Washington, D.C.: August 29, 2008. 
Military Personnel: Preliminary Observations on DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1013T. Washington, D.C.: July 31, 2008. Military Personnel: The DOD and Coast Guard Academies Have Taken Steps to Address Incidents of Sexual Harassment and Assault, but Greater Federal Oversight Is Needed. GAO-08-296. Washington, D.C.: January 17, 2008.

Sexual assault in the Army is often discussed in terms of its incidence among active-duty forces, but it is a crime that similarly confronts the more than 550,000 members of the Army National Guard and Army Reserve, who together reported 604 sexual assault incidents in fiscal year 2015; sexual assault is, however, a generally underreported crime. Congress included a provision in statute for GAO to review sexual assault prevention and response in the Army's reserve components. This report addresses the extent to which (1) the Guard and Reserve face any challenges implementing programs to prevent and respond to sexual assault; and (2) medical and mental health-care services are available to victims in the Guard and Reserve. GAO reviewed DOD and Army policies; administered two web-based surveys; conducted site visits to four installations; and interviewed officials. The Army National Guard (Guard) and Army Reserve (Reserve) have implemented sexual assault prevention and response programs, but face challenges in areas such as staffing, budget management, and investigation timeliness that may hinder program implementation. Staffing: The Guard and the Reserve have staffed their sexual assault prevention and response programs, but their use of full-time and collateral-duty personnel has produced sizeable workload disparities. For example, the Guard allots two full-time staff to each state and territory, which provides Rhode Island—a state with about 2,000 soldiers—the same number of staff as Texas, which has about 18,600 soldiers.
Similar imbalances exist in the Reserve, with the one full-time staff member at one command responsible for about 9,000 soldiers located in 16 different states, while the one full-time staff member at another command is responsible for 300 soldiers in 4 states. Officials said that collateral-duty personnel are used to mitigate workload disparities, but these positions are not always filled in the Guard, and the Reserve does not know the number filled. Without an evaluation of these staffing structures, the Army does not know the extent of such issues and their effect. Budget Management: The Guard has developed budget guidance on the use of funds but has not effectively communicated it to program staff, and the Reserve has not developed or distributed this guidance to its staff. Thus, Guard and Reserve program staff do not have the information needed to develop their budget allocations and help ensure the efficient use of program funds. Investigation Timeliness: Data on Guard cases investigated by its Office of Complex Administrative Investigations (OCI) in fiscal year 2015 show that 57 percent, or 45 of 79 cases, took 6 to 9 months to complete; 39 percent, or 31 of 79 cases, took 3 to 6 months; and the remaining 4 percent (3 of 79 cases) took longer than 9 months. According to OCI officials, investigations take longer to complete because OCI does not have enough personnel to handle its growing caseload, which more than doubled from 2014 to 2015. The Army and the Guard have not reassessed OCI's resources since the increase in investigation requests to help ensure it has the staff needed to complete investigations within 3 weeks, as required by OCI guidance. Eligibility for follow-up or long-term health-care services paid for or provided by the Department of Defense (DOD) varies based on a Guard or Reserve victim's duty status at the time of an assault.
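The OCI timeliness figures above are internally consistent; a short Python check of the percentage breakdown, using the case counts cited in the data, confirms the arithmetic:

```python
# OCI fiscal year 2015 caseload, grouped by time to complete (counts from the
# data cited above).
cases = {"3 to 6 months": 31, "6 to 9 months": 45, "longer than 9 months": 3}

total = sum(cases.values())
shares = {bucket: round(100 * count / total) for bucket, count in cases.items()}
print(total)   # 79
print(shares)  # {'3 to 6 months': 39, '6 to 9 months': 57, 'longer than 9 months': 4}
```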
Victims in the Guard and Reserve must go through a process, known as a line of duty determination, to determine their eligibility for care. The Guard has established an expedited process for making a determination within 72 hours of the process being initiated. However, the Reserve's process is lengthy, and in prior work GAO found that 80 percent of these determinations were overdue. Reserve officials said they plan to include an expedited process in the new Army regulation that is being drafted; however, Reserve officials did not provide details about the planned process or documentation about how it would be implemented. Without an expedited process to provide more timely decisions, sexual assault victims in the Reserve may continue to pay for their care up front, or else face delayed access to care. GAO is making six recommendations, including that DOD evaluate program staffing structure, communicate and develop budget guidance, assess the Guard's investigation timeliness and resources, and develop an expedited process for determining Reserve eligibility for health-care services. DOD concurred with three recommendations, partially concurred with two, and did not concur with assessing Guard investigation timeliness, stating that the Army has limited authority over OCI. GAO continues to believe that actions are needed to fully address the two recommendations, and redirected the OCI recommendation to the Guard, as recommended by DOD.
The Navy’s UCLASS system will be the first unmanned aircraft system deployed on an aircraft carrier. Efforts to develop an unmanned combat air system for the Navy can be traced back to 2003 when DOD established a joint Navy and Air Force program called the Joint Unmanned Combat Air System (J-UCAS). This joint effort drew on knowledge that the Air Force had gained through early development of the Unmanned Combat Air Vehicle, an effort that began in the late 1990s. The J-UCAS program was canceled in late 2005. The following year, the Navy initiated the Unmanned Combat Air System Demonstration (UCAS-D) program—the immediate predecessor to UCLASS—with the intent to design, develop, integrate, test, and demonstrate the technical feasibility of operating unmanned air combat systems from an aircraft carrier. In 2013, the Navy successfully launched and landed a UCAS-D on an aircraft carrier. In total, the Navy invested more than $1.4 billion in the UCAS-D program. In 2011, as UCAS-D efforts were ongoing, the Navy received approval from DOD to begin planning for the UCLASS acquisition program. In our past work examining weapon acquisition and best practices for product development, we found that leading commercial firms and successful DOD programs pursue an acquisition approach that is anchored in knowledge, whereby high levels of knowledge are demonstrated at critical junctures. Specifically, there are three critical junctures—knowledge points—in an acquisition program at which decision makers must have adequate knowledge to make large investment decisions. If the knowledge attained at each juncture does not confirm the business case on which the acquisition was originally justified, the program does not go forward. At the first knowledge point, a match must be made between the customers’ needs and the available resources—technical and engineering knowledge, time, and funding—before a system development program is started.
At the second knowledge point, about midway through development, the developer must demonstrate that the system’s design is stable and that it can meet performance requirements. At the third knowledge point, the developer must show that the system can be manufactured within cost, schedule, and quality targets and that it is reliable before beginning production. The first knowledge point is the most critical point of the three. At that point programs should present their business case for review and approval, which establishes an acquisition program baseline. This baseline describes the cost, quantity, schedule, and performance goals of a program and provides a framework for effective oversight and accountability. This first knowledge point typically coincides with a substantial financial commitment. DOD’s acquisition policy and guidance encourage the use of a knowledge-based acquisition approach, in which major decision reviews are aligned with the start of key acquisition phases, including technology development, system development—referred to as engineering and manufacturing development—and production. Figure 1 aligns the knowledge points with key decision points in DOD’s acquisition process. According to DOD acquisition policy, the purpose of the technology development phase is to reduce technology risk, determine and mature the appropriate set of technologies to be integrated into a full system, and to demonstrate critical technology elements on prototypes. A system level preliminary design review is to be held during the technology development phase to inform requirements trades; improve cost estimation; and identify remaining design, integration, and manufacturing risks. The results of the preliminary design review are to be reported to decision makers at Milestone B—the decision review in DOD’s process that corresponds with knowledge point 1 and initiates system development. 
The purpose of system development is to develop a system or an increment of capability, complete full system integration, develop an affordable and executable manufacturing process, and demonstrate system integration, interoperability, safety, and utility, among other things. System development provides a critical opportunity for objective oversight before beginning production. At Milestone B, major defense acquisition programs are required by DOD policy to have approved requirements, an independent cost estimate, and an acquisition program baseline; begin tracking unit cost changes and report unit cost growth against Nunn-McCurdy statutory thresholds; and periodically report to Congress on the cost, schedule, and performance status of the program in Selected Acquisition Reports. At that time, major defense acquisition programs are also required by statute to present a business case analysis and certify on the basis of that analysis that the program is affordable, has reasonable lifecycle cost and schedule estimates, and that technologies have been demonstrated in a relevant environment, among other things. Taken together, these requirements form the basic oversight framework to ensure that Congress and DOD decision makers are adequately informed about the program’s cost, schedule, and performance progress. In addition, the information is valuable for identifying areas of program risk and its causes, and helps to ensure that decision makers consider the full financial commitment before initiating a new development program. Once initiated at Milestone B, major defense acquisition programs are required to measure program performance against the program’s baseline estimate. Changes to the baseline are only authorized under certain conditions, including a program restructure that is approved by the milestone decision authority, or a breach of the critical Nunn-McCurdy statutory threshold where DOD certifies continuation of the program to Congress. 
In fiscal year 2014, the Navy plans to commit to investing an estimated $3.7 billion to develop, produce, and field from 6 to 24 aircraft and modify 1 to 4 aircraft carriers as an initial increment of UCLASS capability—referred to as an early operational capability. The Navy plans to manage UCLASS as if it were a technology development program, although its strategy encompasses activities commensurate with a program in system development and early production. Accordingly, it is not planning to hold a Milestone B review to formally initiate a system development program—which would trigger key oversight mechanisms—until after the initial capability is fielded in fiscal year 2020. This strategy means the program will not be subject to these oversight mechanisms, including an acquisition program baseline; Nunn-McCurdy unit cost growth thresholds; and periodic reporting of the program’s cost, schedule, and performance progress. This strategy will likely limit Congress’s ability to oversee this 6-year, multibillion-dollar program. Navy officials believe that their approach effectively utilizes the flexibility in DOD’s acquisition policy to ensure that UCLASS requirements and concept of operations are well understood and achievable before formally beginning a system development program. Yet they emphasize that by fiscal year 2020 they may have accumulated enough knowledge to allow them to bypass a formal development program and proceed directly to production at Milestone C. Figure 2 illustrates the Navy’s strategy. As indicated above, the Navy plans to award four firm fixed-price contracts in fiscal year 2013 to competing contractors to develop preliminary designs for the UCLASS air vehicle. The following year, the Navy plans to review those preliminary designs, conduct a full and open competition, and award a contract to develop and deliver the UCLASS air vehicles, effectively ending competition within the air vehicle segment.
A review of the full system level preliminary design—including the air vehicle, carrier, and control segments—is scheduled for fiscal year 2015. DOD policy and best practices indicate that around this review point a program would typically be expected to hold a Milestone B review and transition from technology development to system development. Figure 3 illustrates the later point in the process at which the Navy plans to establish the UCLASS acquisition program baseline and formally initiate a development program. Although the Navy does not plan to hold a Milestone B review until 2020, if at all, it is effectively committing to system development and early production in fiscal year 2015. According to the Navy’s strategy, system development and early production activities, including system integration and air vehicle fabrication, will begin in fiscal year 2015 around the time of the system-level preliminary design review. The Navy also expects to increase annual funding for the UCLASS system from $146.7 million to $522.5 million between fiscal years 2014 and 2015. Testing to demonstrate the system’s capabilities is scheduled to take place from fiscal year 2017—scheduled first flight—through fiscal year 2020, when an early operational capability is expected to be achieved. If the program proceeds according to the Navy’s plan, by 2020, it will have completed many of the activities typically authorized by a Milestone B decision. Moreover, since sufficient quantities of UCLASS are expected to be delivered for operational use on one or more aircraft carriers, the strategy could also be seen as having begun early production before a Milestone C decision is held. In a March 2007 report, we identified oversight challenges presented by an acquisition strategy that calls for proceeding into system development, demonstration, manufacturing, and fielding without the benefit of a Milestone B decision.
A framework of laws makes major defense acquisition programs accountable for their planned outcomes and cost, gives decision makers a means to conduct oversight, and ensures some level of independent program review. The application of these acquisition laws is typically triggered by a program’s entry into system development. While the activities the UCLASS program plans to undertake demonstrate that the program is entering system development, these laws will not be triggered because the program is not holding a Milestone B review and formally initiating a development program. Therefore, the UCLASS program will not be accountable for establishing a program baseline or for reporting any cost growth to that baseline to DOD and Congress. The UCLASS system faces several risks related to cost, schedule, and program management that, if not addressed, could lead to additional cost and significant schedule delays for the system. The Navy recognizes that many of these risks exist and has mitigation plans in place to address them. UCLASS cost estimates are uncertain and could exceed available funding: Preliminary cost estimates completed by the Navy indicate that the development and fielding of the initial UCLASS system through fiscal year 2020 could cost between $3.7 and $5.9 billion, all of which is expected to be development funding. However, the Navy has projected funding of only $3.2 billion for the system through fiscal year 2020. The variability in the cost estimates is due largely to cost estimating ground rules and assumptions. For example, Navy officials stated that the $3.7 billion cost estimate reflects an assumed savings of 15 to 20 percent that they believe is achievable since competing contractors’ preliminary designs will be relatively mature. Navy and DOD officials we spoke with emphasized that no true sense of cost will be known until after the air vehicle segment preliminary design reviews have been completed and a single contractor has been selected.
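The report does not state how the assumed savings enters the $3.7 billion estimate; as a rough, purely illustrative check, assuming the savings is applied multiplicatively to an unadjusted base estimate, the figures above imply the following:

```python
# All figures in billions of dollars, taken from the estimates cited above.
low_estimate, high_estimate = 3.7, 5.9
projected_funding = 3.2

# If the $3.7B estimate already reflects 15-20% assumed savings, the implied
# unadjusted estimate would be (assumption: savings applied multiplicatively):
for savings in (0.15, 0.20):
    implied_base = low_estimate / (1 - savings)
    print(f"{savings:.0%} savings implies an unadjusted estimate of about ${implied_base:.2f}B")

# Potential shortfall of the cost estimates against projected funding
# through fiscal year 2020.
print(f"shortfall: ${low_estimate - projected_funding:.1f}B "
      f"to ${high_estimate - projected_funding:.1f}B")
```

Even at the low end of the estimate range, the projected funding falls short by about $0.5 billion.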
If the preliminary designs are less mature than assumed, costs could increase significantly, further exceeding budgeted resources. Source selection schedule is compressed: After the four competing contractors have completed their preliminary air vehicle designs, the Navy plans to conduct a full and open competition before awarding the air vehicle segment contract. The Navy’s strategy allows for about 8 months between the time that it issues its request for air vehicle proposals and the time it awards the contract. According to OSD officials, this type of contract award process typically takes approximately 12 months. UCLASS is dependent on development and delivery of other systems: The Navy identifies the delivery of the Common Control System software as a risk and notes that if it is delayed, alternative control system software would be needed to achieve the established deployment timeline. Using alternative software would increase integration costs and extend the testing timeline, resulting in duplicated development, integration, and testing once the common control system software is delivered. The Navy expects this risk to be mitigated over time as individual segments of the control system software are built, delivered, integrated, and tested. UCLASS is also critically dependent on the development and fielding of the Joint Precision Approach and Landing System (JPALS), which is a global positioning system-based aircraft landing system that guides the aircraft to make a safe landing on the aircraft carrier deck. However, in a March 2013 report, we found that the JPALS program has experienced significant schedule delays. Additional JPALS delays would likely affect the Navy’s UCLASS schedule, in which case the Navy may need to identify an alternative landing system for UCLASS, thus increasing the cost and delaying delivery of the capability. The Navy recognizes this risk. 
The program office holds weekly integrated master schedule reviews with the JPALS program and plans to mitigate risk through JPALS testing, initial deployments, and continued communication with the JPALS program and other Navy offices. UCLASS system integration will be challenging: The Navy plans to act as the lead systems integrator for all three segments through the development and fielding of the initial UCLASS system. The Navy will have three separate but interrelated segments to manage, the timing and alignment of which are crucial to the success of the overall system. The system is reliant on 22 existing government systems, such as JPALS. The Navy recognizes that there is risk associated with its role as the lead systems integrator, as it does not routinely act in this capacity. Therefore, the Navy plans to manage this risk through interaction with industry and regular system level reviews. According to program officials, this integration effort will require the number of full-time equivalent staff in the program office to double from its current level of 150 staff to around 300 staff. While the Navy has not yet established a business case or acquisition program baseline, the UCLASS strategy reflects aspects of a knowledge-based approach. Some of these aspects are discussed in more detail below: Leveraging significant knowledge gained from prior technology development efforts: The Navy is planning to maximize the use of technologies for carrier-based unmanned aircraft systems operations that have been developed under other efforts like the UCAS-D program, which recently demonstrated the feasibility of launching and landing an unmanned aircraft on an aircraft carrier. Navy officials note that they plan to leverage navigation and control technologies, among other things, from the demonstration program.
By effectively leveraging these types of previous investments, along with other existing systems and technologies, the Navy could reduce cost and schedule for the UCLASS system and promote affordability. Incorporating an open systems design approach: We reported in July 2013 that the Navy is planning to use an open systems approach for the UCLASS system. The Navy has identified key system interfaces and, according to program officials, plans to require contractors to comply with particular open system standards, which it believes will reduce acquisition costs and simplify integration. The Navy also plans to incorporate an open systems architecture developed by OSD for the UCLASS system control segment. This architecture implements a common framework, user interfaces, software applications, and services, and is designed to be common across unmanned aircraft systems. DOD estimates that the open architecture will reduce costs and allow for rapid integration of payloads. Matching requirements to available resources: In 2012, the Joint Requirements Oversight Council issued a memorandum that required the Navy to reduce its UCLASS requirements because at that time they were deemed unaffordable. The Joint Requirements Oversight Council specifically noted that the Navy’s requirements should focus on achieving an affordable, adaptable platform that supports a wide range of missions within 3 to 6 years. As a result, the Navy scaled down the UCLASS requirements and updated its analysis of alternatives to include requirements that are more affordable and feasible. Our prior work has found that matching requirements with resources before beginning a system development program increases the likelihood that the program will meet cost and schedule objectives. Holding competition for preliminary designs: In fiscal year 2013, the Navy plans to award four firm fixed-price contracts to competing contractors to develop and deliver preliminary air vehicle designs. 
The Navy then plans to review those preliminary designs, conduct a full and open competition, and award a single air vehicle segment contract. The Navy believes that this competition will drive efficiencies and ultimately result in cost savings across the system’s life cycle. This strategy reflects recent DOD initiatives that emphasize the importance of competition, which, as we have noted in the past, can help reduce program costs. The Navy plans to manage UCLASS as a technology development program, although its strategy encompasses activities commensurate with system development and early production. The Navy believes the strategy provides considerable latitude to manage UCLASS development and to demonstrate significant knowledge before the Milestone B decision. Indeed, we have often reported that programs tend to move forward with Milestone B and system development before they have demonstrated enough knowledge. But the Navy’s plan to develop, manufacture, and field operational UCLASS systems on up to four aircraft carriers before holding a Milestone B decision would defer the decision and mechanisms that would otherwise enable oversight of these very program activities until after they are over. Without a program baseline and regular reporting on progress, it will be difficult for Congress to hold the Navy accountable for achieving UCLASS cost, schedule, and performance goals. As we have noted, these kinds of risks are present in the program and warrant such oversight. Looking ahead to fiscal year 2020, when the UCLASS system is already being delivered, Congress may have few options other than to continue authorizing funding for UCLASS manufacturing and fielding. If the UCLASS program can be executed according to the Navy’s strategy, it would be consistent with the normal DOD acquisition process that applies to most weapon system programs, with the exception of the deferral of the Milestone B review.
In fact, the timing of the Milestone B review notwithstanding, the actual program activities planned are consistent with a knowledge-based acquisition approach. For example, the Navy is leveraging knowledge gained from prior technology development programs, incorporating an open systems design, matching resources with requirements, and utilizing competition. Given the competitive preliminary design process planned and subsequent competitive contract award, it seems reasonable that a Milestone B decision could be held following the competition and before the beginning of system development, providing a solid oversight framework with little or no change to the strategy's schedule. Given that the Navy does not plan to modify its acquisition strategy and hold a Milestone B decision review for the UCLASS system following the system level preliminary design review in fiscal year 2015, Congress should consider directing the Navy to hold such a review after the preliminary design review is complete in order to enhance program oversight and accountability. If the Navy does not comply, Congress should consider limiting the amount of funding available for the UCLASS system until the Navy provides the basic elements of an acquisition program baseline, such as development and production cost estimates, unit costs, quantities, schedules, annual funding profiles, and key performance parameters needed for such a large investment. The Navy should also be required to periodically report the program's status against the baseline. In order to provide for increased congressional oversight and program accountability, we recommend that the Secretary of Defense direct the Secretary of the Navy to hold a Milestone B decision review for the UCLASS system following the system level preliminary design review, which is currently scheduled for fiscal year 2015. The Navy provided us with written comments on a draft of this report.
The Navy’s comments are reprinted in appendix II. The Navy also provided technical comments, which were incorporated as appropriate. The Navy did not concur with our recommendation to hold a Milestone B decision review for the UCLASS system following its planned system level preliminary design review in 2015. The Navy stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics approved its UCLASS acquisition strategy in 2013 and certified that the strategy was compliant with the Weapon Systems Acquisition Reform Act of 2009, the amendments made to that Act, and DOD policy. The Navy pointed out that DOD’s policy defines the technology development phase as an “iterative process designed to assess the viability of technologies while simultaneously refining user requirements.” The Navy went on to state that the UCLASS user requirements and Concept of Operations will be refined during the early operational capability fleet exercises currently scheduled to begin in fiscal year 2020 and that, at that time, the Navy plans to request approval to hold a Milestone B review to continue development of the UCLASS capability. While the Navy’s UCLASS acquisition strategy may be compliant with laws and DOD policy, the development, production, and fielding of an operational system before holding a Milestone B review will limit congressional oversight of a significant investment in weapon system development. An estimated development cost of $3.7 billion makes this UCLASS investment larger than the majority of DOD’s current major weapon system development programs. We agree that the technology development phase of an acquisition program is intended to assess the viability of technologies while refining requirements. 
However, the system development and early production activities included in the Navy's UCLASS acquisition strategy go well beyond technology development and requirements refinement, and thus warrant oversight commensurate with a major weapon system development program. We therefore continue to believe that our recommendation is valid and are making two matters for congressional consideration to ensure Congress has information available to oversee the UCLASS system and to hold the Navy accountable for achieving UCLASS cost, schedule, and performance goals. We are sending copies of this report to the Secretary of Defense, the Secretary of the Navy, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The National Defense Authorization Act for Fiscal Year 2012 mandated that GAO evaluate the Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) system acquisition strategy. This report (1) assesses the Navy's UCLASS acquisition strategy, (2) identifies key areas of risk facing the UCLASS system, and (3) notes areas where the Navy's strategy contains good practices. In order to assess the Navy's UCLASS acquisition strategy, we collected, reviewed, and compared the UCLASS acquisition strategy with best practice standards for using knowledge to support key program investment decisions. These standards are based on GAO's extensive body of work in this area. Additionally, we compared the Navy's strategy against DOD acquisition policy.
In order to identify any key areas of risk facing the UCLASS system and note areas where the Navy's strategy contains good practices, we collected and reviewed additional UCLASS documentation, such as the analysis of alternatives, capabilities development document, and other relevant Navy management documents. We discussed the Navy's UCLASS acquisition strategy with officials from the UCLASS system program office, the Naval Air Systems Command, the Chief of Naval Operations, and organizations within the Office of the Secretary of Defense (OSD) including the Director of OSD Cost Assessment and Program Evaluation, the Deputy Assistant Secretary of Defense for Systems Engineering, and the Under Secretary of Defense for Acquisition, Technology, and Logistics. We conducted this performance audit from July 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Michael J. Sullivan, (202) 512-4841 or [email protected]. In addition to the contact named above, key contributors to this report were Travis Masters, Assistant Director; Laura Greifner; Julie Hadley; Kristine Hassinger; Laura Jezewski; Matt Lea; John Pendleton; Dr. Timothy M. Persons; and Roxanna Sun.

The Navy estimates that it will need $3.7 billion from fiscal year 2014 through fiscal year 2020 to develop and field an initial UCLASS system. The National Defense Authorization Act for Fiscal Year 2012 mandated that GAO evaluate the UCLASS system acquisition strategy. This report (1) assesses the UCLASS acquisition strategy, (2) identifies key areas of risk facing the system, and (3) notes areas where the Navy's strategy contains good practices.
To do this work, GAO reviewed the Navy's acquisition strategy and compared it to DOD's acquisition policy, among other criteria, and reviewed Navy acquisition documents and spoke with Navy and Office of the Secretary of Defense officials.

In fiscal year 2014, the Navy plans to commit to investing an estimated $3.7 billion to develop, build, and field from 6 to 24 aircraft as an initial increment of Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) capability. However, it is not planning to hold a Milestone B review—a key decision that formally initiates a system development program and triggers key oversight mechanisms—until after the initial UCLASS capability has been developed and fielded in fiscal year 2020. The Navy views UCLASS as a technology development program, although it encompasses activities commensurate with system development, including system integration and demonstration. Because the initial UCLASS system is to be developed, produced, and fielded before a Milestone B decision, Congress's ability to oversee the program and hold it accountable for meeting cost, schedule, and performance goals will likely be limited. Specifically, the program will operate outside the basic oversight framework provided by mechanisms like a formal cost and schedule baseline, statutory unit cost tracking, and regular reports to Congress on cost, schedule, and performance progress. The Navy believes its approach effectively utilizes the flexibility in the Department of Defense's (DOD) acquisition policy to gain knowledge needed to ensure a successful UCLASS system development program starting in fiscal year 2020. Yet the Navy expects to review preliminary designs, conduct a full and open competition, and award a contract for UCLASS development in fiscal year 2014, a point at which DOD policy and best practices indicate that a program would be expected to hold a Milestone B review to initiate a system development program.
Apart from deferring Milestone B, the Navy's plan would be consistent with the knowledge-based acquisition process reflected in DOD policy. UCLASS faces several programmatic risks going forward. First, the UCLASS cost estimate of $3.7 billion exceeds the level of funding that the Navy expects to budget for the system through fiscal year 2020. Second, the Navy has scheduled 8 months between the time it issues its request for air vehicle design proposals and the time it awards the air vehicle contract, a process that DOD officials note typically takes 12 months to complete. Third, the UCLASS system is heavily reliant on the successful development and delivery of other systems and software, which creates additional schedule risk. Fourth, the Navy will be challenged to effectively manage and act as the lead integrator for three separate but interrelated segments—air vehicle, carrier, and control system—and 22 other government systems, such as the aircraft landing system, the timing and alignment of which are crucial to achieving the desired UCLASS capability. While the Navy recognizes many of these risks and has mitigation plans in place, they could lead to cost increases and schedule delays if not effectively addressed. The Navy's UCLASS acquisition strategy includes some good acquisition practices that reflect aspects of a knowledge-based approach. For example, the Navy is leveraging significant knowledge gained from prior technology development efforts, incorporating an open systems design approach, working to match the system's requirements with available resources, and reviewing preliminary designs for the air vehicle before conducting a competition to select a single contractor to develop and deliver the air vehicle segment.
Congress should consider directing the Navy to hold a Milestone B review for the UCLASS system after the system level preliminary design review is complete. If the Navy does not comply, Congress should consider limiting the amount of funding available for the UCLASS system until an acquisition program baseline is provided. GAO included these matters for consideration because the Navy does not plan to make changes as a result of GAO's recommendation to hold a Milestone B review following the system level preliminary design review, which is currently scheduled for fiscal year 2015. The Navy did not concur with the recommendation, and believes that its approved strategy is compliant with acquisition regulations and laws. GAO continues to believe that its recommendation is valid as discussed in this report.
FMD is a highly contagious animal disease. It affects cloven-hoofed animals such as cattle, sheep, goats, and pigs, and has occurred in most countries of the world at some point during the past century. It has 7 types and over 80 subtypes. Immunity to, or vaccination for, one type of the virus does not protect animals against infection from the other types. FMD-infected animals usually develop blister-like lesions in the mouth, on the tongue and lips, on the teats, or between the hooves. They salivate excessively or become lame. Other symptoms include fever, reduced feed consumption, and miscarriages. Cattle and pigs, which are very sensitive to the virus, show disease symptoms after a short incubation period of 3 to 5 days. The incubation period in sheep is considerably longer, about 10 to 14 days, and the clinical signs of the disease are usually mild and may be masked by other diseases, thereby allowing FMD to go unnoticed. The mortality rate for young animals infected with FMD varies and depends on the species and strain of the virus; in contrast, adult animals usually recover once the disease has run its course. However, because the disease leaves them severely debilitated, meat-producing animals do not normally regain their lost weight for many months, and dairy cows seldom produce milk at their former rate. Therefore, the disease can cause severe losses in the production of meat and milk. The FMD virus is easily transmitted and spreads rapidly. Before and during the appearance of clinical signs, infected animals release the virus into the environment through respiration, milk, semen, blood, saliva, and feces. The virus may become airborne and spread quickly if pigs become infected because pigs prolifically produce and excrete large amounts of the virus into the air. Animals, people, or materials that are exposed to the virus can also spread FMD by bringing it into contact with susceptible animals. 
For example, the virus can spread when susceptible animals come in contact with contaminated animals; animal products, such as meat, milk, hides, skins, and manure; transport vehicles and equipment; clothes or shoes worn by people; and hay, feedstuffs, or veterinary biologics. FMD virus is the most infectious animal disease-causing virus. It has been determined that for certain strains, the dose required to infect cattle or sheep through inhalation is about 10 organisms (10^1 TCID50). Infected pigs produce immense amounts of airborne virus. An infected pig exhales 400 million organisms per day (10^8.6 TCID50). The sensitivity of cattle to infection and the high levels of airborne virus produced by infected pigs illustrate that the airborne spread of infection is another important factor in FMD outbreaks. FMD occurs throughout much of the world, and although some countries have been free of FMD for some time, its wide host range and rapid spread represent cause for international concern. After World War II, the disease was widely distributed across the globe. In 1996, endemic areas included Asia, Africa, and parts of South America. In North America, the last outbreaks of FMD for the United States, Canada, and Mexico occurred in 1929, 1952, and 1953, respectively. North America, Australia, and Japan have been free of FMD for many years. New Zealand has never had a case of FMD. Most European countries have been recognized as disease free, and countries belonging to the European Union have stopped FMD vaccination. Plum Island is a federally owned 840-acre island off the northeastern tip of Long Island, New York. Scientists working at the facility are responsible for protecting U.S. livestock against foreign animal diseases that could be accidentally or deliberately introduced into the United States. Plum Island's research and diagnostic activities stem from its mission to protect U.S.
animal industries and exports from accidental or deliberate introduction of foreign animal diseases. Plum Island’s scientists identify the pathogens that cause foreign animal diseases and work to develop vaccines to protect U.S. livestock. The primary research and diagnostic focus at Plum Island is foreign or exotic diseases that could affect livestock, including cattle, pigs, and sheep. In addition to FMD and classical swine fever, other types of livestock diseases that have been studied at Plum Island include African swine fever, rinderpest, and various pox viruses, such as sheep and goat pox. Some of the pathogens maintained at Plum Island are highly contagious; therefore, research on these pathogens is conducted in a biocontainment area that has special safety features designed to contain them. If accidentally released, these pathogens could cause catastrophic economic losses in the agricultural sector. The biocontainment area includes 40 rooms for livestock and is the only place in the United States that is equipped to permit the study of certain contagious foreign animal diseases in large animals. USDA uses this biocontainment area for basic research, for diagnostic work, and for the clinical training of veterinarians in the recognition of foreign animal diseases. DHS now shares bench space with USDA in the biocontainment area for its applied research. The North American Foot-and-Mouth Disease Vaccine Bank is also located on Plum Island. USDA was responsible for Plum Island until June 1, 2003, when provisions of the Homeland Security Act of 2002 were implemented that transferred Plum Island, including all its assets and liabilities, to DHS. This action shifted overall responsibility for Plum Island to DHS, including all the costs associated with the facility’s maintenance, operations, and security. 
The Act specified that USDA would continue to have access to Plum Island to conduct diagnostic and research work on foreign animal diseases, and it authorized the President to transfer funds from USDA to DHS to operate Plum Island. Plum Island is now operated as part of a broader joint strategy developed by DHS and USDA to protect against the intentional or accidental introduction of foreign animal diseases. Under the direction of DHS’s Science and Technology Directorate, the strategy for protecting livestock also includes work at DHS’s National Center for Food Protection and Defense and at its National Center for Foreign Animal and Zoonotic Disease Defense, as well as at other centers within the DHS homeland security biodefense complex. These include the National Biodefense Analysis and Countermeasures Center and the Lawrence Livermore National Laboratory. The strategy calls for building on the strengths of each agency’s assets to develop comprehensive preparedness and response capabilities. Homeland Security Presidential Directive 9 tasks the Secretary of Agriculture and the Secretary of Homeland Security to develop a plan to provide safe, secure, and state-of-the-art agriculture biocontainment laboratories for the research and development of diagnostic capabilities for foreign animal and zoonotic diseases. To partially meet these obligations, DHS has asked the Congress to appropriate funds to construct NBAF, a new facility. This facility would house high-containment laboratories able to handle the pathogens currently under investigation at PIADC, as well as other pathogens of interest. DHS selected five potential sites for NBAF in July 2007 and must prepare an environmental impact statement (EIS) for each site. According to DHS, although not included in the competitive selection process, the DHS- owned PIADC will now be considered as a potential NBAF site, and DHS will also prepare an EIS for Plum Island. (See table 1.) 
DHS has asked for public comment on the selection process. Following completion of the environmental impact statements and public hearings, DHS expects to choose a site by October 2008 and to open NBAF in 2014. According to DHS officials, the final construction cost will depend on the site's location and may exceed the currently projected $451 million. Additional expenses, such as equipping the new facility and relocating existing personnel and programs, may reach $100 million. DHS has not yet determined what action to take with respect to PIADC when construction of NBAF has been completed. We found that DHS has neither conducted nor commissioned any study to determine whether FMD work can be done safely on the U.S. mainland. Instead, DHS relied on a study that USDA commissioned and a contractor conducted in May 2002 that examined a different question: whether it is technically feasible to conduct exotic disease research and diagnostics, including FMD and rinderpest, on the U.S. mainland with adequate biosafety and biosecurity to protect U.S. agriculture. This approach fails to recognize the distinction between what is technically feasible and what is possible, given the potential for human error. DHS told us that this study has allowed it to conclude that it is safe to conduct FMD work on the U.S. mainland. In addition to a number of other methodological problems with the study, we found that it was selective in what it considered in order to reach its findings. In particular, the study

1. did not assess the history of releases of FMD virus or other dangerous pathogens,

2. did not address in detail the issues related to large animal work in BSL-3 Ag facilities, and

3. was inaccurate in comparing other countries' FMD work experience with that of the United States.

A comprehensive analysis to determine if FMD work could be conducted safely on the U.S. mainland would have considered these points, at a minimum.
DHS did not identify or remedy these deficiencies before using the USDA study to support its conclusions. Consequently, we believe DHS does not have evidence to conclude that FMD work can be done safely on the U.S. mainland. We found no evidence that the study examined data from past releases of FMD—particularly the release of FMD on Plum Island in 1978—or the history of internal releases at PIADC. The study did not assess the general history of accidents within biocontainment laboratories, and it did not consider the lessons that can be learned from a survey of the causes of such accidents. Such a survey would show that technology and operating procedures alone cannot ensure against a release, since human error can never be completely eliminated and since a lack of commitment to the proper maintenance of biocontainment facilities and their associated technology—as the Pirbright facility showed—can cause releases. The study panel members we interviewed said that no data on past accidents with or releases of FMD or other pathogens were systematically presented or discussed. Rather, the panel members recalled that they relied on their own knowledge of and experience with the history of releases in a general discussion. The release of FMD virus from facilities is very rare. In fact, the incidence of the release of any dangerous pathogen from modern containment facilities is quite low. The vast majority of the time, such facilities operate safely. Some releases have occurred, however. Table 2 lists known and attributed releases of FMD virus from laboratories worldwide, including those that produce vaccines. A particular deficiency in the 2002 USDA study was the omission of any explicit analysis of the release of FMD virus from Plum Island itself in 1978. In September of that year, FMD virus was found to have infected clean animals being held outside the laboratory compound in the quarantined animal supply area of PIADC.
The exact route by which the virus escaped from containment and subsequently infected the animal supply was never definitively ascertained. An internal investigation concluded that the most probable routes of escape of the virus from containment were (1) faulty air balance of the incinerator area, (2) leakage through inadequately maintained air filter and vent systems, and (3) seepage of water under or through a construction barrier near the incinerator area. Animal care workers then most likely carried the disease back to the animal supply area on the island, where it infected clean animals being held for future work. (See table 3.) An analysis of the deficiencies underlying these probable routes of escape noted during the investigation shows that all were related to human error and that none were related to insufficient containment technology. Any one of these deficiencies could happen in a modern facility, since they were not a function of the technology or its sophistication, procedures or their completeness, or even, primarily, the age of the facility. The deficiencies were errors in human judgment or execution and, as such, could occur today as easily as they did in 1978. In addition, a number of incidents at PIADC have resulted in internal releases such that animals within the laboratory compound inadvertently became infected, although no FMD virus was released outside the facility. These incidents show that technology sometimes fails, facilities age, and humans make mistakes. Table 4 lists known internal releases of FMD virus at PIADC since 1971. These incidents involved human error, lack of proper maintenance, equipment failure, and deviation from standard operating procedures. Many were not a function of the age of the facility or the lack of technology and could happen in any facility today.
While these incidents did not directly result in any external release, they could have been useful in the 2002 study in illustrating the variety of ways in which internal controls—especially in large animal biocontainment facilities—can be compromised. Given the rarity of the release of FMD virus from laboratories, and how relevant its release is to the question of moving FMD work off its present island location, we believe that the 2002 study was remiss in not more explicitly considering this matter. In fact, members of the panel we spoke with could recall little, if any, discussion of incidents of release at Plum Island. Beyond the history of incidents at Plum Island, we found no evidence that the study considered the history of accidents in or releases from biocontainment facilities generally. Had the study considered this history, it would have shown that no facility for handling dangerous pathogens can ever be completely safe and that no technology can be totally relied on to ensure safety. The study found that “today’s technology is adequate to contain any biosafety risks at any site.” While we agree that technology— biocontainment facilities, filtration technologies, and the like—has come a long way and is a critical component of biosafety, we believe that it is inadequate by itself in containing biosafety risks. A comprehensive biosafety program involves a combination of biocontainment technology, proper procedures, and properly trained people. The study also concurred that “biosafety is only as effective as the individual who practices it.” Even with a proper biosafety program, human error can never be completely eliminated. Many experts told us that the human component accounts for the majority of accidents in high-containment laboratories. This risk persists, even in the most modern facilities and with the latest technology. 
The 2002 study, in fact, acknowledged this, although it did not elaborate on the critical role that people play in keeping biocontainment laboratories safe when it stated that "biosafety is only as effective as the individual who practices it." The study's summary conclusion that "biocontainment technology allows safe research" is, therefore, disingenuous. Finally, as we have reported previously, the maintenance of any biocontainment facility or technology plays a critical role in biosafety. For example, the lack of proper maintenance was one of the probable routes of escape in the 1978 release at Plum Island. High-containment laboratories are highly sophisticated facilities that require specialized expertise to design, construct, operate, and maintain. Because they are intended to contain dangerous microorganisms, usually in liquid or aerosol form, even minor structural defects—such as cracks in the wall, leaky pipes, or improper sealing around doors—can often have severe consequences. For example, leaking drainage pipes were determined to be the likely cause of the FMD outbreak at Pirbright in 2007. According to the experts we talked with, failure to budget for and conduct regular inspections and maintenance of biocontainment facilities is a risk to which even the most modern facilities are susceptible. All the experts we talked with, including the panel members who contributed to the 2002 study, emphasized the importance of effective maintenance and the need to protect maintenance budgets from being used for other purposes. One official told us, for example, that as his containment facility ages, he is spending more and more of his operating budget on maintenance and that, in fact, he is having to offset the rise in maintenance costs from other categories of funding within his overall budget.
The 2002 study did not address in detail the issues of containment related to large animals like cattle and pigs, which present problems very different from those of laboratory animals like rats, mice, and guinea pigs. It did not address the unique risks associated with the special containment spaces required for large animals or the impact of highly concentrated virus loads on such things as the air filtration systems. Large animals cannot be kept in containers. They must be allowed sufficient space to move around in. Handling large animals within confined spaces—a full-size cow can weigh up to 1,430 pounds—can present special dangers for the scientists as well as the animal handlers. Moving carcasses from contained areas to necropsy or incineration poses additional risks. For example, one of the internal releases of FMD virus at PIADC happened in transporting large animal carcasses from contained rooms through to incineration. Although it could not have been known to the study group in 2002, transferring FMD work to NBAF is to be accompanied by an increase in both scope and complexity over the current activities at PIADC. These increases in scope and complexity would mean an increase in the risk associated with work at the new facility. For example, the proposed BSL-3 Ag space at the new NBAF is projected to be almost twice the size of the space currently at PIADC and is to accommodate many more large animals. USDA's Agricultural Research Service animal holding area requirements at PIADC specify space for 90 cattle, 154 swine, or 176 sheep (or combinations thereof). Translational studies will involve clinical trials with aerosolized FMD virus challenging groups of 30 to 45 animals and lasting 3 to 6 months. This contrasts with about 16 large animals that PIADC can process today. Moreover, unique risks are associated with BSL-3 Ag facilities, where the facility itself is considered the primary containment area.
In a standard BSL-3 laboratory, in contrast, work is done within a biological safety cabinet, which provides the primary level of containment, eliminating direct contact between the human operator and infected material. The outer parts of the facility walls thus provide a secondary barrier. Because large animals cannot be handled within a biological safety cabinet, they are free to move around in a BSL-3 Ag laboratory, where the laboratory walls provide the primary containment. An important difference between a standard BSL-3 laboratory, such as those used with human pathogens, and a BSL-3 Ag laboratory therefore is that in the latter there is extensive direct contact between the human operator and the infected animal and, consequently, the virus. Because the virus can be carried in a person’s lungs, nostrils, or other body parts, the human becomes a potential avenue by which the virus can escape the facility. Special biosafety procedures are needed—for example, a full shower upon exiting containment, accompanied by expectorating to clear the throat and blowing through the nose to clear the nasal passages. Additionally, a 5-to-7-day quarantine period is usually imposed on any person who has been within containment where FMD virus is present, a tacit acknowledgment that humans can carry the disease out with them even after these additional procedures. Although the study mentioned these matters, it gave no indication that these unique risks associated with working in large animal biocontainment facilities informed the study’s eventual findings. We also found that the study did not consider other safety issues specific to FMD. For example, the study did not look at the likely loads that air filtration systems have to deal with, especially in the case of pigs infected with FMD virus—which, through normal expiration, excrete very large amounts of virus-laden aerosols. 
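The filtration concern that follows can be put in rough quantitative terms. The sketch below (plain Python) uses figures cited in this section: an infectious dose of about 10^1 TCID50 for cattle by inhalation, roughly 10^8.6 TCID50 exhaled daily by an infected pig, and the typical 99.97 percent single-stage HEPA standard. Treating a full day's exhaled load as a single challenge to a two-stage filter train is a deliberately simplified, worst-case assumption for illustration, not a model of any actual facility, which would also have prefiltration, dilution, and other controls.

```python
# Back-of-the-envelope arithmetic on FMD aerosol load vs. HEPA filtration.
# Figures are taken from this section; the scenario itself is illustrative.

pig_daily_output = 10 ** 8.6   # ~4e8 organisms exhaled per day by one infected pig
infectious_dose = 10 ** 1      # ~10 organisms can infect cattle or sheep by inhalation

hepa_efficiency = 0.9997       # typical single-stage HEPA standard (99.97 percent)
stages = 2                     # two HEPA filters in series

# Fraction of particles that penetrate both stages in series
penetration = (1 - hepa_efficiency) ** stages   # about 9e-8

# Organisms passing the filters if a full day's output were the challenge
escaped = pig_daily_output * penetration        # about 36 organisms

print(f"Two-stage HEPA penetration: {penetration:.1e}")
print(f"Organisms escaping per pig-day (worst case): {escaped:.0f}")
print(f"Multiple of the cattle infectious dose: {escaped / infectious_dose:.1f}")
```

Even under two-stage filtration, this simplified arithmetic leaves a few dozen organisms, several times the reported bovine infectious dose, which is the kind of high-volume challenge concern that, as discussed below, the 2002 study did not examine.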
Properly fitted and maintained high-efficiency particulate air (HEPA) filters are a key factor in all modern biocontainment facilities and have a record of being highly effective in keeping aerosolized pathogens, including viruses, contained. Nevertheless, they do not represent an absolute barrier. The typical standard for such filters is that they must operate with an efficiency of at least 99.97 percent. Often the highest-containment laboratories use two HEPA filters in series, in addition to prefiltration systems, to gain increased efficiency. However, we found no indication that the study examined specific filtration issues with the FMD virus or that it questioned the efficiency of such systems specifically in relation to a high-volume challenge of virus, a concern that, while remote, should not have been dismissed, given the very low dose of FMD virus required for animals to become infected. The study cited the experience of three countries around the world in working with FMD—Australia, Canada, and the United Kingdom. While the study cited Australia as a foreign precedent, it noted that Australia has not conducted any FMD work on the mainland. In fact, Australia—by law—does not allow any FMD work on the mainland. In this respect, it is even more restrictive than the United States. Australia maintains a ban on live virus FMD work at all its laboratories, whether on mainland, island, or peninsula, including the laboratory at Geelong—considered by many to be the premier laboratory in the world in terms of state-of-the-art animal containment technology. Australia mitigates the risk FMD poses to its livestock by outsourcing its FMD work to other countries. The Canadian laboratory at Winnipeg was not in operation at the time of the 2002 study and is not an appropriate comparison to the U.S. situation. Canada has decided to conduct FMD work on the mainland. 
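The series-filtration arithmetic mentioned above can be illustrated with a short sketch. The numbers are idealized: the calculation assumes each stage independently meets the 99.97 percent efficiency standard, whereas real-world efficiency varies with particle size, filter fit, and maintenance.

```python
# Illustrative arithmetic only (not from the 2002 study): fraction of
# aerosolized particles penetrating an idealized HEPA filtration chain,
# assuming each stage independently meets the 99.97% efficiency standard.
def penetration(efficiency):
    """Fraction of particles that pass a single filter."""
    return 1.0 - efficiency

single = penetration(0.9997)   # one filter: roughly 0.03% passes
in_series = single ** 2        # two independent filters in series

print(f"single filter passes:  {single:.4%}")
print(f"two filters in series: {in_series:.8%}")
```

Even under this idealized model, penetration is never exactly zero, which is why a very low infectious dose of FMD virus keeps the concern from being dismissed entirely.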
However, it is in a downtown location where there is little likelihood that susceptible animals will be in the immediate neighborhood. In addition, its scope of work for FMD is smaller than the present FMD work at the PIADC facility or the proposed facility. The proposed U.S. sites are potentially more likely to pose a risk, given their closer proximity to susceptible animal populations. The 2002 study used the U.K. Pirbright facility as an example of a precedent for allowing FMD work on the mainland. The study participants could not have known in 2002, however, that an accidental release of FMD virus at the Pirbright facility in 2007 led directly to eight separate outbreaks of FMD on farms surrounding the Pirbright laboratory. This fact highlights the risks of release from a laboratory that is in close proximity to susceptible animals and provides the best evidence in favor of an island location. Finally, the study did not consider the German and Danish situations. For example, all FMD work with large animals in Germany is restricted to Riems, an island just off the northeastern coast of Germany in the Baltic Sea. FMD work in Germany was originally restricted to the island in the 1910s. During the post-World War II period, when Riems was controlled by East Germany, West Germany maintained a separate mainland facility for its FMD research, but after reunification, Germany again decided to restrict all FMD research to Riems and disestablished the mainland facility. Construction is currently under way to expand the facility on the island at Riems. Similarly, Denmark restricts all FMD work to the National Veterinary Institute Department of Virology, on the island of Lindholm. The Danish government has recently made a further commitment to Lindholm and has built a new BSL-3 Ag laboratory exclusively for FMD work on the island. 
While location confers no advantage in preventing a release, location can help prevent the spread of FMD virus and a resulting disease outbreak, if there is a release. An island location can help prevent the spread of FMD virus along terrestrial routes, such as by vehicles splashed with contaminated mud or other material. An examination of the empirical evidence of past FMD releases from research facilities shows that an island location can help keep a release from becoming a more general outbreak. Another benefit of an island location is that it provides a permanent geographical barrier that may not be impregnable but that can more easily allow the Office International des Epizooties (OIE) to declare the rest of the U.S. mainland disease-free from FMD if there happened to be a release on the island. Experts we spoke with—including a number of the expert panel members from the 2002 study—agreed that an island location provides additional protection. They agreed that all other factors being equal, FMD research can be conducted more safely on an island than in a mainland location. A comparison of the releases at Plum Island in 1978 and Pirbright in 2007 provides evidence that an island location can help keep a release from becoming a more general outbreak. In September 1978, FMD virus was found to have been released from containment at PIADC. The exact route of escape was never definitively ascertained, but clean animals held on the island in the animal supply area outside the laboratory compound became infected with FMD. However, no virus was ever found off the island. In fact, after the subsequent investigation by USDA’s Animal and Plant Health Inspection Service found no spread of FMD on the mainland of Long Island, OIE—in consideration of PIADC’s island location—continued to officially consider the United States as a whole free from FMD. This was a significant declaration that allowed the continued unrestricted export of U.S. animal products from the mainland. 
In summarizing the 1978 FMD virus release, the PIADC Safety Investigation Committee identified three main PIADC lines of defense that stood as barriers against the escape of disease agents: (1) the design, construction, and operation of its laboratory buildings; (2) its restrictions on the movement of personnel, materials, supplies, and equipment; and (3) the island location. This internal investigation concluded that although the first two barriers had been breached, probably by human error, the final line of defense—the island location—succeeded in keeping the release from becoming a wider outbreak beyond PIADC itself. The 1978 release at Plum Island can be compared to the release at Pirbright in the summer of 2007. Pirbright is located on the mainland of Great Britain in Surrey, a semi-agricultural area just southwest of London. The U.K. Institute for Animal Health and Merial, a commercial vaccine production plant, are collocated there, and both work with FMD virus. The site is surrounded by a number of “hobby farms,” on some of which 40 to 50 cattle are bred and raised. In summer 2007, cattle on farms near the Pirbright facility became infected with FMD. Subsequent investigations concluded that the likely source of the release was a leaking drainage pipe at the facility that carried waste from the contained areas to an effluent treatment plant. The virus was then spread onto local farms by the splashing of contaminated mud onto vehicles that had unrestricted access to the contaminated area and could easily drive onto and off the site. The investigations determined that there had been a failure to properly maintain the site’s infrastructure. In all, eight separate outbreaks occurred over a 2-month period. A key difference, of course, between the Pirbright incident in 2007 and the incident at Plum Island in 1978 is that the virus did not spread off Plum Island. 
Similarly, escapes in 1968 in Denmark from the Lindholm facility and in the 1970s in Germany from the Riems facility, when compared to Pirbright in 2007, also demonstrate the benefit of an island location in containing a release. Since 1996, OIE has provided a procedure for officially recognizing the sanitary status of countries with regard to particular animal diseases, including FMD. A country can apply for and be granted disease-free status if it can prove that a disease is not present in the country. Ad hoc groups of international experts examine countries’ applications for official recognition of sanitary status. An elected Specialist Commission reviews the recommendations of these groups and either accepts or rejects them. If an outbreak does occur, procedures exist for countries to regain their disease-free status. This offers significant economic benefit, because export bans can be imposed on countries not considered disease-free. In 2002, GAO reported that an export ban on U.S. livestock products because of an FMD outbreak in the United States, similar to the 2001 outbreak in the United Kingdom, could result in losses of $6 billion to $10 billion a year while the nation eradicated the disease and regained disease-free status. Instead of revoking the U.S. disease-free status in response to the 1978 release at Plum Island, OIE continued to consider the United States as a whole free from FMD. This was because of the facility’s island location. This status from OIE allowed the United States to continue exporting animal products from the mainland after the release was identified. However, OIE officials told us that if a similar release were to occur from a facility on the U.S. mainland, OIE would most likely not be able to declare the United States disease-free. 
In their view, the island location provides a natural “zoning” ability that, under OIE’s rules, more easily allows the country to prove the compartmentalization that is necessary for retaining “disease-free” status. Although humans cannot become infected with FMD through contact with infected animals or through eating products of diseased animals, FMD can nonetheless have significant economic consequences, as recent outbreaks in the United Kingdom have demonstrated. Although estimates vary, experts agree that the economic consequences of an FMD outbreak on the U.S. mainland could be significant, especially for red meat producers whose animals would be at risk for disease, depending on how and where such an outbreak occurred. According to a study by the U.K. National Audit Office, the direct cost of the 2001 FMD outbreak to the public sector was estimated at over $5.71 billion and the cost to the private sector was estimated at over $9.51 billion. By the time the disease was eradicated, in September 2001, more than six million animals had been slaughtered: over four million for disease control purposes and two million for welfare reasons. Compensation and other payments to farmers were expected to total nearly $2.66 billion. Direct costs of measures to deal with the epidemic, including the purchase of goods and services to eradicate the disease, were expected to amount to nearly $2.47 billion. Other public sector costs were estimated at $0.57 billion. In the private sector, agriculture and the food chain and supporting services incurred net costs of $1.14 billion. Tourism and supporting industries lost revenues eight times that level—$8.56 billion to $10.27 billion, when the movement of people in the countryside was restricted. The Treasury had estimated that the net economic effect of the outbreak was less than 0.2 percent of gross domestic product, equivalent to less than $3.8 billion. 
The possibility of the introduction of FMD into the United States is of concern because this country has the largest fed-cattle industry in the world, and is the world’s largest producer of beef, primarily high-quality, grain-fed beef for export and domestic use. Although estimates of the losses vary, experts agree that the economic consequences of an FMD outbreak on the U.S. mainland could mean significant losses, especially for red meat producers, whose animals would be at risk for disease, depending on how and where an outbreak occurred. Current estimates of U.S. livestock inventories are 97 million cattle and calves, 7 million sheep, and 59 million hogs and pigs, all susceptible to FMD. The total value of the cash receipts for U.S. livestock in 2007 was $141.4 billion. The total export value of red meat in 2007 was $6.4 billion. These values represent the upper bound of estimated losses. Direct costs to the government would include the costs of disease control and eradication, such as the maintenance of animal movement controls, control areas, and intensified border inspections; the destruction and disposal of infected animals; vaccines; and compensation to producers for the costs of disease containment. However, government compensation programs might not cover 100 percent of producers’ costs. As a result, direct costs would also occur for disinfection and for the value of any slaughtered animals not subject to government compensation. According to the available studies, the direct costs of controlling and eradicating a U.S. outbreak of FMD could vary significantly, depending on many factors including the extent of the outbreak and the control strategy employed. Indirect costs of an FMD outbreak would include costs affecting consumers, ancillary agricultural industries, and other sectors of the economy. 
For example, if large numbers of animals were destroyed as part of a control and eradication effort, then ancillary industries such as meat processing facilities and feed suppliers would be likely to lose revenue. Furthermore, an FMD outbreak could have adverse effects such as unemployment, loss of income (to the extent that government compensation would not fully reimburse producers), and decreased economic activity, which could ripple through other sectors of the economy as well. However, our analyses show that these effects would likely be local or regional and limited in scope. The economic effects of an FMD outbreak would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. The scale of the outbreak would depend on the time elapsed before detection and the number of animals exposed, among other factors. Costs to producers of addressing the disease outbreak and taking steps to recover would similarly vary. The responses of consumers in the domestic market would depend on their perceptions of safety, as well as changes in the relative prices of substitutes for the affected meat products, as supply adjusted to the FMD disruption. In overseas markets, consumers’ responses would be mediated by the actions their governments would take or not take to restrict imports from the United States. Because an overall estimate of effects depends heavily on the assumptions made about these variables, it is not possible to settle on a single economic assessment of the cost to the United States of an FMD outbreak. We have reviewed literature that considers only a few of the many possible scenarios in order to illustrate cost components and to consider the possible market reaction rather than to predict any particular outcome. DHS believes that modern technology, combined with biosafety practices, can provide for a facility’s safe operation on the U.S. mainland. 
Most experts we talked with believe that technology has made laboratory operations safer over the years. However, accidents, while rare, still occur because of human or technical errors. Given the non-zero risk of a release from any biocontainment facility, most of the experts we spoke with told us that an island location can provide additional protection. DHS has not conducted any studies to determine whether FMD work can be done safely on the mainland. Instead, in proposing to move FMD virus to the mainland, DHS relied on a 2002 USDA study that addressed a different question. That study does not clearly support the conclusion that FMD work can be done safely on the mainland. An island location can help prevent the spread of FMD virus along terrestrial routes, such as by vehicles splashed with contaminated mud, and may also reduce airborne transmission. Historically, the United States and other countries as well have seen the benefit of an island location, with its combination of remoteness from susceptible species and a permanent water barrier. Although FMD has no human-health implications, recent outbreaks in the United Kingdom have demonstrated its economic consequences. Estimates for the United States vary but would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. For further information regarding this statement, please contact Nancy Kingsbury, Ph.D., at (202) 512-2700 or [email protected], or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. William Carrigg, Jack Melling, Penny Pickett, and Elaine Vaurio made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DHS is proposing to move foot-and-mouth disease (FMD) research from its current location at the Plum Island Animal Disease Center--located on a federally owned island off the northern tip of Long Island, New York--and potentially onto the United States mainland. FMD is the most highly infectious animal disease known. Nearly 100 percent of exposed animals become infected. A single outbreak of FMD on the U.S. mainland could have significant economic consequences. Concerns have been raised about moving FMD research off its island location and onto the U.S. mainland--where it would be in closer proximity to susceptible animal populations--as opposed to building a new facility on the island. GAO was asked to evaluate the evidence DHS used to support its decision that FMD work can be done safely on the U.S. mainland, whether an island location provides any additional protection over and above that provided by modern high-containment laboratories on the mainland, and the economic consequences of an FMD outbreak on the U.S. mainland. In preparing this testimony, GAO interviewed officials from DHS and USDA, talked with experts in FMD and high-containment laboratories worldwide, and reviewed studies on FMD, high-containment laboratories, and the economic consequences of FMD outbreaks. GAO also visited the Plum Island Animal Disease Center and other animal biocontainment laboratories in other countries. GAO found that the Department of Homeland Security (DHS) has neither conducted nor commissioned any study to determine whether work on foot-and-mouth disease (FMD) can be done safely on the U.S. mainland. Instead, in deciding that work with FMD can be done safely on the mainland, DHS relied on a 2002 U.S. Department of Agriculture (USDA) study that addressed a different question. 
The study did not assess the past history of releases of FMD virus or other dangerous pathogens in the United States or elsewhere. It did not address in detail the issues of containment related to large animal work in BSL-3 Ag facilities. It was inaccurate in comparing other countries' FMD work experience with that of the United States. Therefore, GAO believes DHS does not have evidence to conclude that FMD work can be done safely on the U.S. mainland. While location, in general, confers no advantage in preventing a release, location can help prevent the spread of pathogens and, thus, a resulting disease outbreak if there is a release. Given that there is always some risk of a release from any biocontainment facility, most experts GAO spoke with said that an island location can provide additional protection. An island location can help prevent the spread of FMD virus along terrestrial routes, such as from vehicles splashed with contaminated mud, and may also reduce airborne transmission. Some other countries besides the United States have historically seen the benefit of an island location, with its remoteness from susceptible species and permanent water barriers. A recent release from the Pirbright facility--located in a farming community on the mainland of the United Kingdom--highlights the risks of a release from a laboratory that is in close proximity to susceptible animals and provides the best evidence in favor of an island location. FMD has no health implications for humans, but it can have significant economic consequences, as recent outbreaks in the United Kingdom have demonstrated. The economic effects of an FMD outbreak in the United States, however, would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. Although estimates vary, experts agree that the economic consequences of an FMD outbreak on the U.S. 
mainland could be significant, especially for red meat producers whose animals would be at risk for disease, depending on how and where such an outbreak occurred. 
The No Child Left Behind Act of 2001 increased the federal government’s role in kindergarten-12th grade education by setting two key goals: to reach universal proficiency so that all students score at the proficient level of achievement—as defined by the states—by 2014, and to close achievement gaps between high- and low-performing students, especially those in designated groups: students who are economically disadvantaged, are members of major racial or ethnic groups, have learning disabilities, or have limited English proficiency. With these two key goals in mind, NCLBA requires states to set challenging academic content and achievement standards in reading or language arts and mathematics to determine whether school districts and schools make AYP toward meeting these standards. Education has responsibility for general oversight of the NCLBA. As part of this oversight, Education is responsible for reviewing and approving state plans for meeting AYP requirements. As we have reported, it approved all states’ plans—fully or conditionally—by June 2003. It also reviews state systems of standards and assessments to ensure they are aligned with the law’s requirements. As of April 2006, Education had approved these systems for Delaware, South Carolina, and Tennessee and was in the process of reviewing them in other states. States measure AYP using a status model that determines whether or not schools and students in designated groups meet proficiency targets on state tests 1 year at a time. To make AYP, schools must show that the percentage of students scoring at the proficient level or higher meets the state proficiency target for the school as a whole and for designated student groups, test 95 percent of all students and those in designated groups, and meet goals for an additional academic indicator (which can be chosen by each individual state for elementary and middle schools but must be the state-defined graduation rate in high schools). 
States generally used data from the 2001-2002 school year to set the initial percentage of students that needed to be proficient for a school to make AYP, known as a starting point, as prescribed in the NCLBA and Education’s guidance. Using these initial percentages, states then set annual proficiency targets that increase up to 100 percent by 2014. For example, for schools in a state with a starting point of 28 percent to achieve 100 percent by 2014, the percentage of students who scored at or above proficient on the state test would have to increase by 6 percentage points each year, as shown in figure 1. Setting targets for increasing proficiency through 2014 does not ensure that schools will raise student performance to these levels. Instead, the targets provide a goal, and schools that do not reach the goal will generally not make AYP. School districts with schools receiving federal funds under Title I Part A that do not make AYP for 2 or more years in a row must take action to assist students, such as offering students the opportunity to transfer to other schools or providing additional educational services like tutoring. School districts with schools that meet these criteria must set aside an amount equal to 20 percent of their Title I funds to provide these services and spend up to that amount, depending on the demand for these services. These schools, in consultation with their districts, are also required to implement a plan to improve their students’ achievement. The law indicates that states are expected to close achievement gaps, but does not specify annual targets to measure progress toward doing so. States thus have flexibility in the rate at which they close these gaps. To determine the extent to which achievement gaps are closing, states measure the difference in the percentage of students in designated student groups and their peers that reach proficiency. 
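The starting-point arithmetic described above can be sketched in a few lines. The linear schedule is an assumption for illustration only, matching the figure 1 example; states were not required to raise targets by a uniform amount every year.

```python
# Sketch of the annual-target arithmetic (linear schedule assumed for
# illustration; actual state schedules could differ): a state starting
# at 28% proficient in 2002 must reach 100% proficient by 2014.
start_year, end_year = 2002, 2014
starting_point, goal = 28.0, 100.0

annual_increase = (goal - starting_point) / (end_year - start_year)  # 6.0

for year in range(start_year, end_year + 1, 3):
    target = starting_point + annual_increase * (year - start_year)
    print(f"{year}: {target:.0f}% proficient required")
```

A 72-point climb spread over 12 years yields the 6-percentage-point annual increase cited in the text.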
Using a hypothetical example, figure 2 shows how closing achievement gaps between economically disadvantaged students and their peers would be reported. In this example, 40 percent of the school’s non-economically disadvantaged students were proficient compared with only 16 percent of disadvantaged students in 2002, a gap of 24 percentage points. To close the gap, the percentage of students in the economically disadvantaged group that reaches proficiency would have to increase at a faster rate than that of their peers. By 2014, the gap is eliminated, with both groups at 100 percent proficient. If a school misses its status model target, the law also provides a way for it to make AYP if it significantly increases the proficiency rates of student groups that do not meet the proficiency target. The law includes a provision, known as safe harbor, which allows a school to make AYP by reducing the percentage of students in designated student groups that were not proficient by 10 percent, so long as it also shows progress on another academic indicator. Safe harbor measures academic performance in a way similar to certain growth models, according to one education researcher. For example, in a state with a status model target of 40 percent proficient, a school could make AYP under safe harbor if 63 percent of a student group were not proficient compared to 70 percent in the previous year. See figure 3. In contrast to status models that measure the percentage of students at or above proficiency in a school 1 year at a time, growth models measure change in achievement or proficiency over time. Some of these models show changes in achievement for schools and student groups using students’ average scores. Other models provide more detailed information on how individual students progress over time. 
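The safe-harbor calculation described above reduces to a single comparison. The sketch below is simplified by assumption: it omits the required progress on the state's additional academic indicator and checks only the 10 percent relative reduction in non-proficient students.

```python
# Simplified sketch of the safe-harbor test (the additional academic
# indicator requirement is intentionally omitted here).
def meets_safe_harbor(pct_not_proficient_now, pct_not_proficient_prior):
    """A group makes safe harbor if the share of its students who are
    NOT proficient fell by at least 10 percent (relative) in a year."""
    return pct_not_proficient_now <= 0.90 * pct_not_proficient_prior

# The text's example: 70% not proficient last year, 63% this year.
print(meets_safe_harbor(63.0, 70.0))   # True: exactly a 10% reduction
print(meets_safe_harbor(66.0, 70.0))   # False: only about a 6% reduction
```

Note that the 10 percent reduction is relative to the prior year's non-proficient share, not a 10-percentage-point drop.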
Growth models can enable school officials to monitor the year-to-year changes in performance of students across many levels of achievement, including those who may be well below or well above proficiency. They may also be used to predict test scores in future years based on current and prior performance. While definitions of growth models vary, for this report, GAO defines a growth model as a model that measures changes in proficiency levels or test scores of a student, group, grade, school, or district for 2 or more years. Some definitions restrict the use of the term “growth models” to refer only to those models that measure changes for the same students over time. GAO included models in this report that track different groups of students in order to provide a broad assessment of options that may be available to states. Growth models can be designed to measure successive groups of students (for example, students in the third grade class in 2006 with students in the third grade class in 2005) or track a cohort of students over time (for example, students in the fourth grade in 2006 with the same students in the third grade in 2005). School-level growth models track changes in the percentage of students that reach proficiency or their achievement scores over time. For example, the charts in figure 4 show how two hypothetical schools measure their proficiency with a status model and with a measure of progress over time. In the case of Washington Middle School, a growth model shows a decline in performance, while a status model indicates that the school exceeded the state proficiency target of 40 percent. This school was able to make AYP even though its proficiency rate decreased. In contrast, the use of a growth model with Adams Elementary School shows that the school improved its performance, but its status model results indicate that the school did not meet the 40 percent proficiency target. That school did not make AYP, even though its proficiency rate increased. 
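The status-versus-growth contrast just described can be made concrete numerically. The percentages below are assumed for illustration (the report's figure 4 is itself hypothetical), not taken from real schools.

```python
# Hypothetical numbers (assumed for illustration) showing how a status
# model and a simple growth measure can disagree about the same school.
STATE_TARGET = 40.0  # percent proficient required under the status model

schools = {
    # name: (percent proficient last year, percent proficient this year)
    "Washington Middle": (48.0, 44.0),  # exceeds target, but declining
    "Adams Elementary":  (25.0, 33.0),  # misses target, but improving
}

for name, (prior, current) in schools.items():
    meets_status = current >= STATE_TARGET
    change = current - prior
    print(f"{name}: meets status target={meets_status}, change={change:+.1f}")
```

Under the status model alone, the declining school makes AYP and the improving school does not, which is exactly the different perspective the two model types produce.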
Thus, the type of model used could lead to different perspectives on how schools are performing. Individual-level growth models track changes in proficiency or achievement for individual students over time. For example, individual student growth can be measured by comparing a student’s test scores in 2 consecutive years. A student may score 300 on a test in one year and 325 on the test in the next year, resulting in an increase of 25 points. These scores could then be averaged to measure school-level results as in the previous example. Individual student growth can also be measured over more than 2 years to identify longer-term trends in performance. Additionally, growth can be projected into the future to predict when a student may reach proficiency, and that information may be used to target interventions to students who would otherwise continue to perform below standard. Nearly all states were using or considering growth models to track performance, as of March 2006. Although NCLBA requires states to use status models to determine whether schools make AYP, the 26 states with growth models reported using them for state purposes such as identifying schools in need of extra assistance. Seventeen of these states had growth models in place prior to NCLBA. Twenty-six states reported using growth models in addition to using their status models to track the performance of schools, designated student groups, or individual students, in our survey as of March 2006 (see figure 5). Additionally, nearly all states are considering the use of growth models: 20 of 26 states that used one growth model were also considering or in the process of implementing another growth model, and 22 of 25 states that did not use growth models were considering or in the process of implementing them to provide more detailed information about school, group, or student performance. 
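The individual-level projection idea described above can be sketched with a simple linear extrapolation. This is illustrative only: the cut score of 360 is a hypothetical value, and real state growth models use considerably more sophisticated statistics than averaging year-over-year gains.

```python
import math

# Illustrative only (hypothetical cut score; real growth models are more
# sophisticated): extrapolate a student's average annual gain to estimate
# how many more years until the student reaches the proficiency cut score.
def years_until_proficient(scores, cut_score):
    """scores: one test score per year, oldest first."""
    if scores[-1] >= cut_score:
        return 0
    gains = [b - a for a, b in zip(scores, scores[1:])]
    avg_gain = sum(gains) / len(gains)
    if avg_gain <= 0:
        return None  # flat or declining trend: never projected to reach it
    return math.ceil((cut_score - scores[-1]) / avg_gain)

# The text's example student scored 300, then 325 (a 25-point gain).
print(years_until_proficient([300, 325], 360))   # 2 more years projected
```

A projection like this is what lets a state target interventions to students who would otherwise continue to perform below standard.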
Seventeen of the 26 states using growth models reported that their models were in place before the passage of the NCLBA during the 2001-2002 school year, and the remaining 9 states implemented them after the law was passed, as shown in figure 8. Once NCLBA was enacted, states were required to develop plans to show how they would meet federal requirements for accountability as measured by whether their schools made AYP. Education approved these plans, but generally did not permit states to include growth models. According to Education officials, since NCLBA requires that states make AYP determinations on the basis of the percentage of students who are proficient at one point in time—rather than the increase or decrease in that percentage over time—growth models were considered inconsistent with the goals of the act. For example, California began using its model, called the Academic Performance Index, in the 1999-2000 school year to set yearly growth targets for schools. These targets were based on combined test scores for reading/language arts, mathematics, and other subjects. However, according to officials at the California Department of Education, California’s model, developed prior to NCLBA, was not designed to explicitly achieve the law’s key goals of universal proficiency by 2014 or closing achievement gaps. Further, a California Department of Education official explained that because the model did not report scores from reading, math, and other subjects separately, California was not approved to make AYP determinations using its model. In contrast, Massachusetts’ growth model was in place prior to NCLBA passage and then was adapted to align explicitly with the law’s key goals. Education approved Massachusetts’ AYP plan, allowing the state to use both its status model and growth model to determine AYP. 
Instead of using growth models to make AYP determinations, states used them for other purposes, such as rewarding effective teachers and designing intervention plans for struggling schools. For example, North Carolina used its model as a basis to decide whether teachers receive bonus money. Tennessee used its value-added model to provide information about which teachers are most effective with which student groups. In addition to predicting students' expected scores on state tests, Tennessee's model was used to predict scores on college admissions tests, which is helpful for students who want to pursue higher education. In addition, California used its model to identify schools eligible for a voluntary improvement program. The type of growth model used has implications for how results may be applied. California's model provides information about the performance of its schools, enabling the state to distinguish higher-performing from lower-performing schools. However, the model does not provide information about individual teachers or students. In contrast, Tennessee's model does provide information about specific teachers and students, allowing the state to make inferences about how effective its teachers are. While California may use its results for interventions in schools, Tennessee may use its results to target interventions to individual students. Certain growth models measure the extent to which schools and students are achieving key NCLBA goals. While the use of growth models may allow states to recognize gains schools are making toward the law's goals, it may also put students in some lower-performing schools at risk of not receiving additional federal assistance. While states developed growth models for purposes other than NCLBA, states such as Massachusetts and Tennessee have adjusted their state models to use them to meet NCLBA goals. 
The Massachusetts model has been used to make AYP determinations as part of the state’s accountability plan in place since 2003. This model is approved by Education in part because it complies with the key goal of universal proficiency by 2014. Tennessee submitted a new model to Education for the growth model pilot project that differs from the value-added model we describe earlier. The value-added model, developed several years prior to NCLBA, gives schools credit for students who exceeded their growth expectations. The new model gives schools credit for students projected to reach proficiency within 3 years in order to comply with the key NCLBA goal of showing that students are on track to reach proficiency by 2014. Like status models, certain growth models can measure progress in achieving key NCLBA goals of reaching universal proficiency by 2014 and closing achievement gaps. Our analysis of how models in Massachusetts and Tennessee can track progress toward the law’s two key goals is shown in table 2. Our analysis of data from selected schools in those states demonstrates how these models measure progress toward the key goals. One school in Massachusetts had a baseline score of 27.4 points in math. Its growth target for the following 2-year cycle was 12.1, requiring it to reach 39.5 points by 2004. In comparison, the state’s target using its status model was 60.8 points in 2004. The growth target was set at 12.1 because, if the school’s points increased this much in each of the state’s six cycles, the school would have 100 points by 2014. In so doing, it would reach universal proficiency in that year, as is seen in figure 9. In fact, the school scored 42.6 in 2004, thus exceeding its target of 39.5. The school also showed significant gains for several designated student groups that were measured against their own targets. However, the school did not make AYP because gains for one student group were not sufficient. 
This group—students with disabilities—showed gains of 9.3 points, resulting in a final score of 23.6 points, short of its growth target of 28.6. Figure 10 compares this school's baseline, target, and first cycle results for the school as a whole and for selected student groups. Massachusetts has designed a model that can measure progress toward the key goal of NCLBA by setting growth targets for each year until all students are proficient in 2014. Schools like the one mentioned above can get credit for improving student proficiency even if, in the short term, the requisite number of students have yet to reach the current status model proficiency targets. The model also measures whether achievement gaps are closing by setting targets for designated student groups, similar to how it sets targets for schools as a whole. Schools that increase proficiency too slowly—that is, do not meet status or growth targets—will not make AYP. Tennessee developed a different model that also measures progress toward the NCLBA goals of universal proficiency and closing achievement gaps. Tennessee created a new version of the model it had been using for state purposes to better align with NCLBA. Referred to as a projection model, this approach projects individual students' test scores into the future to determine when they may reach the state's status model proficiency targets. This model was accepted as part of Education's pilot project, allowing the state to use it to make AYP determinations in the 2005-2006 school year. In order to make AYP under this proposal, a school could reach the state's status model targets by counting as proficient those students who are predicted to be proficient in the future. 
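The Massachusetts growth-target arithmetic described above—dividing the gap between a school's baseline index and 100 points evenly across the six 2-year cycles ending in 2014—can be sketched as follows; the function and rounding are illustrative assumptions, not the state's actual implementation:

```python
# Sketch of Massachusetts-style growth targets (illustrative): the gap
# between a school's baseline index and 100 points is divided evenly
# across the six 2-year cycles remaining before 2014.

def cycle_targets(baseline, cycles=6):
    per_cycle_gain = (100.0 - baseline) / cycles  # 12.1 for a 27.4 baseline
    return [round(baseline + per_cycle_gain * (i + 1), 1) for i in range(cycles)]

targets = cycle_targets(27.4)
print(targets[0])   # 39.5 -- the first-cycle target from the example
print(targets[-1])  # 100.0 -- universal proficiency by 2014
```

A school meets its growth target for a cycle when its actual score (42.6 in the example) reaches or exceeds the target for that cycle (39.5), even while it remains below the status model target (60.8).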
The state projects scores for elementary and middle school students 3 years into the future to determine if they are on track to reach proficiency, as follows: fourth grade students projected to reach proficiency by seventh grade, fifth grade students projected to reach proficiency by eighth grade, and sixth, seventh, and eighth grade students projected to reach proficiency on the state’s high school proficiency test. These projections are based on prior test data and are not based on student characteristics. Also, the projections are based on the assumption that the student will attend middle or high schools with average performance (an assumption known as average schooling experience), and allow the student’s current school to count them as proficient in the current year if they are projected to be proficient in the future. Tennessee estimated that of its 1,341 elementary and middle schools, 47 schools that did not make AYP using its status model would be able to make AYP under its proposed model that gives schools credit for students projected to be proficient in the future. At our request, Tennessee provided analyses for students in several schools that would make AYP under the proposed model. To demonstrate how the model works, we selected students from a school and compared their actual results in fourth grade (Panel A) with their projected results for seventh grade (Panel B) (see figure 11). Some students who were not proficient based on their scores in 2004-2005 were projected to be proficient by the time they reach later grades. For example, student A did not score at the proficient level in fourth grade but was projected to score at the proficient level in seventh grade. The state has proposed to determine whether schools make AYP by using the percentage of students who are projected to be proficient (like student A) in the future, instead of the percentage of students presently proficient. 
For example, if 79 percent of an elementary school's students are projected to be proficient on future math tests, the school will make AYP for the state's 79 percent target in the 2005-2006 school year, regardless of the percentage of students in that school who are currently proficient. Tennessee's proposed model can also measure achievement gaps. Under NCLBA, a school makes AYP if all student groups meet the state proficiency target. For example, a school could have a 20 percentage point gap for one group if 59 percent of students with limited English proficiency were proficient compared to 79 percent of their peers. While results based on projections may show that achievement gaps are closing, gaps would actually be closed only if the projections were realized. Using these models to measure progress, states could recognize improvement by allowing some schools to make AYP even though the schools may have relatively low-achieving students. These schools may have a long way to go before reaching 100 percent proficiency and will need to increase student proficiency at a faster rate than schools making AYP under a status model. If a school that receives funds under Title I is unable to sustain this rate of progress, it may have difficulty reaching universal proficiency by 2014. In addition, if a school did not meet status model targets but made AYP by meeting growth model targets, its students may not qualify for additional assistance provided for by NCLBA. Schools that receive Title I funds and that do not make AYP for 2 consecutive years are identified for improvement. According to some school district officials, it may be helpful not to be identified for improvement because they can devise their own interventions instead of implementing school transfer programs or working with state-approved supplemental educational service providers. 
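The projection-based AYP check described above can be sketched with hypothetical students; the 79 percent target and the 59 percent group rate mirror the figures in the text, while the data and function are illustrative assumptions:

```python
# Illustrative sketch (hypothetical students) of a projection-model AYP
# check: students projected to reach proficiency in a future grade are
# counted as proficient now, and every student group must meet the target.

STATE_TARGET = 79.0  # percent target for math in 2005-2006, from the example

# Projected-proficiency flags by group; the 59 percent rate mirrors the
# achievement-gap illustration in the text.
groups = {
    "all students": [True] * 79 + [False] * 21,                # 79 percent
    "limited English proficiency": [True] * 59 + [False] * 41, # 59 percent
}

def makes_ayp(groups, target):
    # A school makes AYP only if every group's projected rate meets the target.
    return all(100.0 * sum(flags) / len(flags) >= target
               for flags in groups.values())

print(makes_ayp(groups, STATE_TARGET))  # False: the 59 percent group falls short
```

As the text notes, a gap that appears closed under this check is closed only if the projections are later realized.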
While delaying these interventions may disadvantage students in some Title I schools, reducing the number of schools identified for improvement could allow for greater concentration of dollars in the lowest-performing schools. In Massachusetts, of the 134 schools in the two districts we analyzed, 23 of the 59 schools that made AYP did so based on the state’s growth model even though they did not reach the state’s status model proficiency rate targets in 2003-2004. The state had its growth model approved by Education as part of its accountability plan and therefore was able to determine that these 23 schools made AYP. One of these schools served a high-minority, low-income population and missed the state proficiency target in English/Language Arts of 75.6 points for the school as a whole and for each of its student groups. For example, one student group, students with disabilities, scored 44.3 points, missing the target by 31.3. However, this school made AYP, because the school as a whole and each of its student groups had shown enough improvement to meet their growth targets—including the group of students with disabilities that improved by 6.8 points. In Tennessee—of the 1,341 schools for which the state made AYP determinations in the 2004-2005 school year—47 of the 353 schools (13.3 percent) that had not made AYP would do so if the state’s proposed projection model were applied. However, some of these schools have many other indicators of needing assistance. For example, one school that would be allowed to make AYP under the proposed model was located in a high-poverty, inner-city neighborhood. That school receives Title I funding, as two-thirds of its students are classified as economically disadvantaged. The school was already receiving services required under NCLBA to help its students. If it makes AYP 2 years in a row, these services may no longer be required. Additionally, estimates of future proficiency often rely on certain assumptions. 
In the case of Tennessee’s proposed model, a key assumption is that students would receive an average schooling experience in the years between when the data were measured and when the final projection is made. According to Tennessee officials, an average schooling experience is defined as one in which a student receives instruction in a school whose performance is the average of all schools in the state. To the extent that a student attends a school with performance that is significantly different from average, actual performance is likely to deviate from the estimates, rendering those estimates less reliable. Moreover, by allowing a school to count students’ future proficiency in the current year, the Tennessee proposal may only be delaying a school’s inability to meet status model targets and forestalling needed assistance. States face challenges in implementing growth models that Education’s initiatives may help address. Challenges states face include the extent to which states’ data and assessment systems will support the models, whether the models can generate valid and reliable results, and states’ expertise to use, manage, and communicate results about growth. These challenges are generally similar to those faced by states in implementing status models but are accentuated because growth models measure progress over multiple years and thus require more data and systems designed to track data over time. Education’s growth model pilot program and data system grants may make it possible for more states to meet AYP requirements using a growth model, but greater usage largely depends upon improving states’ data and assessment systems. One challenge states face in using growth models is the ability to collect comparable data over at least 2 years, a minimum requirement for any growth model. 
States must ensure that test results are comparable from one year to the next and possibly from one grade to the next, both of which are especially challenging when test questions and formats change. Depending on the type of model, states may incorporate scores from 2, 3, or even more prior years. Officials from 13 states that were implementing or considering the use of growth models told us that they need to consider their state’s ability to make comparisons from one year to the next before their model could be operational. Other states that are implementing new data systems or assessments may have to wait a few years before they have enough data to assess progress from one year to the next. For example, one of those state officials said that his state will need at least 3 years of test data in order to set realistic multiyear growth targets for its proposed growth model. Some states currently using growth models, such as Florida and Ohio, have been collecting and comparing student data for several years. A significant challenge to implementing growth models that use student- level data is the capacity to collect these data across time and schools. This capacity often requires a statewide system to assign unique numbers to identify individual students. At least 37 states have systems with unique numbers as of April 2006, according to officials with the Data Quality Campaign (a nonprofit organization that helps states improve data quality). Developing and implementing these systems is a complicated process that includes assigning numbers, setting up the system in all schools and districts, and correctly matching individual student data over time, among other steps. For example, school staff must have students’ unique numbers when students change schools. However, Education officials have cited cases of school staff assigning a new number for a student instead of locating the student’s original number. 
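The record-linking task described above can be sketched with hypothetical IDs and scores; a student mistakenly assigned a new number (here "0004" in place of "0003") shows up as two records that cannot be joined across years:

```python
# Minimal sketch of linking longitudinal records by unique student ID
# (hypothetical data). Only matched IDs yield growth; a student mistakenly
# given a new number appears as two unlinkable records.

year1 = {"0001": 300, "0002": 280, "0003": 310}  # id -> 2005 scale score
year2 = {"0001": 325, "0002": 295, "0004": 290}  # id -> 2006 scale score

matched = year1.keys() & year2.keys()
unmatched = year1.keys() ^ year2.keys()  # e.g., "0003" renumbered as "0004"

growth = {sid: year2[sid] - year1[sid] for sid in matched}
print(sorted(unmatched))  # ['0003', '0004'] -- records that cannot be linked
```

In a statewide system the same matching must hold across every school and district, which is why correctly carrying a student's original number between schools matters so much for growth calculations.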
Additionally, peer reviewers for Education’s growth model pilot project cited concerns about the ability of 3 states to correctly match student data from year to year. Some states have contracted with outside organizations to assist them in establishing these systems. In addition, one model provides a “teacher effect score” as an estimate of the impact that individual teachers have on individual students’ academic achievement, thus requiring even more information. Ensuring data are free from errors is important for calculations using status models and growth models. Doing so is even more important when using growth models, because errors in multiple years can accumulate, leading to unreliable results. Fourteen state officials cited concerns about the design and reliability of growth models in areas such as ensuring data accuracy and measuring progress. States also need greater research and analysis expertise to use growth models, as well as support for those who manage and communicate the models’ results. For example, Tennessee officials told us that they have contracted with a software company for several years because of the complexity of the model and its underlying data system. Florida has a contract with a local university to assist it with assessing data accuracy, including unique student identifiers required for its model. In addition, states will incur training costs as they inform teachers, administrators, media, legislators, and the general public about the additional complexities that occur when using growth models. For example, administrators in one district in North Carolina told us that personnel issues are their main concerns with using growth models. Their district lacks enough specialists who can explain the state’s growth model to all principals and teachers in need of guidance and additional training. 
In an effort to address their limited capacity, district officials told us they have been collaborating with neighboring districts to share training resources regarding the state’s growth model. In November 2005, Education announced a pilot project for states to submit proposals for using a growth model—one that meets criteria established by the department—along with their status model, to determine AYP. Education officials told us that the department is conducting its pilot project under authority provided in the law that, upon request from a state, allows the Secretary to waive certain requirements in the NCLBA. While the NCLBA does not specify the use of growth models for making AYP determinations, the department started the pilot in part to gain information on how these models might help schools achieve the law’s key goals. According to Education officials, 7 states had already requested to use growth models for AYP determinations before the department invited states to submit growth model proposals. For the growth model pilot project, each state had to demonstrate how its growth model proposal met Education’s criteria, referred to as “core principles” outlined in its November 2005 announcement. While many of these criteria are consistent with the legal requirements of status models, tracking student progress over time and having an assessment system with tests that are comparable over time are new (see table 3). Twenty states submitted proposals to Education by the February 17, 2006 deadline. Education reviewed proposals from the 14 states that planned to make AYP determinations for the 2005-2006 school year and forwarded 8 of them for peer review. In May 2006, Education approved North Carolina and Tennessee to use their proposed growth models to make AYP determinations for the 2005-2006 school year. 
Education noted that those states met all of the department’s criteria, such as reaching the key NCLBA goals of universal proficiency and closing achievement gaps. Additionally, Education and peer reviewers noted that those states had many years of experience with data systems that support calculating results using growth models. The 6 states whose proposals had received peer review but were not approved were invited to resubmit proposals in September 2006. Other states that had submitted proposals for the 2006-2007 school year, and those that had not previously submitted proposals, were invited to do so by November 1, 2006, for potential implementation in the 2006-2007 school year. While Tennessee received unconditional approval to implement its proposed growth model, peer reviewers noted they were concerned that Tennessee’s use of “average school experience” is likely to result in inaccurate projections, especially for disadvantaged students. This is because many students attend schools in districts that are struggling, and the schools they are likely to attend 3 years out could provide them with a school experience that is markedly below average. For this reason, Education requested that the state, after it implements the model, provide data to compare actual results with its projections. North Carolina received approval as long as its system of standards and assessments was approved by July 1, 2006. Reviewers of the state’s proposal noted that the state proposed to average student results for calculating growth, instead of examining growth results of all students, in direct violation of Education’s criteria. According to Education, the state changed its original approach so that growth would account for all students and would not use averages. Six states had proposals that were peer-reviewed but not approved. 
The department cited a variety of reasons for not approving these proposals, including that they did not lead to universal proficiency by 2014, applied growth calculations to nonproficient students only (instead of all students), used a margin of error on individual test results that would likely lead to students’ being counted as proficient when in fact they were not, and proposed annually resetting growth targets. Education is allowing these states to resubmit their proposals for review later in 2006. If approved then, they can use growth models to make AYP determinations in the 2006-2007 school year. Approved states must report to Education the number of schools that made AYP on the basis of their status and growth models. Education expects to share the results with other states, Congress, and the public after it assesses the effects of the pilot. In addition to the growth model pilot project, Education announced in April 2005 a competition for grants for the design and implementation of statewide longitudinal data systems. While independent of the pilot project, states with a longitudinal data system—one that gathers data on the same student from year to year—will be better positioned to implement a growth model than they would have been without it. Many states applied to participate in the growth model pilot project or received a grant (see table 4). Longitudinal data systems link data, such as test scores and enrollment patterns, of individual students over time. Education intended the grants to help states generate and use accurate and timely data to meet reporting requirements, support decision making, and aid education research, among other purposes. Education received applications from 45 states for the 3-year grants, and in November 2005, Education awarded a total of $52.8 million in grants to 14 states. States receiving grants must submit annual and final reports on the status of the development and the implementation of these systems. 
Education plans to disseminate lessons learned and solutions developed by states that received grants. While status models provide a snapshot of academic performance, growth models can provide states with more detailed information on how schools’ and students’ performance has changed from year to year. Growth models can recognize schools whose students are making significant gains on state tests but are still not proficient and may provide incentives for schools with mostly proficient students to make greater improvements. Educators can use results from models that measure individual students’ growth to tailor interventions to the needs of particular students or groups. In this respect, models that measure individual students’ growth provide the most in-depth and useful information, yet most of the models currently in use are not designed to do this. Through its approval of Massachusetts’ model and the growth model pilot program, Education is proceeding prudently in its effort to allow states to use growth models to meet NCLBA requirements. Education is allowing only states with the most advanced models that can measure progress toward NCLBA goals to use the models to determine AYP. If schools are allowed to make AYP by getting credit for growth, some lower-performing schools will make AYP and the opportunity for school improvements the federal law prescribes to help students may be missed. However, if schools that show the most growth but do not meet status model targets are permitted to make AYP, states could target Title I school improvement funds to their lowest-performing schools. By proceeding with a pilot project with clear goals and criteria and by requiring states to compare results from their growth model with status model results, Education is poised to gain valuable information on whether or not growth models are overstating progress or whether they appropriately give credit to fast-improving schools. 
We obtained written comments on a draft of this report from the Department of Education. Education’s comments are reproduced in appendix III. Education also provided additional technical comments, which have been included in the report as appropriate. Education commented that it appreciates our concluding observation that the department “is poised to gain valuable information on whether or not growth models are overstating progress or whether they appropriately give credit to fast-improving schools.” Education expressed concern that the definition of growth models used in the report may confuse readers because it is very broad and includes models that compare changes in scores or proficiency levels of schools or groups of students. To inform its pilot project, Education used research that defines the term “growth model” to refer to models that track the growth of individual students. For the purposes of this report, we defined growth models to include models that track growth of schools, groups of students, and individual students over time. While we acknowledge that some research exists to define growth models as tracking the same students over time, other research exists to show that there are different ways of classifying models that states use or could potentially use. As such, the definition used in this report reflects the variety of approaches states are taking to measure academic progress. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Education and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. To address the objectives of this study, we used a variety of methodological approaches. We interviewed experts in the field of measuring academic achievement as well as state, district, and school officials. We also reviewed documentation from states’ Web sites and examined published studies that detailed characteristics and policy issues of states’ models. We conducted a series of interviews in selected states with officials who had a variety of experiences and viewpoints on growth models. In four of those states—California, Massachusetts, North Carolina, and Tennessee—we interviewed officials at the state, district, and school levels so that we could obtain a variety of perspectives on growth models. We selected those states for in-depth interviews based on diverse characteristics of their respective models, all of which were in place prior to the No Child Left Behind Act of 2001 (NCLBA). To address the first objective, we surveyed state education agencies in the 50 states, the District of Columbia, and Puerto Rico and reviewed documentation from states’ accountability workbooks. States reported to us whether they were using or considering the use of a growth model to measure academic achievement. The surveys were conducted using self-administered electronic questionnaires sent in an e-mail to all 52 states beginning January 13, 2006. We closed the survey on March 16, 2006, after the 51st respondent had replied. Puerto Rico did not complete the survey. The survey asked respondents to indicate, first, whether the state was currently using a growth model. GAO classified school-level models, like improvement models, as growth models for the purposes of this report. 
Some restrict the use of the term “growth models” to refer only to those that measure changes for the same group of students or individual students over time (see, for example, Council of Chief State School Officers, Policymakers’ Guide to Growth Models for School Accountability: How do Accountability Models Differ? Washington, D.C.: Oct. 2005). GAO included school-level models in this study to provide a broader assessment of options that may be available to states. If the state was using a growth model, we asked about its characteristics, whether the state was considering use of an additional model, whether the state planned to apply to Education’s growth model pilot program, and how the results from its model were used. If the state was not using a growth model, we asked whether it was considering doing so. We also asked about characteristics of the model under consideration and about key issues that must be addressed in order for it to be implemented. In some cases, we asked additional questions in e-mails and in phone interviews. The other methods we used to learn about states’ models included reviewing documentation from states’ Web sites and examining published studies that detailed characteristics and policy issues of states’ models. To address the second objective, we analyzed data from selected schools from two states, Massachusetts and Tennessee. These states were chosen based on a variety of factors, including expert recommendation, their use of different growth models, geographic diversity, and data availability. Within these states, we selected schools that were in urban, suburban, and rural areas. For Massachusetts, for one urban district and one suburban district, we selected the median school (as measured by the schools’ index values) among schools that had shown growth but had not made adequate yearly progress in the 2004-2005 school year. 
For Tennessee, for one urban district and one rural district, we selected schools that were used in the state’s growth model pilot project proposal. State officials from Massachusetts provided individual student data to GAO from the two selected school districts. GAO reviewed the state’s adequate yearly progress and growth model calculations and replicated school-level index values and calculations using student and statewide data. State officials from Tennessee provided analyses that its contractor had performed, also using individual student data. In both cases, GAO conducted an assessment of the reliability of these data and found the data to be sufficiently reliable for illustrating how growth models measure progress toward key goals of NCLBA. These assessments included electronic testing of data fields and interviews with state officials and, in Tennessee’s case, the contractor as well. These interviews consisted of questions regarding the history of the data system, system audits and security, and possible threats to the systems, among other topics. GAO’s assessments also included reviews of documentation regarding the data systems. To address the third objective, we used data from the survey and information provided to us by Education and state officials. We reviewed documentation related to Education’s growth model pilot project and proposals submitted by several states. We interviewed Education and state officials about the pilot project, including criteria for selection and processes for review and approval. We conducted our work between June 2005 and May 2006 in accordance with generally accepted government auditing standards. The tables below provide specific information on characteristics of states’ growth models (as of the 2005-2006 school year), as reported on the survey. 
This information includes the grades in which growth models were reported, the level at which growth models were reported, the measures of achievement used to determine growth in test scores, and the characteristics of the assessments used to compare students’ test scores. States using growth models varied as to whether or not they used test scores from consecutive grades. Seventeen states reported using growth models in consecutive grades, while 9 states reported using them in nonconsecutive grades. For example, Tennessee uses test scores from grades 4 through 12, while Vermont uses grades 5, 8, and 10. Whether states used test scores from consecutive grades may depend on the type of model they used. The states that reported measuring individual student growth used test scores in consecutive grades (for example, grades 3 through 12 or 4 through 10). In contrast, the 19 states that use school-level information in their growth model calculations varied in the combination of grades they used in their models: 11 of those 19 states used growth models in three or more consecutive grades, while 8 used a variety of grade combinations. For each state with a growth model, table 5 lists the grades in which the state reports school growth and indicates whether the model measures individual student growth. States with growth models reported results for schools but varied in terms of reporting results at other levels, such as the individual student or school district. Table 6 lists the different levels at which states with growth models reported results. The measure of achievement in growth models indicates the methods states use to compare individual and group scores to determine the amount of growth. Table 7 outlines the measures that each state with a growth model used to determine how growth is reported. Growth models rely on data from state proficiency tests and measure growth with a variety of characteristics, as shown in table 8.
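To make the distinction between the two broad approaches concrete, they can be sketched in a few lines of Python. This is a purely illustrative example, not any state’s actual model: the scores, proficiency cutoff, and growth target below are hypothetical, and real state models are far more elaborate.

```python
# Illustrative comparison of a status model (one year's scores judged
# against a fixed proficiency rate) with a simple school-level growth
# model (change in a grade's average score from one year to the next).
# All numbers are hypothetical.

def status_meets_target(scores, proficiency_cutoff, required_rate):
    """Status model: share of students proficient this year vs. a fixed rate."""
    proficient = sum(1 for s in scores if s >= proficiency_cutoff)
    return proficient / len(scores) >= required_rate

def school_growth_meets_target(last_year, this_year, required_gain):
    """School-level growth model: compares this year's average score for a
    grade with last year's average for the same grade (different students)."""
    gain = sum(this_year) / len(this_year) - sum(last_year) / len(last_year)
    return gain >= required_gain

last_year = [210, 225, 198, 240, 205]
this_year = [222, 230, 210, 244, 215]

# A school can miss the status target yet still show meaningful growth.
print(status_meets_target(this_year, 240, 0.5))             # False: 1 of 5 proficient
print(school_growth_meets_target(last_year, this_year, 5))  # True: average rose ~8.6 points
```

A model that tracks individual students over time would instead pair each student’s score with that same student’s score from the prior year, which is why such models require consecutive-grade testing and student-level data systems.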
Blake Ainsworth (Assistant Director), Jason Palmer (Analyst-in-Charge), and Dan Alspaugh (Analyst-in-Charge) managed the assignment. Karen Febey, Shannon Groff, and Robert Miller made significant contributions to this report, in all aspects of the work. Kathy Larin, Harriet Ganson, Lise Levie, Beth Morrison, and Rachael Valliere provided analytic assistance. Luann Moy provided support with the survey. Anna Maria Ortiz and Beverly Ross provided analytic assistance with measuring school results related to key NCLBA goals. Jim Rebbe provided legal support and Mimi Nguyen developed the report’s graphics. No Child Left Behind Act: Improved Accessibility to Education’s Information Could Help States Further Implement Teacher Qualification Requirements. GAO-06-25. Washington, D.C.: Nov. 21, 2005. No Child Left Behind Act: Education Could Do More to Help States Better Define Graduation Rates and Improve Knowledge about Intervention Strategies. GAO-05-879. Washington, D.C.: Sept. 20, 2005. No Child Left Behind Act: Most Students with Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. GAO-05-618. Washington, D.C.: July 20, 2005. Charter Schools: To Enhance Education’s Monitoring and Research, More Charter School-Level Data Are Needed. GAO-05-5. Washington, D.C.: Jan. 12, 2005. No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: Dec. 10, 2004. No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: Sept. 30, 2004. No Child Left Behind Act: Additional Assistance and Research on Effective Strategies Would Help Small Rural Districts. GAO-04-909. Washington, D.C.: Sept. 23, 2004. 
Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004. Student Mentoring Programs: Education’s Monitoring and Information Sharing Could Be Improved. GAO-04-581. Washington, D.C.: June 25, 2004. No Child Left Behind Act: More Information Would Help States Determine Which Teachers Are Highly Qualified. GAO-03-631. Washington, D.C.: July 17, 2003. Title I: Characteristics of Tests Will Influence Expenses; Information Sharing May Help States Realize Efficiencies. GAO-03-389. Washington, D.C.: May 8, 2003.

The No Child Left Behind Act (NCLBA) requires that states improve academic performance so that all students reach proficiency in reading and math by 2014 and that achievement gaps close among student groups. States set annual proficiency targets using an approach known as a status model, which calculates test scores 1 year at a time. Some states have interest in using growth models that measure changes in test scores over time to determine if schools are meeting proficiency targets. To determine the extent that growth models were consistent with NCLBA’s goals, GAO assessed (1) the extent that states have used growth models to measure academic achievement, (2) the extent that growth models can measure progress in achieving key NCLBA goals, and (3) the challenges states may face in using growth models to meet adequate yearly progress (AYP) requirements and how the Department of Education (Education) is assisting the states. To obtain this information, we conducted a national survey and site visits to 4 states. While growth models are typically defined as tracking the same students over time, GAO used a definition that also included tracking schools and groups of students. In comments, Education said that this definition could be confusing. GAO used this definition of growth to reflect the variety of approaches states were taking.
Twenty-six states were using growth models, and another 22 were considering or in the process of implementing growth models, as of March 2006. States were using or considering growth models in addition to status models to measure academic performance and for other purposes. Seventeen states were using growth models prior to NCLBA. Most states using growth models measured progress for schools and for student groups, and 7 also measured growth for individual students. States used growth models to target resources for students who need extra help or to award teachers bonuses based on their school’s performance. Certain growth models can measure progress in achieving key NCLBA goals. If states were allowed to use these models to determine AYP, they might reduce the number of lower-performing schools identified for improvement while allowing states to concentrate federal dollars in the lowest-performing schools. Massachusetts sets growth targets for schools and their student groups and allows them to make AYP if they meet these targets, even if they do not achieve statewide goals. Some lower-performing schools may meet early growth targets but not improve quickly enough for all students to be proficient by 2014. If these schools make AYP by showing growth, their students may not benefit from improvement actions provided for in the law. States face challenges measuring academic growth--such as creating data and assessment systems to support growth models--that Education’s initiatives may help address. The ability of states to use growth models to make AYP determinations depends on the complexity of the model they choose and the extent to which their existing data systems meet the requirements of their model. Education initiated data grants to support state efforts to track individual test scores over time. Education also started a pilot project for up to 10 states to use growth models that met the department’s specific criteria to determine AYP.
Education chose North Carolina and Tennessee out of 20 states that applied. With its pilot project, Education may gain valuable information on whether growth models overstate progress or appropriately credit improving schools.
Although the tsunami’s effects were concentrated in the countries closest to the earthquake’s epicenter in the Indian Ocean, about 100 miles off the coast of Sumatra, it also destroyed communities along some coastlines thousands of miles away. A year later, in December 2005, more than 40,000 persons were still listed as missing and tens of thousands remained in temporary housing. Figure 1 shows the most-affected countries, the numbers of people dead, missing, and displaced, and the estimated damage as a result of the tsunami. Responding to the magnitude of the disaster, the international donor community, including the United States, pledged approximately $13.6 billion to assist with tsunami relief and reconstruction efforts in all of the affected countries. National governments and the European Union pledged $6.2 billion of this amount (45 percent), private individuals and companies pledged $5.1 billion (38 percent), and international financial institutions pledged $2.3 billion (17 percent). These funds are being provided to a wide range of entities involved in implementing relief and reconstruction efforts (see fig. 2). USAID and the U.S. Department of Defense (DOD) and its component services provided immediate assistance to tsunami survivors and to the governments of many of the affected countries, largely completing these efforts by the end of 2005. USAID’s Office of U.S. Foreign Disaster Assistance (OFDA), Office of Transition Initiatives (OTI), and other USAID offices assisted survivors by providing food, water, temporary shelter, and other critical needs. Soon afterward, USAID initiated economic reactivation projects, such as paying people to remove debris in many affected areas. USAID’s emergency relief budget totaled approximately $101 million, including $32 million in Indonesia and $47 million in Sri Lanka. In addition, several DOD component services, including the U.S. Air Force and Navy, provided important emergency relief. 
For example, the Air Force rescued survivors and airlifted supplies, and a Navy hospital ship provided medical support. The supplemental tsunami appropriations law provided up to $226 million to reimburse DOD for its emergency relief activities. As of January 2006, DOD expended approximately $125 million (55 percent), including nearly $79 million for airlift and other flying costs and slightly more than $7 million for health- and medical-related services. Of the remaining $101 million, DOD had not completed a final reconciliation of $47 million, $40 million had been reprogrammed to help cover DOD’s costs in other disaster assistance efforts, and $14 million had lapsed. Table 1 shows U.S. tsunami emergency relief funds budgeted and expended. Of the $908 million appropriated for tsunami relief and reconstruction assistance, $581 million, or 64 percent, was budgeted for reconstruction and other postemergency relief activities. Of this amount, USAID was budgeted $496 million for reconstruction, and other U.S. agencies were budgeted $85 million for various other activities. (See table 2.) USAID’s planned reconstruction efforts in Indonesia and Sri Lanka include its signature projects, such as road and bridge construction; small-scale infrastructure projects, such as rebuilding schools and clinics; technical assistance for good governance; and transition assistance to improve survivors’ livelihoods and, in Indonesia, to build houses. In addition, USAID, through a transfer of funds to the Department of the Treasury, is funding debt relief to the governments of Indonesia and Sri Lanka; in exchange for deferral of a portion of their debt, both governments agreed to use the resources freed by debt deferral for relief and reconstruction-related programs (see app. II for a more detailed description of this aspect of the program). Table 3 shows the funds budgeted for ongoing and planned U.S. reconstruction assistance in Indonesia and Sri Lanka. 
Initial USAID plans call for completing its nonsignature activities in both countries by September 2007 and its signature projects in Indonesia and Sri Lanka by September 2009 and March 2008, respectively. Section 4102 of the supplemental appropriations act requires that, beginning in December 2005, the Secretary of State report to Congress every 6 months on tsunami-related progress, expenditures, and schedules. The report due in December 2005 was provided to Congress on March 22, 2006. Figure 3 shows USAID’s projected timeline for completing its tsunami reconstruction programs in Indonesia and Sri Lanka. USAID has obligated some, and expended small percentages, of its reconstruction funding in both countries and has initiated some of its planned activities. However, USAID may have difficulty completing its reconstruction projects—particularly its large-scale signature projects—within initial cost estimates and schedules because of, among other factors, increased demand and higher costs for construction materials and labor in both Indonesia and Sri Lanka. In Indonesia, USAID has begun many of the reconstruction projects that it plans to complete by September 2007. USAID has obligated about one-third, and expended a small percentage, of the funding budgeted for reconstruction in that country. In addition, USAID has begun to design, and performed preliminary site work on, a 3-mile segment of its large-scale signature infrastructure project, a 150-mile paved road; however, because of a variety of factors, the overall road construction project may overrun cost and time estimates. Similarly, USAID is currently planning and designing its small-scale infrastructure projects and has begun its transition assistance projects, both of which may also exceed cost and schedule projections. 
As of January 31, 2006, USAID had obligated $111 million (32 percent) and expended $9 million (3 percent) of the $349 million budgeted for its reconstruction projects in Indonesia (see table 4). These activities include the signature road construction, small-scale infrastructure construction, technical assistance for good governance, and transition assistance. USAID awarded an initial contract and began work on a segment of its signature road construction project in Indonesia in August 2005, but, owing to various factors, the project may overrun initial cost estimates and schedules. The proposed project, budgeted at $245 million, consists of building a 150-mile paved two-lane road and more than 100 bridges and culverts along the western coast of Aceh Province on the island of Sumatra, from the provincial capital of Banda Aceh to the city of Meulaboh. The tsunami’s impact destroyed or badly damaged much of the original road, a vital transportation route for the region. USAID agreed to reconstruct the road to support the Indonesian government’s overall reconstruction strategy, with the goal of helping to restore the economic strength of the area and promoting the redevelopment of the affected communities. According to an Indonesian government report, the road is key to revitalizing the economy of Aceh Province and to successfully initiating other reconstruction efforts. Figure 4 shows the approximate route of the planned road and photos of damage caused by the tsunami. USAID plans to design and construct the signature road in three distinct phases, with separate contracts for each phase. USAID also entered into interagency agreements with USACE for technical support. In early 2005, USACE and USAID conducted a preliminary assessment of site conditions and prepared the cost estimate that USAID submitted to Congress. The three phases for the signature road project are as follows (see app. III for more details):

1. Maintain a rehabilitated 50-mile temporary segment and construct a short segment. In August 2005, an Indonesian firm began maintaining a temporary 50-mile road segment, from Banda Aceh to Lamno, and designing and constructing a new 3-mile segment. This maintenance work is intended to ensure that the temporary segment, recently rehabilitated by the Indonesian army, remains passable until permanent construction is completed.

2. Design the signature road and supervise its construction. The second contract, for designing most of the 150-mile road and supervising construction work, was awarded to a U.S. firm in November 2005. The firm will supervise construction of the 3-mile segment, develop plans and specifications for the remaining 147 miles, and assist USAID in awarding and supervising construction of the signature road.

3. Construct the signature road. USAID plans to award a third contract by September 2006 to construct the 147-mile segment of the signature road.

However, several factors—limited site information, rising materials and labor costs, and land acquisition issues—may increase the signature road project’s total costs and the difficulties of completing it within the intended time frame. Limited site information. A joint USAID-USACE team initially assessed conditions and developed a cost estimate for building the road. The estimate was based on using undamaged sections of the existing road and large segments of the temporary road placed by the Indonesian Army. A 20 percent contingency was included in the cost estimate because much of the road’s planned route was inaccessible, resulting in the team approximating site conditions and developing plans based on their assumptions. However, actual costs may still exceed the estimate because plans for routing the road have changed. According to USACE, current plans show that large segments of the road are now planned to be placed along new undeveloped routes—not along existing routes as initially planned.
This change is expected to result in the need for more earthwork and related construction activities than originally anticipated. Rising costs. Increasing costs for materials and labor will also likely affect the road construction project’s overall cost. Demand for construction labor and materials has risen dramatically in Aceh Province and, according to USAID officials, will likely continue to rise. For example, a USAID official reported that the price of fuel oil used for construction equipment had risen more than 250 percent, from $0.17 per liter in February 2005 to $0.60 per liter in December 2005. According to the United Nations Development Program, posttsunami construction spending in and around Aceh is expected to increase fortyfold from pretsunami levels, from $50 million to $2 billion per year, and 200,000 additional workers will be needed to meet construction demands. Because the demand for skilled workers is greater than the number available, labor costs for reconstruction projects requiring skilled workers may rise. Land acquisition. Awarding the signature road construction contract by September 2006 may be difficult because of uncertainties regarding the road alignment and acquiring the needed right-of-way. The alignment of the new road will differ from the former road because, in some locations, the former roadbed is either submerged or was rendered otherwise inaccessible by the tsunami’s impact. According to a USAID official, the design contractor intends to propose a final road alignment to Indonesian authorities by mid-May 2006. Once the alignment is approved, the Indonesian government must coordinate with multiple jurisdictions to obtain land. USAID helped establish a technical steering committee with Indonesian government entities to facilitate land acquisition issues. However, progress depends on the Indonesian government’s timeliness in acquiring the land and establishing right-of-way. 
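As a rough check of the magnitudes cited above, the reported fuel price figures are consistent with the stated increase; the short calculation below simply verifies the arithmetic using the two prices from the text.

```python
# Check the reported fuel price increase in Aceh Province:
# from $0.17 per liter (Feb. 2005) to $0.60 per liter (Dec. 2005).
old_price = 0.17
new_price = 0.60
pct_increase = (new_price - old_price) / old_price * 100
print(round(pct_increase))  # about 253, i.e., "more than 250 percent"
```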
USAID expects to have more comprehensive cost estimates and schedule projections for the signature road project in June 2006. USAID has initiated other projects in Indonesia, some of which may exceed initial cost and time estimates. These projects encompass small-scale infrastructure, technical assistance for good governance, and transitional assistance aimed at restoring livelihoods. Small-scale infrastructure. USAID has begun reconstructing schools, clinics, water distribution systems, and small port facilities. Other projects will assist communities in preparing solid waste management plans, rebuilding business districts, and constructing markets. Two planned projects include helping to build a teacher-training facility in Banda Aceh and to rehabilitate the fishing industry by constructing port facilities, fishing vessels, and ice-making facilities. According to the USAID official responsible for overseeing the project, the teacher-training facility project is unlikely to begin as initially scheduled because of the time it has taken to plan and assess site conditions. He added that, even if the project does begin on time, the schedule is unlikely to be achieved, and because of rapidly escalating costs for materials and labor, the project is at risk of exceeding its budget. Technical assistance for good governance. USAID technical assistance and good governance projects in Indonesia are aimed at enhancing reconstruction efforts by facilitating the peace process. The projects include paying consultants to work with the Indonesian government’s Rehabilitation and Reconstruction Agency (BRR), the Audit Board of the Republic of Indonesia (BPK), the supreme audit institution, and local communities. Transition assistance. USAID has begun its transition assistance, including rebuilding shelters and helping restore livelihoods through microenterprise support. However, USAID may face difficulties meeting its shelter construction cost and schedule estimates.
For example, 2 months after agreeing to build 1,000 houses for $4,500 each, the NGO implementing the project informed USAID that, because of escalating prices for fuel, building materials, and labor, the unit cost had risen by more than half, to $7,000. The NGO has tentatively agreed to reduce its budget for other USAID-funded activities, such as upgrading an ice-making facility to assist the fishing industry, and will attempt to solicit private donations to meet its housing commitment. USAID has begun many of its longer term reconstruction efforts in Sri Lanka. By the end of 2005, the agency had obligated 100 percent of its funds and expended approximately 2 percent of reconstruction funding. USAID has started its signature project, which includes building a bridge and other infrastructure, addressing coastal management issues, and constructing vocational education facilities. However, primarily because of shortages of labor and materials, the project faces potential cost and schedule overruns even though it is currently slightly ahead of schedule. USAID has also begun its small-scale infrastructure, governance, and transition assistance projects. As of January 31, 2006, USAID had obligated all $85 million (100 percent) and expended about $2 million (2 percent) of the funds budgeted for longer-term reconstruction efforts in Sri Lanka (see table 5). USAID’s signature project in Sri Lanka began in September 2005, when the agency signed a contract with a major U.S. design and construction management firm. All components of the project—particularly the construction of a bridge at Arugam Bay in eastern Sri Lanka, where tourism is a vital component of the local economy—are consistent with the government of Sri Lanka’s strategic reconstruction plan. The signature project has three components (see fig. 5 for the planned locations):

1. Construction of a bridge and other infrastructure. These activities, largely focused on the Arugam Bay area of eastern Sri Lanka, include rebuilding a bridge spanning the bay and constructing a water treatment facility for nearby towns. Three ports in southern Sri Lanka will also be rehabilitated.

2. Provision of coastal management training. A management organization will provide training in construction and tourism-related skills that USAID considers essential to rebuilding and reactivating the economy in the Arugam Bay area. As of December 31, 2005, the contractor had completed some assessments and plans, but construction work had not yet begun.

3. Construction of vocational education facilities. This component of the project includes constructing two schools and reconstructing approximately eight others.

As in Indonesia, several factors may hamper the completion of USAID’s signature project in Sri Lanka. Limited availability and rising costs of materials and skilled labor. During our visit to Sri Lanka in July 2005, we learned that, as in Indonesia, the increase in construction had led to limited availability of materials and labor and resulted in higher costs. For example, one USAID report noted that the cost of a brick had doubled and similar increases had occurred for cement and lumber. USAID included $2.2 million in the project budget to cover possible materials and labor increases, but USAID officials acknowledged this extra funding may be insufficient to cover costs. Lengthy planning and design of Arugam Bay bridge and other infrastructure. Although USAID signed the contract for the planning and design of the bridge in September 2005, construction of the bridge is not expected to begin until August 2006. Although USAID is slightly ahead of schedule, the length of time required to correctly plan and design the signature bridge project at Arugam Bay may challenge the agency’s efforts to complete the bridge by March 2008, the projected deadline.
Also, construction of a water treatment facility experienced delays due to technical issues that arose during the preliminary assessment. USAID has made some progress in its other projects in Sri Lanka and expects to complete them by September 2007. These projects include small-scale infrastructure, technical assistance and good governance, and transition assistance aimed at encouraging economic activity. Small-scale infrastructure. USAID has leveraged other donors’ funds to increase the scope of some small-scale infrastructure projects, which include the following: USAID entered into a public-private alliance to build playgrounds, some of which include accessibility for the disabled. USAID contributed $0.5 million and attracted $1.5 million from two private organizations, increasing the number of playgrounds planned from 20 to 85. Another project involves rehabilitating community markets and restoring access to potable water. These activities are projected to be completed by mid-2006. Technical assistance and good governance. USAID will provide technical assistance and promote good governance in Sri Lanka. USAID has also budgeted funds to strengthen the Sri Lankan government’s audit capacity. In addition, USAID is providing funds to promote accountable local governance in tsunami-affected regions. Transition assistance. USAID is providing assistance to help tsunami survivors transition from camps to permanent communities. Activities under way include providing businesses with credit and vocational training. We visited a vocational school that USAID was rehabilitating and equipping with computers and found many students who were learning new skills; the principal reported that enrollment had also increased dramatically. To establish financial oversight of its reconstruction programs in Indonesia and Sri Lanka, USAID has augmented its standard financial controls with external and internal audits and efforts to strengthen local accountability. 
To establish technical oversight, USAID has reassigned and hired experienced staff, such as engineers, and acquired additional technical expertise through interagency agreements. However, USAID has not filled some positions that it considers critical to technical oversight. For its reconstruction programs in Indonesia and Sri Lanka, USAID plans to augment its standard financial controls for development assistance programs through additional internal and external audits. USAID also plans to strengthen Indonesia’s and Sri Lanka’s auditing capacities. In addition to its required financial controls, which include preaward surveys of prospective award recipients and financial audits, USAID plans to arrange for additional audits. According to agency officials, USAID intends to sign an agreement with the Defense Contract Audit Agency to concurrently audit funding for USAID’s signature road construction project in Aceh, Indonesia. USAID officials told us the agency is undertaking this work because of the additional risk inherent in large construction projects. USAID’s Inspector General (IG) is also providing oversight of reconstruction programs. The IG is currently auditing the signature road construction project in Indonesia and plans to conduct three additional audits, two in Indonesia and one in Sri Lanka. The IG is undertaking this work with funding included in the May 2005 emergency supplemental legislation. USAID plans to strengthen the capacities of the BPK, the Indonesian government’s supreme audit institution. USAID will provide funding for technical assistance and training to the BPK to enhance its ability to audit donor funds administered by Indonesian government ministries. In Sri Lanka, USAID plans to strengthen the capacities of Sri Lankan government organizations. USAID has hired a consulting firm to work with the Sri Lankan Office of the Auditor General. USAID will also support Sri Lanka’s Commission to Investigate Allegations of Bribery and Corruption.
This work will focus on training and capacity development and is intended to reduce corruption and ensure the proper use of reconstruction funds. Also, in April 2005, USAID participated in an international conference in Jakarta on the importance of managing tsunami assistance funds. The conference, funded by the Asian Development Bank and hosted by the BPK, was intended to highlight the importance of accounting for the large amounts of tsunami reconstruction funds. The conference was attended by representatives of donor countries’ supreme audit institutions, including GAO, and representatives of recipient countries, including Indonesia and Sri Lanka. To establish technical oversight for its reconstruction programs in Indonesia and Sri Lanka, USAID has relocated experienced staff, plans to hire other staff locally, and has acquired additional expertise through agreements with other U.S. agencies. However, it has not filled all needed technical oversight positions. In Indonesia, USAID reassigned two experienced engineers to share responsibilities as the cognizant technical officers and an experienced project manager to assist with the signature road project. A USAID engineer was reassigned and another hired to work in Sri Lanka to oversee the signature infrastructure projects. USAID also plans to hire an additional engineer locally when construction in Sri Lanka commences. In addition, USAID has acquired expertise through three interagency agreements with USACE, totaling $2.9 million, to provide technical assistance for its signature projects, develop scopes of work and cost estimates, and conduct environmental reviews in Indonesia and Sri Lanka. USACE efforts to date include assembling a team that assessed the existing conditions, developed cost estimates, prepared acquisition plans, and performed short-term on-site project management in planning the signature projects. 
Under the most recent interagency agreement, USACE is to provide technical assistance to USAID in Indonesia through the award of the road construction contract, expected in September 2006. As of March 2006, USAID had not filled several positions critical to implementing its construction activities in Indonesia and Sri Lanka. Although USAID hired an engineer to oversee the signature road construction project in Indonesia, the engineer was not expected to begin work until May 2006. In addition, USAID had added two of the three engineers needed to oversee infrastructure construction activities in Sri Lanka. In implementing its tsunami reconstruction programs in Indonesia and Sri Lanka, USAID faces several key challenges, some of which it has taken steps to address. These include working in regions with long-standing civil conflicts, coordinating with host governments and NGOs, and ensuring adequate management of regular programs. Long-standing civil conflicts could affect USAID’s ability to complete reconstruction projects within projected time frames in Indonesia, despite recent advances in a peace process, and have limited USAID’s ability to provide assistance in some tsunami-affected regions in Sri Lanka. Owing to a 30-year conflict between a separatist group and the Indonesian government, the entire province of Aceh, Indonesia, was under a state of emergency prior to the tsunami and access by outsiders was limited. However, within days of the disaster, the Indonesian government lifted the state of emergency to allow access by donors and relief organizations. In August 2005, the separatists signed a peace accord, which both sides appear committed to honoring. However, an NGO monitoring the accord has cautioned that the difficulties of ending the 30-year-old conflict should not be underestimated. To address this challenge, USAID is implementing peace-building initiatives in Aceh Province. 
For example, according to USAID officials, former combatants are working on construction crews rebuilding community water systems. USAID’s aim is to provide income-generating opportunities to former rebel soldiers, thereby strengthening the peace accord. A conflict between the Sri Lankan government and a separatist group, which began in 1983, has intensified since the tsunami and could affect the implementation of some USAID reconstruction programs. Since the tsunami, the number of violent incidents has risen dramatically, primarily in northern and northeastern Sri Lanka, which are largely under separatist control. USAID was not directly implementing development activities in these areas prior to the tsunami, and it is not planning any tsunami-related projects in these areas at present. USAID officials stated that they expect little disruption to most of the agency’s reconstruction efforts in other parts of the country. However, in the eastern Sri Lankan region near the separatist-controlled area, several USAID activities involving construction of small-scale infrastructure have been delayed because of increased violence. USAID officials stated that the signature construction project could also experience delays due to the conflict. As in Indonesia, USAID has incorporated peace-building initiatives into some of its Sri Lankan tsunami reconstruction efforts. One such project, implemented by USAID’s Office of Transition Initiatives (OTI), promotes participation by people of different ethnicities and religions by requiring that they work together toward a shared goal, such as rehabilitating a school. USAID has also dedicated $2.5 million for a reconciliation program in which community members will be trained in mediation skills. USAID has encountered challenges in coordinating its reconstruction efforts with the governments of both countries. In addition, USAID has faced coordination problems with NGOs. 
To address these challenges, USAID has taken steps to improve coordination, avoid duplication of efforts, and minimize gaps in providing assistance to survivors. USAID has faced challenges coordinating its reconstruction activities with the Indonesian government. In April 2005, the Indonesian government established the BRR to coordinate the international response to the tsunami. Since its creation, BRR has used the Indonesian government’s master plan for reconstruction to attempt to control and track organizations involved in reconstruction and has created a publicly accessible database that, according to USAID, is expected to be fully operational by mid-2006. However, according to USAID officials, BRR lacks the capacity for effectively registering donors and coordinating projects. The lack of coordination has resulted in the overlapping of USAID projects with other donors’ projects and in gaps in aid to survivors. A USAID official told us that, in one instance, BRR approved similar water and sanitation project proposals submitted by USAID and an international NGO. USAID negotiated directly with the NGO over which agency would carry out the project and eventually resolved the differences without BRR involvement. In addition, a United Nations official told us that many donor organizations are providing assistance to communities along the coastal road near the capital city of Banda Aceh but that survivors in numerous harder-to-reach areas down the coast and on nearby islands have received little or no aid. To strengthen BRR’s capacity to coordinate and oversee reconstruction efforts, USAID is providing technical assistance and training. However, according to USAID officials, until BRR is able to fully develop its capacities, USAID and other donor organizations will face difficulty in coordinating projects and outreach. Organizational inefficiency and policy shifts in Sri Lanka have led to coordination problems for USAID. 
In January 2005, the Sri Lankan government created a Task Force for Rebuilding the Nation (TAFREN), charging it with assessing needs and coordinating donors. The organization was expected to operate for 3 to 5 years. In its first months of operation, TAFREN developed a needs assessment that drew on World Bank, Asian Development Bank, and other organizations’ information and analyses. TAFREN used the assessment to attempt to avoid duplication but lacked the capacity to ensure that donors registered and coordinated with it. In addition, with donor support, TAFREN began work on a publicly accessible database to track reconstruction projects that is expected to be functional by mid-2006. In November 2005, the newly elected Sri Lankan president disbanded TAFREN and announced the creation of a new coordination mechanism, further increasing potential coordination challenges; however, development of the database is continuing. When we visited Sri Lanka in July 2005, USAID officials told us that TAFREN had taken little action to coordinate donor efforts. They added that TAFREN had been slow to react and lacked decision-making authority. Nonetheless, USAID moved forward with some projects and kept TAFREN aware of its activities. In addition, the Sri Lankan government’s inconsistent policies on rebuilding in coastal areas have affected the progress of some USAID reconstruction programs. Soon after the tsunami, the President of Sri Lanka announced that the Sri Lankan government would begin to enforce a valid, but previously unenforced, law that banned construction within a 100- to 200-meter coastal “buffer zone.” This policy has affected the progress of some USAID projects, such as rehabilitating community markets and building schools. In late 2005, the Sri Lankan government began allowing construction in certain coastal areas, but many survivors were still awaiting approval to rebuild their homes. 
In both Indonesia and Sri Lanka, USAID has encountered challenges in coordinating with some of the scores of NGOs operating in the countries since the tsunami. After the disaster, many NGOs received large amounts of private donations, enabling them to conduct their work without funding from bilateral and multilateral organizations. As a result, some NGOs began implementing reconstruction projects with minimal coordination with such organizations or with the host governments. In Indonesia, coordination with NGOs was particularly difficult during the emergency relief phase but has generally improved since the establishment of BRR, which currently permits only approved NGOs to participate in reconstruction projects. However, with limited resources, BRR cannot be sure it is aware of all NGO activities. For example, according to UN officials, an international NGO constructed new houses and water and sanitation systems near the Indonesian coastline without coordinating with the Indonesian government or other donors to ensure that the housing could be connected to local water and sanitation infrastructure. Because of tsunami-altered water tables and topography in some areas, those communities’ sanitation systems overflowed during certain tidal conditions, inundating the area with untreated sewage. Coordination with NGOs in Sri Lanka has also been problematic, despite TAFREN’s efforts. For example, several NGOs and private donor organizations provided new fishing boats to fishermen. However, according to a bilateral donor official, several communities received too many fishing boats and, as a result, fish stocks in some coastal areas were substantially depleted. On the other hand, coordination in southern Sri Lanka has been more effective than in other parts of the country. There, USAID, NGOs, and other donors agreed that certain organizations would have responsibility for different districts or for different types of assistance, such as housing. 
Coordination meetings are normally held weekly, and TAFREN officials periodically attended. The urgency to quickly plan and implement USAID’s tsunami-related program activities in Indonesia and Sri Lanka may affect the management of some non-tsunami-related projects. In Indonesia, USAID officials are concerned that the focus on tsunami reconstruction activities in Aceh could limit oversight of regular programs, leading them to rely more heavily on information provided by implementing partners. To mitigate this potential challenge, USAID added two direct-hire U.S. staff to fill two key positions in Aceh. In Sri Lanka, USAID is experiencing similar challenges. For example, a USAID activity to reconstruct small-scale infrastructure was suspended so that staff could focus on tsunami relief. Later, the program was reactivated, although USAID did not add staff. As a result, USAID’s monitoring of some regular program activities diminished. USAID reported that it reduced its efforts to involve the community in the program, resulting in repeated additional visits to ensure the program’s successful completion. To address this issue, USAID hired additional staff to ensure that ongoing programs are not neglected. The U.S. government has played an important role in helping Indonesia and Sri Lanka recover from the devastating 2004 tsunami. USAID and other agencies provided immediate assistance to survivors, and work has begun on several high-profile infrastructure projects. However, since USAID made its initial projections in the spring of 2005, materials, labor, and fuel costs have increased substantially in both countries. In addition, changes to project scope and ongoing design work for key construction efforts may reveal actual conditions that differ from initial assessments, potentially leading to higher than planned costs. This information suggests that the cost contingencies included in the initial estimates may be insufficient. 
Congress needs current information on projected costs and schedules to provide appropriate oversight. On the basis of our initial review of USAID’s design and implementation of its tsunami reconstruction programs in Indonesia and Sri Lanka, especially regarding its signature road project in Indonesia and bridge project in Sri Lanka, we recommend that the Secretary of State, in the department’s required semiannual report to Congress due in June 2006, provide updated cost estimates and schedules obtained from USAID. If the updated information differs substantially from initial projections, the report should also include alternative cost estimates, schedules, and project scopes and the need for additional sources of funding, if necessary. At our request, USAID and the Department of State provided written comments and technical suggestions and clarifications on a draft of this report. (See app. IV for State’s written comments and app. V for USAID’s written comments.) USAID stated that the report findings accurately describe the tsunami program situation and the potential broad challenges for achieving its reconstruction goals. USAID also provided information on additional steps the agency intends to take to mitigate the potential for increased costs and schedule delays, as well as an explanation of how it obligates funds, which we incorporated into the report. The Department of State agreed to fully implement our recommendation. We have also incorporated technical suggestions and clarifications from USAID and State, as appropriate. We also requested comments from the U.S. Army Corps of Engineers and the Department of the Treasury. Although neither provided written comments, both provided technical suggestions and clarifications that we have incorporated, as appropriate. We are sending copies of this report to interested congressional committees as well as the Administrator, USAID; Commander, U.S. Army Corps of Engineers; and the Secretaries of State and the Treasury. 
We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. We were directed to monitor the delivery of U.S. reconstruction assistance to the tsunami-affected countries through periodic visits. In this report, we review (1) the progress of the U.S. Agency for International Development (USAID) in providing longer-term reconstruction assistance in Indonesia and Sri Lanka, (2) the extent to which USAID has established financial and technical oversight for its tsunami reconstruction programs in those countries, and (3) any challenges that USAID faces in implementing the Indonesian and Sri Lankan programs and any steps the agency has taken to address these challenges. To determine the progress of USAID’s reconstruction programs in Indonesia and Sri Lanka, we met with officials of USAID’s Bureau for Asia and the Near East and Office of Foreign Disaster Assistance and with the U.S. Army Corps of Engineers (USACE). In addition, to examine issues involving the U.S. debt relief component of the assistance to Indonesia and Sri Lanka, we conducted work at the headquarters offices of the U.S. Departments of State and the Treasury. We traveled to Indonesia in August and December 2005 and to Sri Lanka in July 2005. In Jakarta and Banda Aceh, Indonesia, and in Colombo, Sri Lanka, we reviewed USAID’s strategies, work plans, and applicable contracts, grants, and cooperative agreements and discussed with USAID and other U.S. officials how their respective programs addressed reconstruction needs. 
During our visits to Indonesia, we reviewed USAID’s activities in tsunami-affected areas, including the $245 million, 150-mile signature road construction project in Banda Aceh. In many instances, we visited and photographed sites before the projects began, at locations where USAID-funded maintenance work was ongoing, or where USAID-funded construction had begun. During these trips, we interviewed representatives of contractors, nongovernmental organizations (NGO), government ministries, and other entities responsible for day-to-day project implementation. We also interviewed many of the intended recipients of U.S. assistance, asking about the tsunami’s impact on their homes, livelihoods, and communities and about the effectiveness of U.S.-funded projects in helping them rebuild infrastructure, restore their livelihoods, and obtain basic services. Finally, we reviewed prior GAO reports on USAID disaster assistance efforts. To assess USAID’s financial and technical oversight, we reviewed USAID’s financial procedures and discussed the procedures with cognizant USAID officials. In Indonesia, a licensed GAO professional engineer met with USAID and USACE engineers and other technical staff to discuss the level of technical oversight and planning. We also coordinated with USAID’s Office of the Inspector General in Washington, D.C., and the Philippines to minimize duplication of efforts and to share information. To determine the challenges that USAID faces in implementing its program, we discussed oversight procedures and financial systems with officials of host governments, multilateral and bilateral donors, and NGOs involved in reconstruction efforts. We also met with host government officials, including national and local officials, to discuss their procedures for ensuring that donor activities did not conflict or overlap and their views on donor coordination. 
We assessed the reliability of funding and expenditure data compiled and generated by USAID’s Office of the Controller in Washington, D.C., and by the USAID missions in Indonesia and Sri Lanka. We met with USAID officials to review the internal controls for the collection and review of data, comparing the consolidated reports with mission-specific reports, and discussed relevant data reliability issues with cognizant agency officials. In addition, we interviewed knowledgeable USAID officials about the systems and methodology they use to verify the completeness and accuracy of the data. Finally, we reviewed relevant reports from the USAID Office of the Inspector General and several GAO reports of USAID disaster reconstruction program funding since 1999. None of these sources noted any significant discrepancies or concerns about the reliability of USAID’s data. Based on our comparison of data generated from different USAID sources at USAID headquarters and mission, we found that the sources generally corroborated each other, increasing our confidence that the data were reliable. We determined that USAID’s funding and expenditure data were sufficiently reliable for our analysis. To make resources available for the Indonesian and Sri Lanka governments to address humanitarian and reconstruction needs after the tsunami, and at the request of these governments, the United States and other international donors agreed to defer the payment of some eligible debt the Indonesian and Sri Lankan governments were due to pay in 2005. Both countries agreed to use the debt relief to help recover from the tsunami’s extensive damage, estimated at $4.5 billion in Indonesia and $1.5 billion in Sri Lanka. With funding appropriated in the emergency supplemental legislation enacted in May 2005, the United States provided $20.1 million and $3.2 million to cover the U.S. budget costs of debt deferral for Indonesia and Sri Lanka, respectively. 
Using these funds, the United States rescheduled about $190 million in 2005 debt payments from Indonesia and about $40 million in 2005 debt payments from Sri Lanka. The U.S. debt relief agreements with Indonesia and Sri Lanka require independent outside evaluations to ensure that the countries comply with the agreements’ terms, which require that the resources freed by the Paris Club debt consolidation and deferral directly benefit the people affected by the tsunami. According to the Department of the Treasury, the benefit from the international debt rescheduling is $236 million for Indonesia and $34 million for Sri Lanka. According to our analysis, the net benefit of the debt rescheduling for Indonesia and Sri Lanka is about 9 percent and 11 percent, respectively, of the amount of the debts rescheduled. The United States will rely on periodic reports from each country’s regular consultations with the International Monetary Fund, the World Bank, and the Asian Development Bank to measure compliance with the agreements, according to Department of State officials. The officials stated that both countries are likely to meet their commitments to use the resources freed by the Paris Club debt consolidation and deferral to directly benefit the people affected by the tsunami. The total amounts of bilateral international debt rescheduled in 2005 for Indonesia and Sri Lanka were approximately $2.703 billion and $323 million, respectively. Indonesia did not seek or receive any debt deferral from multilateral creditors. Sri Lanka sought a debt deferral from the International Monetary Fund, which granted a 1-year extension on about $106 million in repayments due in 2005. Before the debt reschedulings, multilateral debt service in 2005 was $4.3 billion for Indonesia and $294 million for Sri Lanka, accounting for 55 percent and 48 percent of each country’s total debt service, respectively. 
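As a consistency check, the net-benefit ratios cited above follow directly from the dollar figures in this section; a short calculation, using only the quoted benefit amounts and rescheduled totals, reproduces the roughly 9 and 11 percent figures:

```python
# Reported net benefit of the 2005 debt reschedulings, divided by the
# total bilateral debt rescheduled for each country (figures as quoted above).
benefit = {"Indonesia": 236e6, "Sri Lanka": 34e6}         # dollars
rescheduled = {"Indonesia": 2.703e9, "Sri Lanka": 323e6}  # dollars

for country in benefit:
    ratio = benefit[country] / rescheduled[country]
    print(f"{country}: {ratio:.0%}")
# Indonesia: 9%
# Sri Lanka: 11%
```

The ratio is simply the reported benefit divided by the rescheduled total, so the percentages round to the values stated in the report.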
Rescheduling international debt provided immediate budgetary savings for both countries in 2005, but both countries’ debt burden will increase when payments restart in 2006. Both countries agreed to repay the rescheduled debt in seven equal semiannual installments, starting December 1, 2006, and ending December 1, 2009. Table 6 shows the impacts of rescheduling all bilateral debt on the budgets of the governments of Indonesia and Sri Lanka for fiscal years 2005 through 2009. USAID developed plans to implement its signature project in three phases (see table 7). The following are GAO’s comments on the U.S. Agency for International Development’s letter dated March 30, 2006. 1. USAID states that it intends to use fixed-price contracts because the contracts provide the maximum incentive for the contractor to control costs and perform effectively in order to complete the work on time. We agree that fixed-price contracts can be effective in controlling costs by shifting performance risk to the contractor. However, as USAID also notes and we point out in the report, costs may increase due to other circumstances, such as site conditions being different from expected and the potentially lengthy process of acquiring land. As our recommendation indicates, it is important that Congress be kept informed of cost and expenditure information in order to effectively oversee expenditures of U.S. funds. 2. We modified the text of footnote 5 and added explanatory notes to tables 4 and 5 to reflect USAID’s comments regarding obligations and expenditures of funds. David Gootnick, (202) 512-4128 or [email protected]. In addition to the contact named above, Phillip Herr, George Taylor, Michael Armes, Ming Chen, Reid Lowe, Michael Maslowski, and Thomas Zingale made key contributions to this report. 
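The repayment terms described above (seven equal semiannual installments from December 1, 2006, through December 1, 2009) can be sketched as a simple schedule. The even division of a rescheduled total across the seven payments is an illustrative assumption, not the official amortization table:

```python
from datetime import date

def repayment_schedule(total, start=date(2006, 12, 1), installments=7):
    """Equal semiannual payments; with the defaults, seven payments
    from December 1, 2006, through December 1, 2009."""
    schedule = []
    d = start
    for _ in range(installments):
        schedule.append((d, total / installments))
        # Advance six months: December -> June of the next year, June -> December.
        d = date(d.year + 1, 6, 1) if d.month == 12 else date(d.year, 12, 1)
    return schedule

# Illustrative run: the ~$190 million in Indonesian 2005 payments
# rescheduled by the United States, per the figures above.
for due, amount in repayment_schedule(190e6):
    print(due.isoformat(), f"${amount / 1e6:.1f}M")
```

Run with $190 million, this prints seven payment dates, each a June 1 or December 1, with the last due December 1, 2009.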
In December 2004, an earthquake off the coast of Indonesia caused a tsunami that left more than 230,000 people killed or missing and presumed dead and an estimated $10 billion in damage in 12 countries. In May 2005, Congress appropriated $908 million for relief and reconstruction. U.S. emergency relief efforts budgeted at $327 million were nearly completed in December 2005. The U.S. Agency for International Development (USAID) plans to spend $496 million on longer-term reconstruction, focusing on Indonesia and Sri Lanka, with the remaining $85 million allocated to other U.S. agencies. GAO has been mandated to monitor USAID's reconstruction efforts. In this report, GAO describes USAID's (1) progress in Indonesia and Sri Lanka, (2) financial and technical oversight measures, and (3) implementation challenges. USAID has begun a number of reconstruction activities in Indonesia and Sri Lanka. As of January 31, 2006, approximately 8 months after Congress appropriated funding, USAID had obligated $111 million (32 percent) and expended $9 million (3 percent) of the $349 million budgeted for reconstruction in Indonesia, and it had obligated all and expended $2 million (2 percent) of the $85 million budgeted for reconstruction in Sri Lanka. However, rising prices of materials and labor in both countries may increase costs for many construction efforts, including USAID's "signature" projects, which are intended to generate greater visibility for U.S. assistance. In addition, revisions to initial assessments of site conditions may challenge USAID's ability to finish its signature project in Indonesia--a 150-mile road in Aceh Province--by September 2009, the estimated completion date. In Sri Lanka, the time needed to complete designs and plans may make it difficult to finish one part of USAID's signature project--a bridge at Arugam Bay--by March 2008, although this project is currently slightly ahead of schedule. 
USAID plans to complete most of its other reconstruction projects, such as building schools and restoring livelihoods, by September 2007. USAID has established financial and technical oversight for its tsunami recovery programs in Indonesia and Sri Lanka. For financial oversight, USAID plans to arrange a concurrent audit of the signature road project in Indonesia and strengthen Indonesian and Sri Lankan audit capacities. For technical oversight, USAID has begun to add staff to oversee its signature construction projects and has acquired additional construction engineering expertise from another U.S. agency. An additional engineer will start work in Indonesia in May 2006. In Sri Lanka, USAID has added two engineers to its staff and plans to hire an additional construction oversight engineer in April 2006, prior to beginning construction. In implementing its Indonesian and Sri Lankan reconstruction programs, USAID faces several broad challenges. These include working in regions with long-standing conflicts, coordinating with host governments and nongovernmental organizations, and ensuring that non-tsunami-related development assistance activities are not neglected. To address these challenges, USAID has taken actions such as engaging in peace-building initiatives, participating in regularly scheduled coordination meetings, and hiring and reassigning staff to assist with increased workloads.
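The obligation and expenditure rates quoted in the summary can be verified from the budget figures; the following quick check (dollar amounts in millions, as reported for January 31, 2006) reproduces the cited percentages:

```python
# Reconstruction budgets, obligations, and expenditures as of Jan. 31, 2006,
# in millions of dollars, taken from the figures quoted in the summary.
countries = {
    "Indonesia": {"budget": 349, "obligated": 111, "expended": 9},
    "Sri Lanka": {"budget": 85,  "obligated": 85,  "expended": 2},
}

for name, c in countries.items():
    print(f"{name}: obligated {c['obligated'] / c['budget']:.0%}, "
          f"expended {c['expended'] / c['budget']:.0%}")
# Indonesia: obligated 32%, expended 3%
# Sri Lanka: obligated 100%, expended 2%
```

Sri Lanka's fully obligated $85 million corresponds to the summary's statement that USAID "had obligated all" of that budget.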
The need to transform the military services has been widely recognized in a number of DOD policy papers, reports, and strategy documents. The national security strategy, the national military strategy, the Secretary of Defense’s guidance to the services, the 1997 Quadrennial Defense Review, and the Chairman of the Joint Chiefs of Staff’s Joint Vision statements (2010 and 2020) all cite the need to transform U.S. armed forces to maintain military dominance in the new security environment. Over the last several years, the Navy has undergone some reorganization, shifted its science and technology funding, and undertaken a wide range of experiments and innovation activities. A key organization for carrying out the Navy’s transformation has been the Navy Warfare Development Command, which was established in June 1998 to develop new operational and warfighting concepts to plan and coordinate experiments based on new concepts and to develop doctrine. The Command has been preparing a capstone concept based on network centric warfare that is to serve as a guide for future naval operations. The Command has also planned and coordinated a series of major experiments involving the fleets to evaluate many of the concepts and technologies associated with network centric warfare. Before it established the Command, the Navy did not have an organization dedicated to operational experimentation. The Command’s fiscal year 2000 and 2001 budgets are about $45.3 million and $44 million, respectively. In fiscal year 2002, the Command’s budget is projected to decline to about $41.7 million. Almost half of each annual budget is allocated to experimentation-related activities. Two other organizations important to transformation are the Naval War College and the Chief of Naval Operations’ Strategic Studies Group. 
The college conducts war games that test concepts and potential technologies. Its close working relationship with the Navy Warfare Development Command provides an avenue for new concepts to be further evaluated and integrated into experimentation efforts. The Strategic Studies Group, composed of a small group of senior Navy, Marine Corps, and Coast Guard officers, generates and analyzes innovative and revolutionary naval warfighting concepts and reports directly to the Chief of Naval Operations. Recent studies have centered on attacking land targets from the sea, future surface ship deployments, new crewing concepts, and multitiered sensor grids. In 1999 the Navy reorganized its science and technology resources into 12 future naval capabilities to focus more sharply on the capabilities needed over the next 10-15 years. Senior Navy and Marine Corps officials lead integrated product teams that prioritize individual efforts in the capability areas. The Navy’s science and technology budget has remained relatively static over the last decade and has decreased as a percentage of its total budget. The Navy currently allocates about 35 percent of its science and technology budget to support its future naval capabilities. The Navy plans further refinements to its science and technology structure, including the possibility of adding or subtracting individual future naval capabilities. Appendix I provides further information on the future naval capabilities. Since March 1997, the Navy has also conducted nine fleet battle experiments. The experiments are assessed to determine which new operational concepts, tactics, and technologies prove workable and what follow-on experimentation to pursue. The Navy Warfare Development Command is also coordinating with other military organizations to jointly lease one or more dual-hulled high-speed ships for a broad range of experiments. 
For 18 months starting in September 2001, the Navy will conduct a series of experiments to explore potential uses for such vessels, including amphibious lift, armament configuration, and helicopter operations. Appendix II provides examples of issues explored in the fleet battle experiments. Finally, the Navy conducts a wide range of innovation activities. For example, the Third Fleet has set aside a portion of its command ship, the U.S.S. Coronado, to test innovations related to command, control, communications, computers, and intelligence concepts. Appendix III provides some examples of these innovation activities. The Navy is conducting a variety of transformation activities: it is experimenting with new technologies, it has made some organizational changes, it has introduced the new network centric warfare concept, and it is pursuing a wide range of innovations. However, the Navy has not developed an overarching, long-term strategy that integrates these activities or that clearly defines transformation goals, organizational roles and responsibilities, timetables for implementation, resources to be used to achieve its transformation goals, and ways to measure progress toward those goals. In other words, the Navy does not have a strategic plan and roadmap for its transformation that shows where it wants to go; how it proposes to get there; and how transformation will be managed, funded, implemented, or monitored. The lack of a plan and roadmap has contributed to confusion within the Navy and DOD about what constitutes the Navy’s transformation. The adoption of an evolutionary approach to transformation has so far not led the Navy toward careful and full consideration of all the strategic, budgetary, and operational elements of transformation. 
Additionally, the Navy’s progress has been adversely affected by insufficient support for new organizations responsible for leading transformation efforts, limited conduct of long-term experiments, and a variety of Navy-wide innovation activities that are not well coordinated and tracked. There is no clear consensus on the precise definition, scope, or direction of Navy transformation. In discussions with Navy and DOD officials and outside defense experts, we found there was some confusion about what constitutes transformation and about the role of the network centric warfare concept, which is the centerpiece of the Navy’s transformation efforts. The Navy has not developed a plan that clearly identifies what transformation is and what its goals or components are. For example, although network centric warfare is clearly a fundamental concept for the Navy’s future operations, the Navy still has not made it clear how the concept fits in with its many ongoing transformation activities or with its overall transformation efforts, what effects the concept will have on the types and composition of forces, or how the concept’s many components will be integrated with each other or with those of the other services. The Navy plans to soon publish a capstone concept document for its future force. The concept document is expected to apply the tenets of network centric operations to the Navy’s vision statements and identify some of the capabilities required to implement these tenets. Navy Warfare Development Command officials believe the concept document is critical to the success of the Navy’s transformation, and they expect the concept document to be approved by the Chief of Naval Operations in the near future. Good management practices and the advice of defense experts both inside and outside the Navy suggest that a clear strategy is central to the success of transformation efforts. 
DOD and Navy officials and outside defense experts identified a number of benefits that can be obtained from strategic planning. Navy officials at headquarters and several commands stated that establishing an agreed-upon definition of transformation would be vital for explaining what constitutes transformation. Most Navy officials we spoke with believe that a strategic plan and roadmap would bring greater coherence to the Navy’s transformation efforts. A strategic plan and roadmap would also provide the Congress with a means to evaluate and make optimal decisions on the Navy’s transformation. The need for a strategic plan when attempting major organizational and operational changes, such as those the Navy is undertaking, has also been long recognized in the private sector as a best business practice. We discussed the need for a strategic plan and roadmap with a wide range of DOD and Navy officials and with outside defense experts, many of whom have been directly involved in advising DOD on military transformation. These individuals agreed that such a plan should clearly articulate the Navy’s transformation goals and objectives, priorities, specific responsibilities, and linkages with other organizations, as well as the scope of activities and the resources necessary to carry them out. These management tools should also identify the challenges and obstacles that need to be addressed and should include understandable, simple, and reasonable metrics to provide ways to gauge progress, provide information to decisionmakers, and make necessary changes. Some Navy officials expressed caution that such a plan should not dictate a particular force structure but rather provide the elements of the process to guide the transformation efforts. Appendix IV provides additional information on the key factors for successful transformation planning and management. 
The same officials and experts said that further complicating Navy transformation planning efforts is the absence of clearly articulated transformation guidance from the Secretary of Defense and the Chairman of the Joint Chiefs of Staff to the military services. The Secretary and the Chairman have provided only broad guidance on the direction and progress of military transformation and on the types of future capabilities required for transforming the military. The responsibility for clearly identifying priorities and developing an implementation plan for their transformations has been left to the individual services. However, it is widely recognized that the success of future joint operations requires careful joint planning and integration. Various organizations, including the Defense Science Board, have cited the need for the Secretary of Defense to provide clear guidance on transformation. In 1999, the Board called for an explicit strategy, or a master plan; a roadmap; and outcome-related metrics to assess progress. In its annual performance plan, issued pursuant to the Government Performance and Results Act of 1993, DOD identified the transformation of U.S. forces among its performance goals. The act requires federal agencies to clearly define their missions, set goals, link activities and resources to goals, prepare annual performance plans, measure performance, and report on accomplishments. However, we recently reported that two of the transformation's three underlying metrics—procurement spending and defense technology objectives—do not provide a direct link toward reaching that performance goal. Without such metrics, DOD cannot adequately assess its progress toward transforming its forces for the 21st century. The Navy would be expected to provide input to such a DOD effort and should therefore have its own clearly articulated transformation plan. 
The Navy has adopted what it calls an evolutionary approach to transformation, meaning that its effort is more about incremental changes in its force posture than in its force structure. The Navy believes that this is an appropriate path to follow since it already is an expeditionary, self-sustaining, and mobile force with worldwide reach. What it needs to do, the Navy asserts, is to improve its expeditionary capabilities by focusing less on the types of ships in its force structure and more on linking them together through data networks—hence the network centric warfare concept. This evolutionary approach, however, has so far not led the Navy toward careful and full consideration of all the strategic, budgetary, and operational elements of transformation. Through its approach, the Navy has also allowed almost a decade to pass with slow progress in a number of key transformation areas. Without the benefit of an overarching strategic plan and roadmap, the Navy has not taken the steps necessary to explore the possibilities of long-term changes to its force structure and operations to adequately address near- and long-term security requirements within existing and projected fiscal parameters. There are at least three reasons why the Navy may need to adopt a more far-reaching and considered approach to its transformation: (1) it may not be able to recapitalize its existing forces at current shipbuilding rates, which might necessitate more fundamental changes in force structure and operations than it currently plans; (2) new operational concepts and technologies needed to operate in littoral areas may be coming into the force too slowly, given the increased importance of littoral operations recognized by the Navy; and (3) there are substantial technological challenges presented by network centric warfare that could take a long time and considerable effort to overcome. 
In its comments on a draft of this report, DOD stated that the evolutionary approach the Navy has followed for transformation was prudent and allowed the Navy to continuously improve its combat capabilities. It also stated that Navy transformation efforts, such as the Navy's fleet battle experiment program, have not excluded consideration of innovative force structures. DOD attributed the majority of actual and perceived transformation shortfalls to the lack of an overarching strategic plan and roadmap rather than to the Navy's approach to transformation. The Navy has not been building enough ships to maintain the roughly 300-ship force mandated by the 1997 Quadrennial Defense Review. The high costs of supporting the current force, the time needed to acquire new ships, and the prospect of a continued mismatch between fiscal resources and force structure requirements increase the urgency of planning for and carrying out transformation. Although we did not make an independent assessment of the funds needed to maintain a force of 300 ships and its associated inventory of aircraft and supporting infrastructure, the Congressional Budget Office has estimated that the Navy would require roughly $17 billion more each year for fiscal years 2001 through 2005 than it is currently expected to receive to sustain this force level. If current construction rates and funding levels remain the same, the Navy's force could decrease to approximately 260 ships or fewer after 2020. Navy officials believe they face even bigger challenges. As part of DOD's July 2000 report on naval vessel force structure requirements, the Navy reported that its force needed to increase to about 360 ships over the next 15 to 20 years to better meet its total operational requirements and the national military strategy. 
The recent establishment of an Office of the Deputy Chief of Naval Operations for Warfare Requirements and Programs may help focus the Navy's attention on analyzing the potential for changes that might be needed to address fiscal concerns as well as current and future force structure requirements. In addition, the President of the Naval War College was recently chosen by the Chief of Naval Operations to lead a task force to analyze the force structure implications of operating the Navy on approximately the same budget level it now has. A senior Navy headquarters official agreed that the shortfall in funding and the mismatch between requirements and resources are major drivers for transformation. But the official also acknowledged that the Navy's evolutionary approach to transformation might not address its fiscal problems. The Navy has been slow in acquiring many of the capabilities that it needs to successfully conduct littoral operations. We recently reported on the Navy's limited countermine, antisubmarine, and ship self-defense capabilities and the lack of credible surface fire support capabilities. Although the Navy has had acquisition programs under way to improve its capabilities in each of these areas for many years, we found progress has been slow. We also found that unless current efforts can be accelerated or alternatives developed, it will be another 10 to 20 years before the naval services have the capabilities they say they need to successfully execute littoral warfare operations against a competent enemy. Our ongoing reviews of Navy chemical and biological defense capabilities have found shortcomings in equipment and training for shipboard personnel and naval personnel ashore in high-threat areas. Such deficiencies could also seriously affect the Navy's ability to operate successfully in littoral areas. The Navy faces significant challenges in developing the network centric warfare capability. 
Navy officials told us that they have only just begun to define and implement the concept and that making it operational involves significant challenges. Officials in the Navy's operating forces said they lacked a clear understanding of what network centric warfare is and how it is expected to change operations and forces. Some elements, such as the Cooperative Engagement Capability, have recently deployed, while others are in the early stages of research and development and are years away from practical use. Most will rely on interoperability (compatibility with equipment used by the Navy and the other services) for their ultimate success. Yet the Navy does not have an implementation plan to integrate all the different elements. Several Navy and joint officials have indicated that some components require much more comprehensive planning and an integrated roadmap for their development. Others said that the Navy and the other services were not doing enough to ensure interoperability. The Navy has carried out several organizational changes aimed at moving transformation forward. But as with all of its other transformation activities, these changes have not been carried out within the context of an overarching strategy that clearly and authoritatively identifies the roles and responsibilities of different bodies and stakeholders. Thus, even though the Navy Warfare Development Command was established primarily to direct the Navy's transformation efforts, the Command has had difficulty building relationships with other Navy organizations and has not yet achieved the priority for resources needed to make it an effective focal point for transformation. Several important activities are under way at the Command. For example, it is pursuing a comprehensive review and reorganization of the Navy's doctrine structure, and it is coordinating all major Navy fleet battle experiments as well as the Navy's participation in joint experiments. 
Its work on the capstone concept document based on network centric warfare—the centerpiece of the Navy’s transformation activities—is nearing completion. It has also established a constructive working arrangement with the Naval War College and the Strategic Studies Group. The Command has had less success establishing itself as the Navy’s focal point for transformation and has sometimes faced resistance at the fleets and at Navy headquarters while trying to carry out its responsibilities. Atlantic and Pacific Fleet officials said that while they appreciate the intent of the Command’s work, fleet personnel sometimes see the Command’s experiments as disruptions to their everyday operations and do not fully understand how the experiments can benefit them. They explained that the fleets are focused more on immediate issues affecting operations and are therefore less receptive to activities that might be aimed at the Navy’s longer term interests. A number of senior Navy officials said that the Command has had difficulty promoting its concepts to the fleets because some fear that new concepts could threaten support and funding for existing programs. Part of the difficulty of building relationships with other Navy organizations is that the Command is just 3 years old, and its mission is not well known throughout the Navy. During our fleet visits, we found that with the exception of fleet battle experiments, the Command’s overall role, responsibilities, and relationships were not fully understood. Several senior Navy officials noted that the Command has not been afforded a high priority for staffing. For example, only 46 of its 60 authorized positions for military personnel were filled as of June 2001. 
The Command's detachments at the Atlantic and Pacific Fleets have several important responsibilities, including providing support for experimentation, innovation activities, and concept and doctrine development and acting as the liaison between joint and fleet organizations and the Command. However, the detachments have only a skeletal authorized staff to carry out these responsibilities, and even these positions have not always been fully filled. An official of the Command's Pacific Fleet detachment said that the lack of personnel prevents the detachment's staff from attending key meetings and making visits to Navy organizations throughout the region. Officials at the Command's Atlantic Fleet detachment described similar limits on their involvement with organizations in that area. Additionally, the Command has been unable to assign a permanent representative to the U.S. Joint Forces Command to represent the Navy on joint experimentation issues. The Command has also had some difficulties with funding needed to support its activities. An official in the Command's Pacific Fleet detachment told us the detachment has had to rely on other Navy organizations, such as the Third Fleet, to provide funds for basic support such as office space, telephones, heating, and lighting. Plans for prototyping ships and other weapon systems will require funds beyond the Command's current funding. Navy Warfare Development Command officials expressed concern that about 75 percent of the Command's research and development budget for fiscal year 2002 will be spent to support its portion of a single experiment—the U.S. Joint Forces Command's Millennium Challenge. To cover its other experimentation requirements, the Command will need to obtain additional funds from the Navy and other organizations with which it cooperates on experimentation projects. Recent organizational changes at Navy headquarters should help overcome some of these difficulties. 
The establishment of the Office of the Deputy Chief of Naval Operations for Warfare Requirements and Programs provides a clearer link between headquarters and organizations vital to transformation. This link may help increase the visibility of the Navy Warfare Development Command's efforts and could afford more support for promising new ideas that may not otherwise be embraced by other Navy organizations. The Warfare Requirements and Programs Office was created to separate requirements and resource allocation functions that had previously been handled by a single office. The office's responsibility for balancing warfighting requirements with available resources could also provide a better means for the Navy to assess its resource priorities and make the necessary budget trade-offs between current and future needs. The Navy is also considering establishing "mission capability packages." Rather than focusing on individual platforms (ships, submarines, or aircraft), the packages would examine requirements in terms of all the capabilities needed to perform a specific mission. Officials at Navy headquarters and the Navy Warfare Development Command said these packages could help the Navy focus more on the capabilities it needs and thereby clarify funding priorities. Officials at Navy headquarters and the Navy Warfare Development Command have told us that since the reorganization, the Command has begun to gain greater acceptance from other Navy organizations, and its ties with headquarters have improved. The Navy is also considering changing the Command's link to the fleet to give the Command more visibility and influence. One possibility under consideration is to place the Command under the Commander in Chief of the Atlantic Fleet. While this could increase the Command's visibility and influence with the fleet, some Navy officials said it could also focus the Command's efforts on near-term fleet issues at the expense of longer term transformation. 
While the Navy has actively conducted experimentation over the last 4 years, it has focused its experiments on near- and mid-term operational and force issues and much less on long-term issues. In spite of the importance of experimentation for transformation, the Navy has not developed a comprehensive strategy that places long-term goals and resources for experiments within the context of its overall transformation objectives and priorities. Experimentation allows the Navy to explore new operational concepts, identify alternative force mixes and capabilities, and validate promising technologies and system concepts, and it serves as an overall mechanism for transformation. Most importantly, it helps to shape and challenge ideas and thinking about the future. Despite the Navy's increased experimentation effort since 1997, Navy officials at headquarters, fleet, and other organizations believe the Navy needs to expand its experimentation activities to explore major long-term operational and force concepts to provide better information on future requirements and capabilities. A wide range of Navy officials and defense experts stated that the Navy needs to explore new ship design concepts—possibly revolutionary ones—and employ prototypes to experiment with them. Such experimentation is necessary for the Navy to analyze potential force structure and operating options in light of likely budgets and the opportunities offered by emerging technologies. An example of this type of effort is the Navy's current plan to begin at-sea experimentation with a high-speed ship concept. Resource priorities also affect the Navy's ability to experiment and address long-term issues. The Navy has stated that operating a smaller force in a period of increased overseas operations has limited the number of ships it can assign to experimentation. It has worked around this limitation by conducting its experimentation, such as fleet battle experiments, as part of its major fleet exercises. 
Another resource issue is the limited staff available to support the Navy's experimentation program. Since 1997, the Navy has conducted fleet battle experiments at the rate of two each year. The Navy believed this pace drew heavily on the staff and resources of the Navy Warfare Development Command and the fleets and did not allow sufficient time to plan and prepare for experiments beforehand and to assess the results afterward. In 2001, it changed the schedule to approximately one experiment each year. We learned that many of the Navy's innovation activities are not well coordinated or tracked among different organizations. The Navy has been undertaking a wide range of innovation activities. Some of these activities are directed at specific problems, while others have a broader servicewide focus. Some are aimed at best business practice innovations; others are operational in nature. These activities contribute to the incremental, evolutionary approach the Navy has adopted for transformation, and if sufficiently orchestrated and sustained, they can lead to substantial change. Many Navy officials throughout the organizations we visited believed that the Navy needs to improve the servicewide coordination and tracking of innovation activities. An official at the Pacific Fleet headquarters stated that the Pacific Fleet has attempted to identify and track these innovation activities, both within the Fleet and in other parts of the Navy. However, the official said that it was not possible to determine the extent to which all activities were captured because of the large number of and differences among the activities. Several Navy officials from various fleet and headquarters organizations stated that a central Navy clearinghouse for maintaining and disseminating information about ongoing and past activities would benefit, promote, and accelerate other innovation efforts. 
Various Navy officials suggested that the Navy Warfare Development Command would be an appropriate organization to manage and maintain this information. The Navy Warfare Development Command has proposed an effort to provide greater servicewide coordination of innovation and transformation-related activities. According to the proposal, the Navy would develop web-based tools to further enhance coordination efforts. It would also focus on coordinating innovation efforts with the other services and the U.S. Joint Forces Command. However, no decision has yet been reached by the Navy’s leadership on who will lead the coordination effort. The complexities and uncertainties that underlie the Navy’s transformation require that clear direction and guidance be given to all levels of the organization on what transformation is and how it will be carried out. While the Navy has initiated a number of activities to transform its forces, it has not articulated and promulgated a well-defined transformation program. Current activities have not been conducted within the context of an overall strategic plan and roadmap to provide the direction, goals, priorities, scope, options, and resource requirements necessary to achieve a successful transformation. The importance of such planning to effective and efficient management of federal programs is recognized under the Government Performance and Results Act of 1993. Implementing the Navy’s transformation will be complicated and will require careful consideration of near-term needs, as well as fundamental changes in the force structure, concepts, and organizations required to meet future security challenges within likely budgets. Actions need to be planned and orchestrated as part of a broader, well-developed strategy designed to achieve long-term objectives and not simply to satisfy immediate requirements. 
Development of a long-term strategic plan and roadmap would help to maintain the delicate balance between current and future requirements as the Navy transforms. It would also provide the necessary guidance to better focus and direct the Navy’s transformation activities and tools to guide and oversee progress toward achievement of goals and objectives. Such a plan, for example, could also address the coordination and monitoring of innovation activities and delineate the authority of the Navy Warfare Development Command in carrying out its mission. Without such a plan, it can be difficult for senior leaders, the Congress, and others to provide the necessary support and make optimal decisions on priorities and the effective use of resources to successfully transform Navy forces. Although the Navy has stated that its transformation efforts are focused on force posture and not necessarily force structure, there is a clear and persistent need for the Navy to explore potential fundamental changes in its force structure and operational concepts that would permit it to carry out its requirements within certain fiscal parameters. The time required to design and build ships further compels urgent action by the Navy. Without an experimentation effort that includes evaluating long-term issues such as new ship designs and operational concepts, the Navy will be less able to make the difficult but important decisions that will be needed regarding the size, shape, and composition of its future fleet. The wide range of innovation activities being conducted throughout the Navy contributes to the Navy’s overall transformation efforts. But the lack of adequate Navy-wide coordination and tracking limits the potential benefits these activities could have for all organizations. 
The creation of a Navy-wide clearinghouse would provide a central repository for all organizations—in the Navy and elsewhere in the Department of Defense—to exchange information and lessons learned on innovation activities. To more clearly determine the Navy's direction and promote better understanding of actions taken to transform its forces for the 21st century, we recommend that the Secretary of Defense direct the Secretary of the Navy to develop a long-term strategic plan and roadmap that clearly articulates priorities, objectives, and milestones; identifies the scope, resource requirements, and responsibilities; and defines the metrics for assessing progress in achieving successful transformation. We also recommend that the Secretary of Defense direct the Secretary of the Navy to (1) adjust the Navy's experimentation program to provide greater exploration of long-term force structure and operational issues and (2) create a clearinghouse for Navy-wide innovation activities to improve coordination and monitoring of such activities. We received written comments from the Department of Defense on a draft of this report, which are included in their entirety as appendix V. The Department agreed with our recommendations but did not elaborate on how it would address them. DOD generally believed that our findings accurately reflect the Navy's transformation process, the current status, and the increased efforts in the Navy toward transformation. DOD agreed with our overall conclusion that the Navy needs to develop a strategic plan and roadmap to manage and execute its transformation efforts. In its comments, DOD stated that the Navy is implementing near-, mid-, and far-term steps to achieve a transformation goal of assured access, which was identified by the Navy's 1999 Maritime Concept as a key operational challenge. 
We agree that these steps are an element in the development of a comprehensive long-term strategic plan and roadmap that we recommend for Navy transformation. However, such a plan and roadmap must also articulate the priorities, objectives, and milestones; identify the scope, resource requirements, and responsibilities; and define the metrics for assessing progress. By including these additional elements, the plan and roadmap would provide the clear direction, focus, and integration necessary for the Navy to carry out a successful transformation. To develop criteria for assessing the Navy’s management of its transformation, we identified several key factors important to success in military transformation (see app. IV). We identified these factors from our review of a wide range of DOD and Navy publications and statements, open literature, academic research on the subject of military innovation and transformation, and case studies of past transformation efforts. To assess the reasonableness and completeness of these factors, we discussed them with Navy and DOD officials and outside defense experts from various research and academic organizations. We also used the principles laid out in the Government Performance and Results Act of 1993 as additional benchmarks for our assessment. To determine the Navy’s transformation-related activities and develop our observations of the key management issues affecting progress, we obtained information, documents, and perspectives from officials at all levels of the Navy, including Navy headquarters, the Navy Warfare Development Command, the Naval War College, the Atlantic and Pacific Fleets, and the Offices of the Secretary of Defense and the Chairman of the Joint Chiefs of Staff. We discussed Navy transformation with the former Secretary of the Navy (1998-2001) and with several senior Navy leaders who have responsibility for various aspects of the Navy’s transformation. 
We also obtained perspectives from several defense experts and academicians who have followed military and Navy transformation. Appendix VI lists the principal organizations and offices where we performed work. We reviewed an extensive array of policy, planning, and guidance documents; intelligence documents; posture statements and speeches; congressional hearings and testimonies; open literature; and studies and assessments. We also made extensive use of information available on public and DOD Internet web sites. To develop a better understanding of the Navy's transformation and the actions it has taken to carry out the transformation, we obtained information on various areas related to concept development, experimentation, innovation, research and development, and other transformation activities. We reviewed the concept of network centric warfare with Navy officials at several organizations and offices responsible for developing and implementing the concept. To assess the Navy's experimentation and innovation efforts, we discussed their plans, content, and results with officials at the Navy Warfare Development Command, the Atlantic and Pacific Fleets, and research and development organizations. To obtain information on the Navy's participation in joint experimentation efforts, particularly Millennium Challenge 2002, we met with officials at the U.S. Joint Forces Command and the Joint Staff's Joint Vision and Transformation Division. To understand the security environment in which the Navy is likely to operate its forces through 2020, we obtained an intelligence briefing from the Defense Intelligence Agency. To obtain information on the Navy's investment in research and development to support transformation, we met with officials at the Office of Naval Research, the Space and Naval Warfare Systems Command, and the Defense Advanced Research Projects Agency. 
Although our review did not include Marine Corps transformation activities, we did meet with a senior Marine Corps official responsible for that service's transformation to discuss coordination and joint transformation-related efforts between the two services. We did not include the Navy's management of service Defense Reform Initiatives in our scope. Our review was conducted from August 2000 through May 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Navy, the Chairman of the Joint Chiefs of Staff, and the Chief of Naval Operations. We will also make copies available to others upon request. Please contact me at (202) 512-3958 if you or your staff have any questions concerning this report. Major contributors to this report were Marvin E. Casterline, Mark J. Wielgoszynski, Joseph W. Kirschbaum, and Stefano Petrucci. To more sharply focus on the capabilities the Navy will need in the next 10 to 15 years, the Navy in 1999 reorganized its science and technology resources into 12 future naval capabilities. The objective is to focus on capabilities rather than platforms. The future naval capabilities are managed by integrated product teams, which include senior Navy and Marine Corps military and civilian officials. These teams focus on the overall capability by prioritizing the individual efforts and supporting technology areas. Table 1 lists the 12 future naval capabilities and provides examples of individual technology efforts for each capability. Since March 1997 the Navy has conducted nine fleet battle experiments. Each of these experiments has focused on some of the Navy's core missions, such as land attack, or missions it expects to conduct in the future. 
These experiments have also enabled the Navy to assess how new technologies and approaches could enhance fleet capabilities and operations with joint and allied forces. The experiments rotate among the Navy’s fleets and are scheduled to coincide with a major fleet exercise. Roughly $5 million is dedicated to each fleet battle experiment. This amount does not include the operation and maintenance funds expended by a fleet during the actual experiment. Upon completion, each experiment is assessed to determine which concepts proved workable and what follow-on experimentation should be pursued. Table 2 provides some examples of issues addressed in the fleet battle experiments. A wide range of innovations and transformation-related activities are being conducted at the fleet level and in many other Navy organizations. For example, the Second Fleet has been evaluating the concepts, technologies, and procedures for network centric antisubmarine warfare. This concept employs collaborative tools to link ships and aircraft to greatly increase the effectiveness of antisubmarine forces. It assists the Navy in implementing its plan to distribute antisubmarine warfare capability throughout its forces rather than in only a few dedicated platforms. Table 3 provides examples of Navy innovation activities. A number of factors are important for the Navy or any military organization to successfully transform its forces and operations. On their own or in combination, these eight factors are useful in establishing effective planning mechanisms for managing transformation efforts. We identified these factors from our review of a wide range of Department of Defense (DOD) and Navy publications and statements, open literature, academic research on the subject of military innovation and transformation, and case studies of past transformation efforts. 
To assess the reasonableness and completeness of these factors, we discussed them with Navy and DOD officials and outside defense experts from various research and academic organizations. A clear and authoritative statement of vision, rationale, and direction of transformation efforts is necessary. The precise shape and structure of the future Navy are difficult to determine. But the direction of development for required capabilities can be outlined to the extent that lines of effort can be delineated, priorities established, and responsibilities for executing them assigned. The Navy’s leadership must ensure that such policies are communicated throughout its organization. This factor involves the details of transformation and how an organization should carry them out. This entails a delineation of organizational elements responsible for converting concepts and ideas into practical operational and force structure changes. It is important that personnel and funds are dedicated to innovation and transformation-related efforts. These efforts include experimentation, prototype development, and acquisition. For example, the period of the 1920s and 1930s was one of fiscal constraint for the Navy. But it devoted considerable resources to the development of aircraft carriers and naval aviation, which later contributed to the Navy’s success during the Second World War. Clear and adaptable measures of effectiveness are required for experiments to determine the value of innovations and for procedural matters to determine the progress of transformation. Innovation and transformation must include changes in how the Navy operates at all levels. There must be feedback among innovators, operators, experimenters, doctrine writers, and the training and education establishment. Many defense experts have recognized this linkage as one of the most important elements of military transformation. This is the “culture” aspect of transformation.
Leaders from all levels of the organization should provide tangible commitment to Navy transformation and to those who make contributions to that end. Innovators must be given incentives to innovate, allowed to take reasonable risks in areas such as experiments, and given the authority to conduct energetic analyses to address the Navy’s future warfare challenges. The active support of the Congress is vital to effecting transformation in the Navy. In some cases, this may be resource-oriented. In others, such support would involve congressional oversight, as it has in the past, and provide incentives and direction when and where appropriate. For example, during the development of naval aviation, the Congress mandated that those officers seeking to command the new aircraft carriers had to be flight qualified. This mandate helped establish a career path for naval aviators. To better ensure an effective transformation, the Navy needs to coordinate its plans and efforts with the Congress as well as the other services and joint organizations. Individual Navy efforts must be interoperable with the other services in order for future joint operations to be viable. This is applicable to the specifications of individual capabilities, such as communication equipment, as well as to the broader issue of developing integrated operational level capabilities and concepts.

[Photo caption: U.S. Third Fleet (U.S.S. Coronado)]

With the end of the Cold War, national security strategies changed to meet new global challenges. The Navy developed a new strategic direction in the early 1990s, shifting its primary focus from open ocean "blue water" operations to littoral, or shallow water, operations closer to shore. GAO found that although the Navy has recently placed more emphasis on transformation, it does not have a well-defined and overarching strategy for transformation.
It has not clearly identified the scope and direction of its transformation; the overall goals, objectives, and milestones; or the specific strategies and resources to be used in achieving these goals. It also has not clearly identified organizational roles and responsibilities, priorities, resources, or ways to measure progress. Without a well-defined strategic plan to guide the Navy's efforts, senior leaders and Congress will not have the tools they need to ensure that the transformation is successful.
Following the 2000 national elections, we performed a comprehensive series of reviews covering our nation’s election process, in which we identified a number of challenges. These reviews culminated in a capping report that summarized this work and provided the Congress with a framework for considering options for election administration reform. Our reports and framework were among the resources that the Congress drew on in enacting the Help America Vote Act (HAVA) of 2002, which provided guidance for fundamental election administration reform. Among other things, the act authorizes $3.86 billion in funding over several fiscal years for programs to replace punch card and mechanical lever voting equipment, improve election administration, improve accessibility, train poll workers, and perform research and pilot studies. It also created the Election Assistance Commission (EAC) to oversee the election administration reform process. Since the act’s passage, a number of voting jurisdictions have replaced their older voting equipment with direct recording electronic systems. At the same time, concerns have been raised about the use of these systems; some have reported that these systems have serious security vulnerabilities and that the embedded controls are not sufficient to ensure the integrity of the election process. The EAC, which began operations in January 2004, held a public hearing in May 2004 at which a major topic was the security and reliability of electronic voting devices. At the request of congressional leaders, committees, and members, we conducted an extensive body of work in the wake of the 2000 elections, which culminated in seven reports addressing a range of election-related topics. First, we reviewed the constitutional framework for the administration of elections, as well as major federal statutes enacted in this area. We reported that the constitutional framework for elections includes both state and federal roles.
States are responsible for the administration of both their own elections and federal elections, but the Congress has enacted laws in several major areas of the voting process, including the timing of federal elections, voter registration, and absentee voting requirements. Congressional authority to legislate in this area derives from various constitutional sources, depending upon the type of election. For federal elections, the Congress has constitutional authority over both congressional and presidential elections. Second, we examined voting assistance for military and overseas voters. We reported that although tools are available for such voters, many potential voters were unaware of them, and many military and overseas voters believed it was challenging to understand and comply with state requirements and local procedures for absentee voting. In addition, although information was not readily available on the precise number of military and overseas absentee votes that were disqualified in the 2000 general election and the reasons for disqualification, we found through a national telephone survey that almost two-thirds of the disqualified absentee ballots were rejected because of lateness or errors in completion of the envelope or form accompanying the ballot. We recommended that the Secretaries of Defense and State improve (1) the clarity and completeness of service guidance, (2) voter education and outreach programs, (3) oversight and evaluation of voting assistance efforts, and (4) sharing of best practices. The Departments of Defense and State agreed with our overall findings and recommendations, and as of May 2004, the recommendations had largely been implemented. Third, we investigated whether minorities and disadvantaged voters were more likely to have their votes not counted because the voting method they used was less reliable than that of affluent white voters. 
According to our results, the state in which counties were located had more effect on the number of uncounted presidential votes than did counties’ demographic characteristics or voting method. State differences accounted for 26 percent of the total variation in uncounted presidential votes across counties. County demographic characteristics accounted for 16 percent of the variation (counties with higher percentages of minority residents tended to have higher percentages of uncounted presidential votes, while counties with higher percentages of younger and more educated residents tended to have lower percentages of uncounted presidential votes), and voting equipment accounted for 2 percent of the variation. Fourth, in a review of voting accessibility for voters with disabilities, we found that all states had provisions addressing voting by people with disabilities, but these provisions varied greatly. Federal law requires that voters with disabilities have access to polling places for federal elections, with some exceptions. All states provided for one or more alternative voting methods or accommodations intended to facilitate voting by people with disabilities. In addition, states and localities had made several efforts to improve voting accessibility for voters with disabilities, such as modifying polling places, acquiring new voting equipment, and expanding voting options, but state and county election officials surveyed cited various challenges to improving access. We concluded that given the limited availability of accessible polling places, other options that could allow more voters with disabilities to vote at a polling place on election day include reassigning them to other, more accessible polling places or creating accessible superprecincts in which voters from more than one precinct could all vote in the same building. Fifth, we reported on the status and use of voting equipment standards developed by the Federal Election Commission (FEC). 
These standards define minimum functional and performance requirements, as well as minimum life-cycle management processes for voting equipment developers to follow, such as quality assurance. At the time of our review, no federal agency had explicit statutory responsibility for developing the standards; however, the FEC developed voluntary standards for computer-based systems in 1990, and the Congress provided funding for this effort. Similarly, no federal agency was responsible for testing voting systems against the federal standards. Instead, the National Association of State Election Directors accredited independent test authorities to test voting systems against the standards. We noted, however, that the FEC standards had not been updated since 1990 and were consequently out of date. We suggested that the Congress consider assigning explicit federal authority, responsibility, and accountability for the standards, including their proactive and continuous update and maintenance; we also suggested that the Congress consider what, if any, federal role is appropriate regarding implementation of the standards, including the accreditation of independent test authorities and the qualification of voting systems. Both of these matters were addressed in the Help America Vote Act, which, among other things, set up the EAC to take responsibility for voluntary voting system guidelines. We also made recommendations to the FEC aimed at improving the guidelines. Before the EAC became operational, the FEC continued to update and maintain the guidelines, issuing a new version in 2002. Sixth, we issued a report on election activities and challenges across the nation. In this report, we described the operations and challenges associated with each stage of the election process, including voter registration; absentee and early voting; election day administration; and vote counts, certification, and recounts. 
The report also provided analyses on issues associated with voting systems that were used in the November 2000 elections and the potential use of the Internet for voting. Among other things, we pointed out that each of the major stages of an election depends on the effective interaction of people (the election officials and voters), processes (or internal controls), and technology (registration systems, election management systems, and voting systems). We also enumerated the challenges facing election officials at all stages of the election process. Finally, we issued a capping report that included a framework for evaluating election administration reform proposals. Among other things, we observed that the constitutional and operational division of federal and state authority to conduct elections had resulted in great variability in the ways that elections are administered in the United States. We concluded that given the diversity and decentralized nature of election administration, careful consideration needed to be given to the degree of flexibility and the planned time frames for implementing new initiatives. We also concluded that in order for election administration reform to be effective, reform proposals must address all major parts of our election system—its people, processes, and technology—which are interconnected and significantly affect the election process. And finally, we provided an analytical framework for the Congress to consider in deciding on changes to the overall election process. Enacted by the Congress in October 2002, the Help America Vote Act of 2002 addressed a range of election issues, including the lack of explicit federal (statutory) responsibility for developing and maintaining standards for electronic voting systems and for testing voting systems against standards. 
With the far-reaching goal of improving the election process in every state, the act affects nearly every aspect of the voting process, from voting technology to provisional ballots, and from voter registration to poll worker training. In particular, the act established a program to provide funds to states to replace punch card and lever machine voting equipment, established the EAC to assist in the administration of federal elections and provide assistance with the administration of certain federal election laws and programs, and established minimum election administration standards for the states and units of local government that are responsible for the administration of federal elections. In January 2004, the Congressional Research Service reported that disbursements to states for the replacement of older equipment and election administration improvements totaled $649.5 million. The act specifically tasked the EAC to serve as a national clearinghouse and resource for compiling election information and reviewing election procedures; for example, it is to conduct periodic studies of election administration issues to promote methods of voting and administration that are most convenient, accessible, and easy to use for all voters. Other examples of EAC responsibilities include
● developing and adopting voluntary voting system guidelines, and maintaining information on the experiences of states in implementing the guidelines and operating voting systems;
● testing, certifying, decertifying, and recertifying voting system hardware and software through accredited laboratories;
● making payments to states to help them improve elections in the areas of voting systems standards, provisional voting and voting information requirements, and computerized statewide voter registration lists; and
● making grants for research on voting technology improvements.
According to the act, reporting to the EAC will be the Technical Guidelines Development Committee, which will make recommendations on voluntary voting system guidelines. The National Institute of Standards and Technology (NIST) will provide technical support to the development committee, and the NIST Director will serve as its chairman. In December 2003, the EAC commissioners were appointed, and the EAC began operations in January 2004. According to the commission chairman, the EAC’s fiscal year 2004 budget is $1.2 million, and its near-term plans focus on complying with requirements established in HAVA. In that regard, the EAC issued its first annual report to the Congress in April of this year on the status of election administration reform. The EAC also plans to issue best practices guidelines in July 2004 to increase the reliability of voting equipment and systems for the November 2004 elections. The guidelines also include guidance on recruiting and training poll workers. The commission’s longer term plans include updating the voluntary voting system guidelines and improving the process for independent testing of voting systems. Toward this end, the EAC’s Technical Guidelines Development Committee recently held its first meeting to develop a plan to update voluntary voting system guidelines. According to some commissioners, current operations are constrained by vacancies in key staff positions, including the Executive Director, General Counsel, and Inspector General.

In the United States today, most votes are cast and counted by one of two types of electronic voting systems: optical scan and direct recording electronic (DRE). Two older voting technologies were also used in the 2000 elections: punch card equipment (used by about 31 percent of registered voters in 2000 and expected to be used by 19 percent in 2004) and mechanical lever voting machines (used by about 17 percent of registered voters in 2000 and expected to be used by 13 percent in 2004).
These equipment types are being replaced as required by provisions established in HAVA. In addition, for a small minority of registered voters, votes are cast and counted manually on paper ballots. Optical scan voting systems use electronic technology to tabulate paper ballots. Although optical scan technology has been in use for decades for such tasks as scoring standardized tests, it was not applied to voting until the 1980s. In 2000, about 31 percent of registered voters voted on optical scan systems. In the 2004 election, according to Election Data Services, Inc., about 32 percent of registered voters will use optical scan voting equipment. For voting, an optical scan system is made up of computer-readable ballots, appropriate marking devices, privacy booths, and a computerized tabulation device. The ballot, which can be of various sizes, lists the names of the candidates and the issues. Voters record their choices using an appropriate writing instrument to fill in boxes or ovals, or to complete an arrow next to the candidate’s name or the issue. The ballot includes a space for write-ins to be placed directly on the ballot. Optical scan ballots are tabulated by optical-mark-recognition equipment (see fig. 1), which counts the ballots by sensing or reading the marks on the ballot. Ballots can be counted at the polling place—this is referred to as precinct-count optical scan—or at a central location. If ballots are counted at the polling place, voters or election officials put the ballots into the tabulation equipment, which tallies the votes; these tallies can be captured in removable storage media that are transported to a central tally location, or they can be electronically transmitted from the polling place to the central tally location. 
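The tallying flow just described, in which per-precinct counts are captured on removable media or transmitted to a central location and then combined, can be sketched in a few lines of code. This is an illustrative sketch only; the function name `merge_precinct_tallies` and the data layout are invented for the example and are not taken from any vendor's vote-tally software.

```python
# Illustrative sketch (not vendor software) of central vote tallying:
# each precinct's tabulator produces per-candidate counts on removable
# media or over a link; the central tally sums them per contest.
from collections import Counter

def merge_precinct_tallies(precinct_tallies):
    """precinct_tallies: iterable of {(contest, candidate): count} dicts,
    one per tabulation device. Returns combined jurisdiction totals."""
    total = Counter()
    for tally in precinct_tallies:
        total.update(tally)
    return total

precinct_1 = {("Governor", "Adams"): 120, ("Governor", "Baker"): 95}
precinct_2 = {("Governor", "Adams"): 80, ("Governor", "Baker"): 110}
totals = merge_precinct_tallies([precinct_1, precinct_2])
# totals[("Governor", "Adams")] == 200
```

The same summation applies whether the per-device tallies arrive on transported storage media or over an electronic link; only the transport differs.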
If ballots are centrally counted, voters drop ballots into sealed boxes, and election officials transfer the sealed boxes to the central location after the polls close, where election officials run the ballots through the tabulation equipment. Software instructs the tabulation equipment to assign each vote (i.e., to assign valid marks on the ballot to the proper candidate or issue). In addition to identifying the particular contests and candidates, the software can be configured to capture, for example, straight party voting and vote-for-no-more-than-N contests. Precinct-based optical scanners can also be programmed to detect overvotes (where the voter votes for two candidates for one office, for example, invalidating the vote) and undervotes (where the voter does not vote for all contests or issues on the ballot) and to take some action in response (rejecting the ballot, for instance). In addition, optical scan systems often use vote-tally software to tally the vote totals from one or more vote tabulation devices. If election officials program precinct-based optical scan systems to detect and reject overvotes and undervotes, voters can fix their mistakes before leaving the polling place. However, if voters are unwilling or unable to correct their ballots, a poll worker can manually override the program and accept the ballot, even though it has been overvoted or undervoted. If ballots are tabulated centrally, voters do not have the opportunity to correct mistakes that may have been made. First introduced in the 1970s, DREs capture votes electronically, without the use of paper ballots. In the 2000 election, about 12 percent of voters used this type of technology. In the 2004 election, according to Election Data Services, Inc., about 29 percent of registered voters will use this voting technology. 
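The overvote and undervote handling described above for precinct-based optical scanners can be sketched as a simple check of mark counts against each contest's vote-for limit. The names (`Contest`, `check_ballot`) and data layout are assumptions made for illustration; real tabulation software is considerably more involved.

```python
# Illustrative sketch (not vendor code) of precinct-count ballot checking:
# each contest carries a "vote for no more than N" limit; more marks than
# the limit is an overvote, fewer is an undervote, and flagged ballots can
# be rejected so the voter may correct them before leaving the polls.
from dataclasses import dataclass

@dataclass
class Contest:
    name: str
    vote_for: int  # "vote for no more than N"

def check_ballot(contests, marks):
    """marks maps contest name -> set of marked candidate positions.
    Returns (overvoted, undervoted) contest-name lists."""
    overvoted, undervoted = [], []
    for c in contests:
        n = len(marks.get(c.name, set()))
        if n > c.vote_for:
            overvoted.append(c.name)
        elif n < c.vote_for:
            undervoted.append(c.name)
    return overvoted, undervoted

contests = [Contest("Governor", 1), Contest("School Board", 2)]
marks = {"Governor": {"A", "B"}, "School Board": {"X"}}
over, under = check_ballot(contests, marks)
# over == ["Governor"] (two marks in a vote-for-1 race)
# under == ["School Board"] (one mark in a vote-for-2 race)
```

Whether a flagged ballot is rejected, accepted after a poll worker override, or accepted silently (as in central counting) is a matter of how the equipment is programmed, as described above.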
DREs come in two basic types, pushbutton or touchscreen, the pushbutton being the older technology; during the 2000 elections, pushbutton DREs were the more prevalent of the two types. The two types vary considerably in appearance (see fig. 2). Pushbutton DREs are larger and heavier than touchscreens. Pushbutton and touchscreen units also differ significantly in the way they present ballots to the voter. With the pushbutton, all ballot information is presented on a single “full-face” ballot. For example, a ballot may have 50 buttons on a 3-by-3-foot ballot, with a candidate or issue next to each button. In contrast, touchscreen DREs display the ballot information on an electronic display screen. For both pushbutton and touchscreen types, the ballot information is programmed onto an electronic storage medium, which is then uploaded to the machine. For touchscreens, ballot information can be displayed in color and can incorporate pictures of the candidates. Because the ballot space on a touchscreen is much smaller than on a pushbutton machine, voters who use touchscreens must page through the ballot information. Both touchscreen and pushbutton DREs can accommodate multilingual ballots. Despite the differences, the two types have some similarities, such as how the voter interacts with the voting equipment. For pushbuttons, voters press a button next to the candidate or issue, which then lights up to indicate the selection. Similarly, voters using touchscreens make their selections by touching the screen next to the candidate or issue, which is then highlighted. When voters are finished making their selections on a touchscreen or a pushbutton DRE, they cast their votes by pressing a final “vote” button or screen. Until they hit this final button or screen, voters can change their selections. Both types allow voters to write in candidates.
While most DREs allow voters to type write-ins on a keyboard, some pushbutton types require voters to write the name on paper tape that is part of the device. Although DREs do not use paper ballots, they do retain permanent electronic images of all the ballots, which can be stored on various media, including internal hard disk drives, flash cards, or memory cartridges. According to vendors, these ballot images, which can be printed, can be used for auditing and recounts. Some of the newer DREs use smart card technology as a security feature. Smart cards are plastic devices—about the size of a credit card—that use integrated circuit chips to store and process data, much like a computer. Smart cards are generally used as a means to open polls and to authorize voter access to ballots. For instance, smart cards on some DREs store program data on the election and are used to help set up the equipment; during setup, election workers verify that the card received is for the proper election. Other DREs are programmed to automatically activate when the voter inserts a smart card; the card brings up the correct ballot onto the screen. In general, the interface with the voter is very similar to that of an automatic teller machine. Like optical scan devices, DREs require the use of software to program the various ballot styles and tally the votes, which is generally done through the use of memory cartridges or other media. The software is used to generate ballots for each precinct within the voting jurisdiction, which includes defining the ballot layout, identifying the contests in each precinct, and assigning candidates to contests. The software is also used to configure any special options, such as straight party voting and vote-for-no-more-than-N contests. In addition, for pushbutton types, the software assigns the buttons to particular candidates and, for touchscreens, the software defines the size and location on the screen where the voter makes the selection.
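The ballot-definition role of the software, pairing each contest with its candidates and any vote-for-no-more-than-N limit, together with the DRE property that an overvote cannot occur at the machine, can be illustrated with a small sketch. The class name `DreContest` and the replace-oldest-selection rule are assumptions for the example; each vendor implements selection handling in its own way.

```python
# Illustrative sketch (invented names, not any vendor's software) of a
# DRE contest definition that enforces its vote-for-N limit as the voter
# makes selections: selecting beyond the limit replaces an earlier
# choice, so the machine can never record an overvote.

class DreContest:
    def __init__(self, name, candidates, vote_for=1):
        self.name = name
        self.candidates = candidates
        self.vote_for = vote_for
        self.selected = []  # selection order matters for replacement

    def touch(self, candidate):
        """Voter touches (or presses the button for) a candidate."""
        if candidate in self.selected:
            self.selected.remove(candidate)  # touch again to deselect
        else:
            if len(self.selected) >= self.vote_for:
                self.selected.pop(0)         # drop oldest selection
            self.selected.append(candidate)

governor = DreContest("Governor", ["Adams", "Baker"], vote_for=1)
governor.touch("Adams")
governor.touch("Baker")  # replaces "Adams"; no overvote is possible
# governor.selected == ["Baker"]
```

Until the voter presses the final “vote” button or screen, such selections remain changeable, matching the behavior described above.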
Vote-tally software is often used to tally the vote totals from one or more units. DREs offer various configurations for tallying the votes. Some contain removable storage media that can be taken from the voting device and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. DREs are designed not to allow overvotes; for example, if a voter selects a second choice in a two-way race, the first choice is deselected. In addition to this standard feature, different types offer a variety of options, including many aimed at voters with disabilities, that jurisdictions may choose to purchase. In our 2001 work, we cited the following features as being offered in some models of DRE:
● A “no-vote” option. This option helps avoid unintentional undervotes. It provides the voter with the option to select “no vote (or abstain)” on the display screen if the voter does not want to vote on a particular contest or issue.
● A “review” feature. This feature requires voters to review each page of the ballot before pressing the button to cast the vote.
● Visual enhancements. These include color highlighting of ballot choices, candidate pictures, and the like.
● Accommodations for voters with disabilities. Examples of options for voters who are blind include Braille keyboards and audio interfaces. At least one vendor reported that its DRE accommodates voters with neurological disabilities by offering head movement switches and “sip and puff” plug-ins. Another option is voice recognition capability, which allows voters to make selections orally.
● An option to recover spoiled ballots. This feature allows voters to recast their votes after their original ballots are cast. For this option, every DRE at the poll site would be connected to a local area network. A poll official would void the original “spoiled” ballot through the administrative workstation that is also connected to the local area network. The voter could then cast another ballot.
● An option to provide printed receipts. In this case, the voter would receive a paper printout or ballot when the vote is cast. This feature is intended to provide voters and/or election officials with an opportunity to check what is printed against what is recorded and displayed. It is envisioned that procedures would be in place to retrieve the paper receipts from the voters so that they could not be used for vote selling. Some DREs also have an infrared “presence sensor” that is used to control the receipt printer in the event the voter is allowed to keep the paper receipt; if the voter leaves without taking the receipt, the receipt is pulled back into the printer.

As older voting equipment has been replaced with newer electronic voting systems over the last 2 years, the debate has shifted from hanging chads and butterfly ballots to vulnerabilities associated with DREs. Problems with these devices in recent elections have arisen in various states. For example:
● Six DRE units used in two North Carolina counties lost 436 ballots cast in early voting for the 2002 general election because of a software problem, according to a February 9, 2004, report in Wired News. The manufacturer said that problems with the firmware of its touchscreen machines led to the lost ballots. The state was trying out the machines in early voting to determine if it wanted to switch from the optical scan machines it already owned to the new touchscreen systems.
● According to a January 2004 report in Wired News, blank ballots were recorded for 134 voters who signed in and cast ballots in Broward County, Florida. These votes represented about 1.3 percent of the more than 10,000 people who voted in the race for a state house representative.
● USA Today reported that four California counties suffered from problems with DREs in a March 2004 election, including miscounted ballots, delayed polling place openings, and incorrect ballots. In San Diego County, about one-third of the county’s polling places did not open on time because of battery problems caused by a faulty power switch.

Additionally, serious questions are being raised about the security of DREs. Some critics state that their use could compromise the integrity of the election process and that these devices need auditing mechanisms, such as receipt printers that would provide a paper audit trail and allow voters to confirm their choices. Among these critics are computer scientists, citizens groups, and legislators. For example, computer scientists from Johns Hopkins and Rice Universities released a security analysis of software from a DRE of a major vendor, concluding that the code had serious security flaws that could permit tampering. Other computer scientists, while agreeing that the code contained security flaws, criticized the study for not recognizing how standard election procedures can mitigate these weaknesses. Following the Johns Hopkins and Rice study, the State of Maryland contracted with both SAIC and RABA Technologies to study the same DRE equipment. The SAIC study found that the equipment, as implemented in Maryland, poses a security risk. Similarly, RABA identified vulnerabilities associated with the equipment. An earlier Caltech/MIT study noted that despite security strengths of the election process in the United States, current trends in electronic voting are weakening those strengths and introducing risks; according to this study, properly designed and implemented electronic voting systems could actually improve, rather than diminish, security. Citizen advocacy groups are also taking action.
For example, according to an April 21, 2004, press release from the Campaign for Verifiable Voting in Maryland, the group filed a lawsuit against the Maryland State Board of Elections to force election officials to decertify the DRE machines used in Maryland until the manufacturer remedies security vulnerabilities and institutes a paper audit trail. Legislators and other officials are also responding to the issues. In at least 20 states, according to the Associated Press, legislation has been introduced requiring a paper record of every vote cast. Following the problems in California described above, the California Secretary of State banned the use of one model of touchscreen DREs and conditionally decertified other similar models. According to the New York Times, these models represented 14,000 and 28,000 units, respectively. The Secretary recommended that the state Attorney General consider taking civil and criminal action against the manufacturer for “fraudulent actions.” The decision followed the recommendations of the state’s Voting Systems and Procedures Panel, which urged the Secretary of State to prohibit the four counties that experienced difficulties from using their touchscreen units in the November 2004 election. The panel reported that the manufacturer did not obtain federal approval of the model used in the four affected counties and installed software that had not been approved by the Secretary of State. It also noted that problems with the systems prevented an unspecified number of voters from casting ballots. In addition, two California state senators drafted a bill to prohibit the use of any DRE voting system without a paper trail in the 2004 general election; they planned to introduce the bill if the Secretary of State did not act. In June 2004, the Secretary of State proposed standards for the creation and testing of paper trails for electronic voting systems. 
At the federal level, several bills have been introduced in response to concerns about electronic voting technology. One of the bills, the Voter Confidence and Increased Accessibility Act of 2003 (H.R. 2239), if enacted, would require that voting machines used in elections for federal office produce paper audit trails so that voters and election officials can check accuracy. Among other provisions, the bill would also ban the use of undisclosed software and wireless communications devices in voting systems. Some of the concerns regarding DREs were raised at a public hearing held by the EAC on May 5, 2004. The purpose of the hearing was to permit the EAC to receive information on the use, security, and reliability of electronic voting devices. It included panels of technology and standards experts, vendors of voting systems, state election administrators, and citizen advocacy groups. One expert testified that electronic voting systems are flawed because they do not permit voters to verify that their votes were recorded correctly and do not permit a public vote count. Others stated that the systems can be made secure only by the addition of a voter-verifiable paper ballot. On the other hand, the election administrators on the panel described positive experiences with DREs, and representatives of voters with disabilities supported the use of DREs because of their accessibility features. Electronic voting systems represent one of many important components in the overall election process. This process is made up of several stages, with each stage consisting of key people, process, and technology variables. Many levels of government are involved, as are more than 10,000 local jurisdictions with widely varying characteristics. In the U.S. election process, all levels of government share responsibility.
At the federal level, the Congress has authority under the Constitution to regulate presidential and congressional elections and to enforce prohibitions against specific discriminatory practices in all elections—federal, state, and local. It has passed legislation affecting the administration of state elections that addresses voter registration, absentee voting, accessibility provisions for the elderly and handicapped, and prohibitions against discriminatory practices. The Congress does not have general constitutional authority over the administration of state and local elections. At the state level, the states are responsible for the administration of both their own elections and federal elections. States regulate the election process, including, for example, adoption of voluntary voting system guidelines, testing of voting systems, ballot access, registration procedures, absentee voting requirements, establishment of voting places, provision of election day workers, and counting and certification of the vote. In fact, the U.S. election process can be seen as an assemblage of 51 somewhat distinct election systems—those of the 50 states and the District of Columbia. Further, although election policy and procedures are legislated primarily at the state level, states typically have decentralized this process so that the details of administering elections are carried out at the city or county levels, and voting is done at the local level. As we reported in 2001, local election jurisdictions number more than 10,000, and their size varies enormously—from a rural county with about 200 voters to a large urban county such as Los Angeles County, where the total number of registered voters for the 2000 elections exceeded the registered voter totals in 41 states. The size of a voting jurisdiction significantly affects the complexity of planning and conducting the election, as well as the method used to cast and count votes. 
In our 2001 work, we quoted the chief election official in a very large voting jurisdiction: “the logistics of preparing and delivering voting supplies and equipment to the county’s 4,963 voting precincts, recruiting and training 25,000 election day poll workers, preparing and mailing tens of thousands of absentee ballot packets daily and later signature verifying, opening and sorting 521,180 absentee ballots, and finally, counting 2.7 million ballots is extremely challenging.” The specific nature of these challenges is affected by the voting technology that the jurisdiction uses. For example, jurisdictions using DRE systems may need to manage the electronic transmission of votes or vote counts; jurisdictions using optical scan technology need to manage the paper ballots that this technology reads and tabulates. Jurisdictions using optical scan technology may also need to manage electronic transmissions if votes are counted at various locations and totals are electronically transmitted to a central tally point. Another variable is the diversity of languages within a jurisdiction. In November 2000, Los Angeles County, for instance, provided ballots in Spanish, Chinese, Korean, Vietnamese, Japanese, and Tagalog, as well as English. No matter what technology is used, jurisdictions may need to provide ballot translations; however, the logistics of printing paper materials in a range of languages, as would be required for optical scan technology, is different from the logistics of programming translations into DRE units. Some states do have statewide election systems so that every voting jurisdiction uses similar processes and equipment, but others do not. For instance, we reported in 2001 that in Pennsylvania, local election officials told us that there were 67 counties and consequently 67 different ways of handling elections. 
In some states, state law prescribes the use of common voting technology throughout the state, while in other states local election officials generally choose the voting technology to be used in their precincts, often from a list of state-certified options. Whatever the jurisdiction and its specific characteristics, administering an election is a year-round activity, involving varying sets of people to carry out processes at different stages. These stages generally consist of the following:
● Voter registration. Among other things, local election officials register eligible voters and maintain voter registration lists, including updates to registrants' information and deletions of the names of registrants who are no longer eligible to vote.
● Absentee and early voting. This type of voting allows eligible persons to vote in person or by mail before election day. Election officials must design ballots and other systems to permit this type of voting, as well as educate voters on how to vote by these methods.
● The conduct of an election. Election administration includes preparation before election day, such as local election officials arranging for polling places, recruiting and training poll workers, designing ballots, and preparing and testing voting equipment for use in casting and tabulating votes, as well as election day activities, such as opening and closing polling places and assisting voters to cast votes.
● Vote counting. At this stage, election officials tabulate the cast ballots; determine whether and how to count ballots that cannot be read by the vote counting equipment; certify the final vote counts; and perform recounts, if required.
As shown in figure 3, each stage of an election involves people, processes, and technology. Electronic voting systems are primarily involved in the last two stages, during which votes are cast and counted. However, the type of system that a jurisdiction uses may affect earlier stages.
For example, in a jurisdiction that uses optical scan systems, paper ballots like those used on election day may be mailed in the absentee voting stage. On the other hand, a jurisdiction that uses DRE technology would have to make a different provision for absentee voting. Although the current debate concerning electronic voting systems primarily relates to security, other factors affecting election administration are also relevant in evaluating these systems. Ensuring the security of elections is essential to public confidence and election integrity, but officials choosing a voting system must also consider other performance factors, such as accuracy, ease of use, and efficiency, as well as cost. Accuracy refers to how frequently the equipment completely and correctly records and counts votes; ease of use refers to how understandable and accessible the equipment is to a diverse group of voters and to election workers; and efficiency refers to how quickly a given vote can be cast and counted. Finally, equipment’s life-cycle cost versus benefits is an overriding practical consideration. In conducting elections, officials must be able to assure the public that the confidentiality of the ballot is maintained and fraud prevented. In providing this assurance, the people, processes, and technology involved in the election system all play a role: the security procedures and practices that jurisdictions implement, the security awareness and training of the election workers who execute them, and the security features provided by the systems. Election officials are responsible for establishing and managing privacy and security procedures to protect against threats to the integrity of elections. These security threats include potential modification or loss of electronic voting data; loss, theft, or modification of physical ballots; and unauthorized access to software and electronic equipment. 
Physical access controls are required for securing voting equipment, vote tabulation equipment, and ballots; software access controls (such as passwords and firewalls) are required to limit the number of people who can access and operate voting devices, election management software, and vote tabulation software. In addition, election processes are designed to ensure privacy by protecting the confidentiality of the vote: physical screens are used around voting stations, and poll workers are present to prevent voters from being watched or coerced while voting. Examples of security controls that are embedded in the technology include the following:
● Access controls. Election workers may have to enter user names and passwords to access voting systems and software, so that only authorized users can make modifications. On election day, voters may need to provide a smart card or token to DRE units.
● Encryption. To protect the confidentiality of the vote, DREs use encryption technology to scramble the votes cast so that the votes are not stored in the same order in which they were cast. In addition, if vote totals are electronically transmitted, encryption is used to protect the vote count from compromise by scrambling it before it is transmitted over telephone wires and unscrambling it once it is received.
● Physical controls. Hardware locks and seals protect against unauthorized access to the voting device once it has been prepared for the election (e.g., once the vote counter is reset, the unit is tested, and ballots are prepared).
● Audit trails. Audit trails provide documentary evidence to recreate election day activity, such as the number of ballots cast (by each ballot configuration or type) and candidate vote totals for each contest. Audit trails are used for verification purposes, particularly in the event that a recount is demanded. With optical scan systems, the paper ballots provide an audit trail.
Since not all DREs provide a paper record of the votes, election officials may rely on the information that is collected by the DRE's electronic memory. Part of the debate over the assurance of integrity that DREs provide revolves around the reliability of this information.
● Redundant storage. Redundant storage media in DREs provide backup storage of votes cast or vote counts to facilitate recovery of voter data in the event of power or system failure.
The particular features offered by DRE and optical scan equipment differ by vendor make and model as well as the nature of the technology. DREs generally offer most of the features, but there is debate about the implementation of these features and the adequacy of the access controls and audit trails that this technology provides. If DREs use tokens or smart cards to authenticate voters, these tokens must also be physically protected and may require software security protection. For optical scan systems, redundant storage media may not be required, but software and physical access controls may be associated with tabulation equipment and software, and if vote tallies are transmitted electronically, encryption may also be used. In addition, since these systems use paper ballots, the audit trail is clearer, but physical access to ballots after they are cast must be controlled. The physical and process controls used to protect paper ballots include ballot boxes as well as the procedures implemented to protect the boxes if they need to be transported, to tabulate ballots, and to store counted ballots for later auditing and possible recounts. Ensuring that votes are accurately recorded and tallied is an essential attribute of any voting equipment. Without such assurance, both voter confidence in the election and the integrity and legitimacy of the outcome of the election are at risk. The importance of an accurate vote count increases with the closeness of the election.
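Two of the embedded controls described above, reordered vote storage and redundant storage media, can be illustrated with a minimal sketch. This is a hypothetical illustration only: the function and data layout are assumptions, and real DREs combine reordering with encryption and dedicated hardware rather than plain in-memory lists.

```python
import random

def store_votes(cast_votes, storage_media, rng=None):
    """Illustrative sketch (not any vendor's implementation): shuffle
    cast-vote records so that stored order cannot be linked to the
    order in which voters used the unit, then write identical copies
    to each redundant storage medium."""
    rng = rng or random.Random()
    records = list(cast_votes)
    rng.shuffle(records)             # break the link to casting order
    for medium in storage_media:     # redundant storage aids recovery
        medium.extend(records)       # after a power or system failure
    return records

primary, backup = [], []
stored = store_votes(["A", "B", "A", "C"], [primary, backup])
assert sorted(primary) == ["A", "A", "B", "C"]  # every vote retained
assert primary == backup                         # redundant copies agree
```

A paper audit trail, where one exists, would be produced at casting time and is not modeled in this sketch.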
Both optical scan and DRE systems are claimed to be highly accurate. In 2001, our vendor survey showed virtually no differences in vendor representations of the accuracy of DRE and optical scan voting equipment, measured in terms of how accurately the equipment counted recorded votes. Vendors of optical scan equipment reported accuracy rates of between 99 and 100 percent, with vendors of DREs reporting 100 percent accuracy. As we reported in 2001, although 96 percent of local election jurisdictions were satisfied with the performance of their voting equipment during the 2000 election, according to our mail survey, only about 48 percent of jurisdictions nationwide collected data on the accuracy of their voting equipment for the election. Further, it was unclear whether jurisdictions actually had meaningful performance data. Of those local election jurisdictions that we visited that stated that their voting equipment was 100 percent accurate, none was able to provide actual data to substantiate these statements. Similarly, according to our mail survey, only about 51 percent of jurisdictions collected data on undervotes, and about 47 percent collected data on overvotes for the November 2000 election. Although voting equipment may be designed to count votes as recorded with 100 percent accuracy, how frequently the equipment counts votes as intended by voters is a function not only of equipment design, but also of the interaction of people and processes. These people and process factors include, for example, whether
● technicians have followed proper procedures in testing and maintaining the system,
● voters have followed proper procedures when using the system,
● election officials have provided voters with understandable procedures to follow, and
● poll workers have properly instructed and guided voters.
As indicated earlier, various kinds of errors can lead to voter intentions not being captured when ballots are counted.
Avoiding or compensating for these errors may involve solutions based on technology, processes, or both. For example, DREs are designed to prevent overvoting; however, overvoting can also be prevented by a procedure to check optical scan ballots for overvotes before the voter leaves the polls, which can be accomplished by a precinct-based tabulator or by other means. Like accuracy, ease of use (or user friendliness) largely depends on how voters interact with the voting system, physically and intellectually. This interaction, commonly referred to as the human/machine interface, is a function of the system design, the processes established for its use, and user education and training. Among other things, how well jurisdictions design ballots and educate voters on the use of voting equipment affects how easy voters find the system to use. In the 2000 elections, for example, ballots for some optical scan systems were printed on both sides, so that some voters failed to vote one of the sides. This risk could be mitigated by clear ballot design and by explicit instructions, whether provided by poll workers or voter education materials. Thus, ease of use affects accuracy (i.e., whether the voter's intent is captured), and it can also affect the efficiency of the voting process (confused voters take longer to vote). Accessibility to diverse types of voters, including those with disabilities, is a further aspect of ease of use. As described earlier, DREs offer more options for voters with disabilities, as they can be equipped with a number of accessibility aids. However, these options increase the expense of the units, and not all jurisdictions are likely to opt for them. Instead of technological solutions, jurisdictions may establish special processes for voters with disabilities, such as allowing them to be assisted to cast their votes; this workaround can, however, affect the confidentiality of the vote.
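The precinct-level ballot check described at the start of this passage can be sketched as follows. This is a hypothetical illustration, not the logic of any certified tabulator; the contest names and the `votes_allowed` structure are assumptions.

```python
def check_ballot(marks_per_contest, votes_allowed):
    """Flag contests with more marks than allowed (overvotes) or fewer
    (undervotes), so a precinct-based tabulator could return the ballot
    to the voter for correction before accepting it."""
    overvoted = [contest for contest, marks in marks_per_contest.items()
                 if marks > votes_allowed[contest]]
    undervoted = [contest for contest, marks in marks_per_contest.items()
                  if marks < votes_allowed[contest]]
    return overvoted, undervoted

ballot = {"president": 2, "senator": 1, "proposition 1": 0}
allowed = {"president": 1, "senator": 1, "proposition 1": 1}
over, under = check_ballot(ballot, allowed)
assert over == ["president"]        # two marks where one is allowed
assert under == ["proposition 1"]   # no mark recorded
```

Because an undervote may be a deliberate abstention, equipment programmed this way typically offers the ballot back to the voter rather than rejecting it outright.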
Efficiency—the speed of casting and tallying votes—is an important consideration for jurisdictions not only because it influences voter waiting time and thus potentially voter turnout, but also because it affects the number of voting systems that a jurisdiction needs to acquire and maintain, and thus the cost. Efficiency can be measured in terms of the number of people that the equipment can accommodate within a given time, how quickly the equipment can count votes, and the length of time that voters need to wait. With DREs, the vote casting and counting functions are virtually inseparable, because the ballot is embedded in the voting equipment. Accordingly, for DREs efficiency is generally measured in terms of the number of voters that each machine accommodates on election day. In 2001, vendors reported that each DRE can accommodate from 200 to 1,000 voters per election day. With optical scan systems, in contrast, vote casting and counting are separate activities, since the ballot is a separate medium—a sheet of paper or a computer card—which once completed is put into the vote tabulator. As a result, the efficiency of optical scan equipment is generally measured in terms of the speed of count (i.e., how quickly the equipment counts the votes on completed ballots). Complicating this measurement is the fact that efficiency differs depending on whether central-count or precinct-based tabulators are used. Central-count equipment generally counts more ballots per hour because it is used to count the ballots for an entire jurisdiction, rather than an individual polling site. For central-count optical scan equipment, 10 vendors reported speed of count ranging from 9,000 to 24,000 ballots per hour. For precinct-count optical scan equipment, vendors generally did not provide specific speed of count data, but they stated that one machine is generally used per polling site.
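The vendor-reported figures above translate into simple sizing arithmetic. In this sketch, the 50,000-voter turnout is an illustrative assumption; the per-unit capacities and per-hour count speeds are the 2001 vendor figures quoted in the text.

```python
import math

def dre_units_needed(expected_voters, voters_per_unit):
    """DRE capacity sizing: vendors reported 200 to 1,000 voters
    per unit per election day."""
    return math.ceil(expected_voters / voters_per_unit)

def central_count_hours(ballots, ballots_per_hour):
    """Central-count optical scan sizing: vendors reported 9,000 to
    24,000 ballots per hour."""
    return ballots / ballots_per_hour

turnout = 50_000  # illustrative jurisdiction, not a figure from the report
print(dre_units_needed(turnout, 200))                  # 250 units (low-end capacity)
print(dre_units_needed(turnout, 1_000))                # 50 units (high-end capacity)
print(round(central_count_hours(turnout, 9_000), 1))   # 5.6 hours (slowest reported)
print(round(central_count_hours(turnout, 24_000), 1))  # 2.1 hours (fastest reported)
```

The contrast shows why the two technologies are sized differently: DRE capacity drives how many units must be bought, while scanner speed drives how long the central count takes.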
Generalizations about the effect of technology on wait times are difficult. In 2001, our mail survey found that 84 percent of jurisdictions nationwide were satisfied with the amount of voter wait time at the polling place during the November 2000 election, but that 13 percent of jurisdictions considered long lines at the polling places to be a major problem. However, we estimated that only 10 percent of jurisdictions nationwide collected information on the average amount of time that it took voters to vote. We were told by some jurisdictions that the length of time voters must wait is affected by ballots that include many races and issues. Some jurisdictions reported that their ballots were so long that it took voters a long time in the voting booth to read them and vote. As a result, lines backed up, and some voters had to wait for over an hour to cast their votes. Officials in one jurisdiction said that their voters experienced long wait times in part because redistricting caused confusion among voters, who often turned up at the wrong polling places. As these examples show, the voting system used is not always a major factor in voter wait times. However, processes that do depend on the system may affect the time that a voter must spend voting. For example, in precincts that use precinct-level counting technology for optical scan ballots, voters may place their ballots in the automatic feed slot of the tabulator. This process can add to voting time if the tabulator is designed to reject ballots that are undervoted, overvoted, or damaged, and the voter is given the opportunity to correct the ballot. Generally, buying DRE units is more expensive than buying optical scan systems. For a broad picture, consider the comparison that we made in 2001 of the costs of purchasing new voting equipment for local election jurisdictions based on three types of equipment: central-count optical scan equipment, precinct-count optical scan equipment, and touchscreen DRE units. 
Based on equipment cost information available in August 2001, we estimated that purchasing optical scan equipment that counted ballots at a central location would cost about $191 million. Purchasing an optical scan counter for each precinct that could notify voters of errors on their ballots would cost about $1.3 billion. Purchasing touchscreen DRE units for each precinct, including at least one unit per precinct that could accommodate blind, deaf, and paraplegic voters, would cost about $3 billion. For a given jurisdiction, the particular cost involved will depend on the requirements of the jurisdiction, as well as the particular equipment chosen. Voting equipment costs vary among types of voting equipment and among different manufacturers and models of the same type of equipment. For example, in 2001, DRE touchscreen unit costs ranged from $575 to $4,500. Similarly, unit costs for precinct-count optical scan equipment ranged from $4,500 to $7,500. Among other things, these differences can be attributed to differences in what is included in the unit cost as well as differences in the characteristics of the equipment. In addition to the equipment unit cost, an additional cost for jurisdictions is the software that operates the equipment, prepares the ballots, and tallies the votes (and in some cases, prepares the election results reports). Our vendor survey showed that although some vendors included the software cost in the unit cost of the voting equipment, most priced the software separately. Software costs for DRE and optical scan equipment could run as high as $300,000 per jurisdiction. The higher costs were generally for the more sophisticated software associated with election management systems. Because the software generally supported numerous equipment units, the software unit cost varied depending on the number of units purchased or the size of the jurisdiction. 
Other factors affecting the acquisition cost of voting equipment are the number and types of peripherals required. In general, DREs require more peripherals than do optical scan systems, which adds to their expense. For example, some DREs require smart cards, smart card readers, memory cartridges and cartridge readers, administrative workstations, and plug-in devices (for increasing accessibility for voters with disabilities). Touchscreen DREs may also offer options that affect the cost of the equipment, such as color versus black and white screens. In addition, most DREs and all optical scan units require voting booths, and most DREs and some precinct-based optical scan tabulators offer options for modems. Precinct-based optical scan tabulators also require ballot boxes to capture the ballots after they are scanned. Once jurisdictions acquire the voting equipment, they must also incur the cost to operate and maintain it, which can vary considerably. For example, in 2001, jurisdictions that used DREs reported a range of costs from about $2,000 to $27,000. Similarly, most jurisdictions that used optical scan equipment reported that operations and maintenance costs ranged from about $1,300 to $90,000. The higher ends of these cost ranges generally related to the larger jurisdictions. In fact, one large jurisdiction that used optical scan equipment reported that its operating costs were $545,000. In addition, the jurisdictions reported that these costs generally included software licensing and upgrades, maintenance contracts with vendors, equipment replacement parts, and supply costs. For decisions on whether to invest in new voting equipment, both initial capital costs (i.e., cost to acquire the equipment) and long- term support costs (i.e., operation and maintenance costs) are relevant. Moreover, these collective costs (i.e., life-cycle costs) need to be viewed in the context of the benefits the equipment will provide over its useful life. 
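The life-cycle perspective described above reduces to simple arithmetic. In this sketch, the unit prices fall within the 2001 ranges quoted earlier, but the unit counts, the 10-year useful life, and the treatment of the reported operations and maintenance figures as annual jurisdiction-wide costs are illustrative assumptions.

```python
def life_cycle_cost(unit_price, units, annual_om, years):
    """Acquisition cost plus operations and maintenance over the
    equipment's useful life (benefits are not monetized here)."""
    return unit_price * units + annual_om * years

# Hypothetical mid-size jurisdiction, 10-year useful life.
dre_cost = life_cycle_cost(unit_price=3_000, units=300,   # touchscreen DREs
                           annual_om=27_000, years=10)
scan_cost = life_cycle_cost(unit_price=6_000, units=60,   # precinct-count optical scan
                            annual_om=20_000, years=10)
print(f"DRE:          ${dre_cost:,}")    # $1,170,000
print(f"Optical scan: ${scan_cost:,}")   # $560,000
```

Even with a lower per-unit price, the DRE total is higher in this example because DREs are purchased per voting station while precinct-count scanners are purchased per polling site, which is consistent with the jurisdiction-wide estimates quoted earlier.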
It is advisable to link these benefits directly to the performance characteristics of the equipment and the needs of the jurisdiction. The performance of any information technology system, including electronic voting systems, is heavily influenced by a number of factors, not the least of which is the quality of the system's design and the effectiveness with which the system is implemented in an operational setting. System design and implementation, in turn, are a function of such things as how well the system's requirements are defined, how well the system is tested, and how well the people who operate and use the system understand and follow the procedures that govern their interaction with it. Our work in 2001 raised concerns about the FEC's voting system standards and showed that practices relative to testing and implementation of voting systems varied across states and local jurisdictions. Like that of any information technology product, the design of a voting system starts with the explicit definition of what the system is to do and how well it is to do it. These requirements are then translated into design specifications that are used to develop the system. Organizations such as the Department of Defense and the Institute of Electrical and Electronics Engineers have developed guidelines for various types of systems requirements and for the processes that are important to managing the development of any system throughout its life cycle. These guidelines address types of product requirements (e.g., functional and performance), as well as documentation and process requirements governing the production of the system. In the case of voting systems, the FEC had assumed responsibility for issuing standards that embodied these requirements, a responsibility that HAVA has since assigned to the EAC. The FEC standards nevertheless remain the operative standards until the EAC updates them.
These FEC-issued standards apply to system hardware, software, firmware, and documentation, and they span prevoting, voting, and postvoting activities. They also address, for example, requirements relating to system security; system accuracy and integrity; system auditability; system storage and maintenance; and data retention and transportation. In addition to these standards, some states and local jurisdictions have specified their own voting system requirements. In 2001, we cited a number of problems with the FEC-issued voting system standards, including missing elements of the standards. Accordingly, we made recommendations to improve the standards. Subsequently, the FEC approved the revised voting system standards on April 30, 2002. According to EAC commissioners with whom we spoke, the commission has inherited the FEC standards, but it plans to work with NIST to revise and strengthen them. To ensure that systems are designed and built in conformance with applicable standards, our work in 2001 found that three levels of tests are generally performed: qualification tests, certification tests, and acceptance tests. For voting systems, the FEC-issued standards called for qualification testing to be performed by independent testing authorities. According to the standards, this testing is to ensure that voting systems comply with both the FEC standards and the systems’ own design specifications. State standards define certification tests, which the states generally perform to determine how well the systems conform to individual state laws, requirements, and practice. Finally, state and local standards define acceptance testing, performed by the local jurisdictions procuring the voting systems. This testing is to determine whether the equipment, as delivered and installed, satisfies all the jurisdiction’s functional and performance requirements. 
Beyond these levels of testing, jurisdictions also perform routine maintenance and diagnostic activities to further ensure proper system performance on election day. Our 2001 work found that the majority of states (38) had adopted the FEC standards then in place, and thus these states required that the voting systems used in their jurisdictions pass qualification testing. In addition, we reported that qualified voting equipment had been used in about 49 percent (±7 percentage points) of jurisdictions nationwide that used DREs and about 46 percent (±7 percentage points) of jurisdictions nationwide that used optical scan technology. However, about 46 percent (±5 percentage points) reported that they did not know whether their equipment had been qualified. As we reported in 2001, 45 states and the District of Columbia told us that they had certification testing programs, and we estimated from our mail survey that about 90 percent of jurisdictions used state-certified voting equipment in the 2000 national election. In addition, we reported that most of the jurisdictions that had recently bought new voting equipment had conducted some form of acceptance testing. However, the processes and steps performed and the people who performed them varied. For example, in one jurisdiction that purchased DREs, election officials stated that testing consisted of a visual inspection, power-up, opening of polls, activation and verification of ballots, and closing of polls. In contrast, officials in another jurisdiction stated that they relied entirely on the vendor to test their DREs. In jurisdictions that used optical scan equipment, acceptance testing generally consisted of running decks of test cards. For example, officials from one jurisdiction stated that they tested each unit with the assistance of the vendor using a vendor-supplied test deck.
Our 2001 work found that the processes and people involved in routine system maintenance, diagnostic, and pre-election day checkout activities varied from jurisdiction to jurisdiction. For example, about 90 percent of jurisdictions nationwide using DRE and optical scan technology had performed routine or manufacturer-suggested maintenance and checkout before the 2000 national election. However, our visits to 27 local election jurisdictions revealed variations in the frequency with which jurisdictions performed such routine maintenance: some performed maintenance right before an election, while others performed maintenance regularly throughout the year. Officials in one jurisdiction that used DREs, for instance, stated that they tested the batteries monthly. Proper implementation of voting systems is a matter of people knowing how to carry out appropriately designed processes to ensure that the technology performs as intended in an operational setting. According to the EAC commissioners, one of their areas of focus will be election administration processes and the people who carry out these processes. Examples include ballot preparation, voter education, recruiting and training poll workers, setting up the polls, running the election, and counting the votes. Ballot preparation. Whether ballots are electronic or paper, they need to be designed in a way that promotes voter understanding when they are actually used. Designing both optical scan and DRE ballots requires consideration of the different types of human interaction entailed and the application of some human factors expertise. For DREs, programming skills need to be applied to create the ballot and enter the ballot information onto an electronic storage medium, which is then uploaded to the unit.
For optical scan systems, paper ballots need to be designed and printed in specified numbers for distribution to polling places; they may also be used for absentee balloting, usually in combination with printed mailing envelopes. Electronic “ballots” in DRE units do not require distribution separate from the distribution of the voting equipment itself; however, the use of DREs means that a separate technique is necessary for absentee ballots—generally paper ballots. Thus, the use of these units generally requires a mixed election system.

Voter education. Implementation of any voting method requires that voters understand how to vote—that is, what conventions are followed. For optical scan systems, voters need to understand how to mark the ballots, what kinds of marker (type of pen or pencil) can be used, and whether a ballot must be marked on both sides. For DRE systems, voters need to understand how to select candidates or issues and understand that their votes are not cast until the cast vote button is pressed; for touchscreens, they need to know how to navigate the various screens presented to them. Voters also need to understand the procedure for write-in votes. In 2001, one jurisdiction had an almost 5 percent overvote rate because voters did not understand the purpose of the ballot section permitting write-in votes: voters selected a candidate on the ballot and then wrote the candidate’s name in the write-in section of the ballot, thus overvoting and spoiling the ballot. In addition to voter education, how the system is programmed to operate can also address this issue. For example, precinct-count optical scan equipment can be programmed to return a voter’s ballot if the ballot is overvoted or undervoted and allow the voter to make changes.

Poll worker recruitment and training. Poll workers need implementation training.
They need to be trained not only in how to assist voters to use the voting system, but also in how to use the technology for the tasks poll workers themselves need to perform; these tasks can vary greatly from jurisdiction to jurisdiction. When more sophisticated voting systems are used at polling sites, jurisdictions may find it difficult to recruit poll workers with the skills to implement and use newer technologies. In 2001, we quoted one election official who said that “it is increasingly difficult to find folks to work for $6 an hour. We are relying on older retired persons—many who can’t/won’t keep up with changes in the technology or laws. Many of our workers are 70+.”

Setting up the polls. Proper setup of polling places raises a number of implementation issues related to the people, processes, and technology involved. For DREs, the need for appropriate power outlets and possibly network connections limits the sites that can be used as polling places. In addition, setting up, initializing, and sometimes networking DRE units are technically challenging tasks; technicians and vendor representatives may be needed to perform these tasks or to assist poll workers with them. Moreover, with DREs, computer security issues come into play that are different from those associated with the paper and pencil tools that voters use in optical scan systems. Besides the units themselves, many DRE systems use cards or tokens that must be physically secured. With optical scan equipment, the ballots must be physically secured. Further, if precinct-based tabulation is used with an optical scan system, the tabulation equipment must be protected from tampering.

Running the election. Many implementation issues associated with running the election are associated with the interaction of voters with the technology. Although both DREs and optical scan systems are based on technologies that most voters will have encountered before, general familiarity is not enough to avoid voter errors.
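One concrete illustration of such errors is the overvote check described earlier for precinct-count optical scan units, which can return a flawed ballot to the voter for correction. A minimal sketch of that check, assuming simple mark counts per contest (illustrative only, not any vendor's actual logic):

```python
def check_ballot(marks_per_contest, votes_allowed):
    """Classify each contest on a scanned ballot so a precinct-count
    unit could return the ballot to the voter for correction.
    marks_per_contest: {contest: number of marks detected}
    votes_allowed:     {contest: maximum selections permitted}
    """
    problems = {}
    for contest, allowed in votes_allowed.items():
        marks = marks_per_contest.get(contest, 0)
        if marks > allowed:
            problems[contest] = "overvote"
        elif marks == 0:
            problems[contest] = "undervote"
    return problems  # empty dict: accept the ballot; otherwise return it

# Marking a candidate AND writing the same name in the write-in space
# registers as two marks in a vote-for-one contest, i.e., an overvote.
assert check_ballot({"Governor": 2}, {"Governor": 1}) == {"Governor": "overvote"}
```

Central-count configurations cannot offer this correction step, which is one reason the precinct-count arrangement is credited with reducing spoiled ballots.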
With optical scan, voter errors are generally related to improperly marked ballots: the wrong marking device, stray marks, too many marks (overvotes), and so on. As described already, DRE equipment is designed to minimize voter error (by preventing overvotes, for example), but problems can also occur with this voting method. For example, many DREs require the voter to push a cast vote button to record the vote. However, some voters forget to push this button and leave the polling place without doing so. Similarly, after pressing the final cast vote button, voters cannot alter their votes. In some cases, this button may be pressed by mistake—for example, a small child being held by a parent may knock or kick the final vote button before the parent has completed the ballot. The technology is not the only factor determining the outcome in these situations, as different jurisdictions have different rules and processes concerning such problems. In 2001, we reported that when voters forgot to press the cast vote button, one jurisdiction required that an election official reach under the voting booth curtain and push the cast vote button without looking at the ballot. However, another jurisdiction required that an election official invalidate the ballot and reset the machine for a new voter.

Counting the votes. Finally, implementation of the processes for counting votes is affected both by the technology used and by local requirements. With DREs, votes are collected within each unit. Some contain removable storage media that can be taken from the voting unit and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. As described earlier, optical scan systems also vary in the way votes are counted, depending on whether precinct-based or centralized tabulation equipment is used.
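Whichever way the totals travel (removable media or electronic transmission), the central count reduces to aggregating per-precinct totals. A minimal sketch, with hypothetical data shapes:

```python
from collections import Counter

def central_tally(precinct_totals):
    """Combine per-precinct vote totals (for example, read from each
    unit's removable storage medium or received by transmission)
    into jurisdiction-wide totals."""
    overall = Counter()
    for totals in precinct_totals.values():
        overall.update(totals)
    return dict(overall)

uploads = {
    "Precinct 1": {"Smith": 120, "Jones": 95},
    "Precinct 2": {"Smith": 80, "Jones": 110},
}
assert central_tally(uploads) == {"Smith": 200, "Jones": 205}
```

The arithmetic is trivial; the assurance questions the testimony raises concern whether the per-precinct inputs to this aggregation are themselves accurate and untampered.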
For optical scan systems, officials follow state and local regulations and processes to determine whether and how to count ballots that cannot be read by the tabulation equipment. Counting such ballots may involve decisions on how to judge voter intent, which are also generally governed by state and local regulations and processes. In addition, depending on the type of voting technology used, ways to perform recounts may differ. For optical scan devices, recounts can be both automatic and manual; as in the original vote counting, officials make decisions on counting ballots that cannot be read by the tabulation equipment and on voter intent. With DREs there is no separate paper ballot or record of the voter’s intention, and therefore election officials rely on the information recorded in the machine’s memory: that is, permanent (read only) electronic images of each of the “marked” ballots. The assurance that these images are an accurate record of the vote depends on several things, including the proper implementation of the processes involved in designing, maintaining, setting up, and using the technology.

In 2001, we identified four key challenges confronting local jurisdictions in effectively using and replacing voting technologies. These challenges are not dissimilar to those faced by any organization seeking to leverage modern technology to support mission operations. The first two challenges are particularly relevant in the near term, as jurisdictions look to position themselves for this year’s national elections. The latter two are more relevant to jurisdictions’ strategic acquisition and use of modern voting systems.

Maximizing the performance of the voting systems that jurisdictions have and plan to use in November 2004 means taking proactive steps between now and then to best ensure that systems perform as intended. These steps include activities aimed at securing, testing, and maintaining these systems.
We reported in 2001 that although the vast majority of jurisdictions performed security, testing, and maintenance activities in one form or another, the extent and nature of these activities varied among jurisdictions and depended on the availability of resources (financial and human capital) committed to them. The challenge facing all voting jurisdictions will be to ensure that these activities are fully and properly performed, particularly in light of the serious security concerns that have been reported with DREs. As previously discussed in this testimony, jurisdictions need to manage the triad of people, processes, and technology as interrelated and interdependent parts of the total voting process. Given the amount of time that remains between now and the November 2004 elections, jurisdictions’ voting system performance is more likely to be influenced by improvements in poll worker system operation training, voter education about system use, and vote casting procedures than by changes to the systems themselves. The challenge for voting jurisdictions is thus to ensure that these people and process issues are dealt with effectively.

Reliable measures and objective data are needed for jurisdictions to know whether the technology being used is meeting the needs of the user communities (both the voters and the officials who administer the elections). In 2001, we reported that the vast majority of jurisdictions were satisfied with the performance of their respective technologies in the November 2000 elections. However, this satisfaction was mostly based not on objective data measuring performance, but rather on the subjective impressions of election officials. Although these impressions should not be discounted, informed decisionmaking on voting technology investment requires more objective data. The challenge for jurisdictions is to define measures and begin collecting data so that they can definitively know how their systems are performing.
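As one example of such an objective measure, election researchers often compute a residual vote rate: ballots that recorded no valid vote in a contest, whether overvoted or left blank, as a share of ballots cast. The metric choice and names here are ours, not something the 2001 report prescribes:

```python
def residual_vote_rate(ballots_cast, valid_votes):
    """Residual vote rate for one contest: the share of ballots cast
    that recorded no valid vote in it (overvotes plus undervotes)."""
    if ballots_cast == 0:
        return 0.0
    return (ballots_cast - valid_votes) / ballots_cast

# 10,000 ballots cast with 9,700 valid votes in the contest
# yields a residual vote rate of 3 percent.
assert abs(residual_vote_rate(10_000, 9_700) - 0.03) < 1e-9
```

Tracked per precinct and per technology over successive elections, a measure like this gives jurisdictions the kind of objective performance data the testimony says subjective impressions cannot supply.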
Jurisdictions must be able to ensure that the technology will provide benefits over its useful life that are commensurate with life-cycle costs (acquisition as well as operations and maintenance) and that these collective costs are affordable and sustainable. In 2001, we reported that the technology type and configuration that jurisdictions employed varied depending on each jurisdiction’s unique circumstances, such as size and resource constraints, and that reliable data on life-cycle costs and benefits were not available. The challenge for jurisdictions is to view and treat voting systems as capital investments and to manage them as such, including basing decisions on technology investments on clearly defined requirements and reliable analyses of quantitative and qualitative return on investment.

In closing, I would like to say again that electronic voting systems are an undeniably critical link in the overall election chain. While this link alone cannot make an election, it can break one. The problems that some jurisdictions have experienced and the serious concerns being raised by security experts and others highlight the potential for difficulties in the upcoming 2004 national elections if the challenges that we cited in 2001 and reiterate in this testimony are not effectively addressed. Although the EAC only recently began operations and is not yet at full strength, it needs to remain vigilant in its efforts to ensure that jurisdictions and voters are educated and well informed about the proper implementation and use of electronic voting systems, and to ensure that jurisdictions take the appropriate steps—related to people, process, and technology—that are needed regarding security, testing, and maintenance. More strategically, the EAC needs to move swiftly to strengthen the voluntary voting system guidelines and the testing associated with enforcing these guidelines.
Critical to the commission’s ability to do this will be the adequacy of resources at its disposal and the degree of cooperation it receives from entities at all levels of government. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time.

For further information, please contact Randolph C. Hite at (202) 512-6256 or by e-mail at [email protected]. Other key contributors to this testimony were Barbara S. Collier, Deborah A. Davis, Richard B. Hung, John M. Ortiz, Jr., Maria J. Santos, and Linda R. Watson.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The technology used to cast and count votes is one aspect of the multifaceted U.S. election process. GAO examined voting technology, among other things, in a series of reports that it issued in 2001 following the problems encountered in the 2000 election. In October 2002, the Congress enacted the Help America Vote Act, which, among other things, established the Election Assistance Commission (EAC) to assist in the administration of federal elections. The act also established a program to provide funds to states to replace older punch card and lever machine voting equipment. As this older voting equipment has been replaced with newer electronic voting systems over the last 2 years, concerns have been raised about the vulnerabilities associated with certain electronic voting systems. Among other things, GAO's testimony focuses on attributes on which electronic voting systems can be assessed, as well as design and implementation factors affecting their performance. GAO also describes the immediate and longer-term challenges confronting local jurisdictions in using any type of voting equipment, particularly electronic voting systems.

An electronic voting system, like other automated information systems, can be judged on several bases, including how well its design provides for security, accuracy, ease of use, and efficiency, as well as its cost. For example, direct recording electronic systems offer advantages in ease of use because they can have features that accommodate voters with various disabilities, and they protect against common voter errors, such as overvoting (voting for more candidates than is permissible); a disadvantage of such systems is their capital cost and frequent lack of an independent paper audit trail. Advantages of optical scan voting equipment (another type of electronic voting system) include capital cost and the enhanced security associated with having a paper audit trail; disadvantages include lower ease of use, such as limited ability to accommodate voters with disabilities.

One important determinant of voting system performance is how it is designed and developed, including the testing that determines whether the developed system performs as designed. In the design and development process, a critical factor is the quality of the specified system requirements as embodied in applicable standards or guidance. For voting technology, these voluntary standards have historically been problematic; the EAC has now been given responsibility for voting system guidelines, and it intends to update them. The EAC also intends to strengthen the process for testing voting system hardware and software. A second determinant of performance is how the system is implemented. In implementing a system, it is critical to have people with the requisite knowledge and skills to operate it according to well-defined and understood processes. The EAC also intends to focus on these people and process factors in its role of assisting in the administration of elections.

In the upcoming 2004 national election and beyond, the challenges confronting local jurisdictions in using electronic voting systems are similar to those facing any technology user. These include both immediate and longer-term challenges.
The new IRS Commissioner and IRS management have expressed a commitment to ensure that taxpayers are treated properly. Even so, problems with current management information systems make it impossible to determine the extent to which allegations of taxpayer abuse and other taxpayer complaints have been reported, or the extent to which actions have been taken to address the complaints and prevent recurrence of systemic problems. That is because, as we reported to you in 1996, information systems currently maintained by IRS, Treasury OIG, and the Department of Justice do not capture the necessary management information. These systems were designed as case tracking and resource management systems intended to serve the management information needs of particular functions, such as IRS Inspection’s Internal Security Division. None of these systems include specific data elements for “taxpayer abuse”; instead, they contain data elements that encompass broad categories of misconduct, taxpayer problems, and legal and administrative actions. Information contained in these systems relating to allegations and investigations of taxpayer abuse and other taxpayer complaints is not easily distinguishable from information on allegations and investigations that do not involve taxpayers. Consequently, as currently designed, the information systems cannot be used individually or collectively to account for IRS’ handling of instances of alleged taxpayer abuse.

Officials of several organizations indicated to us that several information systems might include information related to taxpayer abuse allegations—five maintained by IRS, one by Treasury OIG, and two by Justice. (See attachment for a description of these systems.) These officials also said, however, that the systems could not be used to identify such instances without a review of specific case files.
From our review of data from these systems for our 1996 report, we concluded that none of them, either individually or collectively, have common or comparable data elements that can be used to identify the number or outcomes of taxpayer abuse allegations or related investigations and actions. Rather, each system was developed to provide information for a particular organizational function, usually for case tracking, inventory, or other managerial purposes relative to the mission of that particular function. While each system has data elements that could reflect how some taxpayers have been treated, the data elements vary and in certain cases may relate to the same allegation and same IRS employee. Without common or comparable data elements and unique allegation and employee identifiers, these systems do not collect information in a consistent manner that could be used to accurately account for all allegations of taxpayer abuse. As we also reported in our 1996 report, IRS has not historically had a definition of taxpayer abuse. In response to the report, IRS adopted a definition for taxpayer complaints that included the following elements: (1) allegations of IRS employees’ violating laws, regulations, or the IRS Code of Conduct; (2) overzealous, overly aggressive, or otherwise improper behavior of IRS employees in discharging their official duties; and (3) breakdowns in IRS systems or processes that frustrate taxpayers’ ability to resolve issues through normal channels. Also in response to the report, IRS established a Customer Feedback System in October 1997, which IRS managers are to use to report allegations of improper employee behavior toward taxpayers. IRS used this system to support its first required annual reporting to Congress on taxpayers’ complaints through December 31, 1997. IRS officials acknowledged, however, that there were changes needed to ensure the accuracy and consistency of the reported data. 
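The point about common data elements and unique identifiers can be illustrated with a toy example: if an investigations system and a disciplinary-actions system both tagged their records with the same allegation and employee identifiers, a single allegation could be followed across them. All field names below are purely hypothetical; they are not the layouts of any actual IRS, Treasury OIG, or Justice system:

```python
# Hypothetical records; no actual system uses these layouts.
investigation_records = [
    {"allegation_id": "A-001", "employee_id": "E-42",
     "outcome": "substantiated"},
]
discipline_records = [
    {"allegation_id": "A-001", "employee_id": "E-42",
     "action": "reprimand"},
]

def follow_allegations(investigations, actions):
    """With shared identifiers, each allegation can be traced from its
    investigation outcome to the resulting disciplinary action; without
    them, this join is impossible and case files must be read by hand."""
    by_key = {(r["allegation_id"], r["employee_id"]): r for r in actions}
    return [
        (inv, by_key.get((inv["allegation_id"], inv["employee_id"])))
        for inv in investigations
    ]

traced = follow_allegations(investigation_records, discipline_records)
assert traced[0][1]["action"] == "reprimand"
```

The join key is the whole argument: absent shared identifiers, the systems described in the report can each answer questions about their own function, but not account end to end for how an allegation was handled.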
The 1988 amendments to the Inspectors General Act, which created the Treasury OIG, did not consolidate IRS Inspection into the Treasury OIG, but authorized the Treasury OIG to perform oversight of IRS Inspection and conduct audits and investigations of the IRS as appropriate. The act also provided the Treasury OIG with access to taxpayer data under the provisions of Section 6103 of the Internal Revenue Code as needed to conduct its work, with some recording and reporting requirements for such access. Currently, Treasury OIG is responsible for investigating allegations of misconduct, waste, fraud, and abuse involving senior IRS officials, GS-15s and above, as well as IRS Inspection employees. Treasury OIG also has oversight responsibility for the overall operations of IRS Inspection. Since November 1994, Treasury OIG has had increased flexibility for referring allegations involving GS-15s to IRS for investigation or administrative action. The need to make more referrals of GS-15 level cases was due to resource constraints and an increased emphasis by Treasury OIG on investigations involving criminal misconduct and procurement fraud across all Treasury bureaus. In fiscal year 1996, Treasury OIG conducted 43 investigations—14 percent of the 306 allegations it received—many of which implicated senior IRS officials. Treasury OIG officials said that these investigations rarely involved allegations of taxpayer abuse because senior IRS officials and IRS Inspection employees usually do not interact directly with taxpayers. The IRS Chief Inspector, who reports directly to the IRS Commissioner, is responsible for conducting IRS investigations and internal audits done by IRS Inspection, as well as for coordinating IRS Inspection activities with Treasury OIG. IRS Inspection is to work closely with Treasury OIG in planning and performing its duties. 
IRS Inspection is also to provide information on its activities and results, as well as constraints or limitations placed on its activities, to Treasury OIG for incorporation into Treasury OIG’s Semiannual Report to Congress. Disputes that the IRS Chief Inspector may have with the IRS Commissioner are to be resolved through Treasury OIG and the Secretary of the Treasury, to whom the Treasury OIG reports.

In September 1992, Treasury OIG issued Treasury Directive 40-01, which summarizes the authority vested in Treasury OIG and the reporting responsibilities of various Treasury bureaus. Treasury law enforcement bureaus, including IRS, are to (1) provide a monthly report to Treasury OIG concerning significant internal investigative and audit activities; (2) notify Treasury OIG immediately upon receiving allegations involving senior IRS officials, internal affairs employees, or IRS Inspection employees; and (3) submit written responses to Treasury OIG detailing actions taken or planned in response to Treasury OIG investigative reports and Treasury OIG referrals for agency management action. Under procedures established in a November 1994 Memorandum of Understanding between Treasury OIG and the IRS Commissioner, the requirement for immediate referrals to Treasury OIG of all misconduct allegations covered in the Directive was reiterated and supplemented. Treasury OIG has the discretion to refer any allegation to IRS for appropriate action, that is, either investigation by IRS Inspection or administrative action by IRS management. If IRS officials believe that an allegation referred by Treasury OIG warrants Treasury OIG attention, they may refer the case back to Treasury OIG, requesting that Treasury OIG conduct an investigation.
During our review for the 1996 report, Treasury OIG officials advised us that under the original 1992 Directive, they generally handled most allegations implicating Senior Executive Service (SES) and IRS Inspection employees, while reserving the right of first refusal on GS-15 employees. Under the procedures adopted in 1994, which were driven in part by resource constraints and Treasury OIG’s need to do more criminal misconduct and procurement fraud investigations across all Treasury bureaus, Treasury OIG officials stated they have generally referred allegations involving GS-15s and below to IRS for investigation or management action. The same is true for allegations against any employees, including those in the SES, involving administrative matters and allegations dealing primarily with disputes of tax law interpretation. Of the allegations Treasury OIG received in fiscal year 1996, it referred 214 to IRS—either for investigation or administrative action—investigated 43, and closed 9 others for various administrative reasons. Treasury OIG officials stated that, based on their investigative experience, most allegations of wrongdoing by IRS staff that involve taxpayers do not involve senior-level IRS officials or IRS Inspection employees. Rather, these allegations typically involve IRS Examination and Collection employees who most often interact directly with taxpayers.

Treasury OIG officials are to assess the adequacy of IRS’ actions in response to Treasury OIG investigations and referrals as follows: (1) IRS is required to make written responses on actions taken within 90 days and 120 days, respectively, on Treasury OIG investigative reports of completed investigations and Treasury OIG referrals for investigations or management action; (2) Treasury OIG investigators are to assess the adequacy of IRS’ responses before closing the Treasury OIG case; and (3) Treasury OIG’s Office of Oversight is to assess the overall effectiveness of IRS Inspection capabilities and systems through periodic operational reviews.
In addition to assessing IRS’ responses to Treasury OIG investigations and referrals, each quarter the Treasury Inspector General, Deputy Inspector General, and Assistant Inspector General for Investigations are to brief the IRS Commissioner, IRS Deputy Commissioner, and Chief Inspector on the status of allegations involving senior IRS officials, including those being investigated by Treasury OIG and those awaiting IRS action.

Since 1996, there has been some indication of problems between the two offices. Specifically, in its most recent Semiannual Report to Congress, Treasury OIG concluded, after reviewing IRS’ compliance with Treasury Directive 40-01, that “both IRS and Treasury OIG need to make improvements, particularly in the area of timely, prompt referrals.” It is not clear what steps Treasury OIG officials plan to take to resolve the problems.

At the Committee’s September 1997 IRS oversight hearings, some IRS employees raised concerns about the effectiveness of IRS Inspection and its independence from undue pressures and influence from IRS management. Since that time, debate has continued on the issue of where IRS Inspection would be optimally placed organizationally to provide assurance that taxpayers are treated properly. This is not a new issue. During the debate preceding the passage of the 1988 amendments to the Inspectors General Act that established the Treasury OIG and left IRS Inspection intact, as well as on several other occasions since, concerns have been raised about the desirability of having a separate IRS Inspection Service. Historically, we have supported a strong statutory Treasury OIG, believing that such an office could provide independent oversight of the Department, including IRS.
That is, reviews of IRS addressed to the Secretary of the Treasury, rather than the IRS Commissioner, should improve executive branch oversight of tax administration in general and provide greater assurance that taxpayers are treated properly, fairly, and courteously. We have also noted that under the statute, Treasury OIG is authorized to enhance the protection of taxpayer rights by conducting periodic independent reviews of IRS dealings with taxpayers and IRS procedures affecting taxpayers. We have also recognized that, to meet his managerial responsibilities, the IRS Commissioner needs an internal capability to review the effectiveness of IRS programs. IRS Inspection has provided Commissioners with investigative and audit capabilities to evaluate IRS programs since 1952. IRS Inspection currently has roughly 1,200 authorized staff in its budget who are split about equally between its two divisions, Internal Security and Internal Audit. The Treasury OIG, on the other hand, has fewer than 300 authorized staff to provide oversight of IRS Inspection activities as well as to carry out similar investigations and audits for Treasury and its 10 other very diverse bureaus. IRS officials have been concerned that if IRS Inspection is transferred to the Treasury OIG, the transferred resources will be used to investigate or audit other Treasury bureaus to the detriment of critical IRS oversight. The Inspectors General Act provides guidance on the authorities, qualifications, safeguards, resources, and reporting requirements needed to ensure independent investigative and audit capabilities. No matter where IRS Inspection is placed organizationally, certain mechanisms need to be in place to ensure that it is held accountable and can achieve its mission without undue pressures or influence. 
For example, a key component of accountability and protection against undue pressures or influence is the reporting of investigative and audit activities and findings both to those responsible for agency management and to those responsible for oversight.

Another IRS organization responsible for protecting the rights of taxpayers is the Taxpayer Advocate. The position was originally codified in the Taxpayer Bill of Rights 1 as the Taxpayer Ombudsman, although IRS has had the underlying Problem Resolution Program (PRP) in place since 1979. In the Taxpayer Bill of Rights 2, the Taxpayer Advocate and the Office of the Taxpayer Advocate replaced the Taxpayer Ombudsman position and the headquarters PRP staff. The authorities and responsibilities of this new office were expanded, for example, to address taxpayer cases involving IRS enforcement actions and refunds. The most significant change may have been to emphasize that the Advocate and those assigned to the Advocate’s Office are expected to view issues from the taxpayers’ perspective and find ways to alleviate individual taxpayer concerns as well as systemic problems. The Advocate’s Office reported that it resolved 237,103 cases in fiscal year 1997. Its reported activities included establishing cases to resolve taxpayer concerns, providing relief to taxpayers with hardships, resolving cases in a proper and timely manner, and analyzing and addressing factors contributing to systemic problems. The report also discussed activities and initiatives and proposed solutions for systemic problems.

Even with the enhanced legislative authorities and numerous activities and initiatives, questions about the effectiveness of the Taxpayer Advocate persist. The questions relate to the Advocate’s (1) organizational independence within IRS; (2) resource commitments to achieve its mission; and (3) ability to identify and correct systemic problems adversely affecting taxpayers.
We have recently initiated a study of the Advocate’s Office to address these questions about the Advocate’s effectiveness.

The first question centers on the Advocate’s organizational placement at headquarters and field offices. The Taxpayer Advocate reports to the IRS Commissioner. Taxpayer Advocates in the field report to the IRS Regional Commissioner, District Director, or Service Center Director in their particular geographic area. Thus, these field advocate officials report to the IRS executives who are responsible for the operations that may have frustrated taxpayers and created the Advocate’s caseloads.

The second question involves the manner in which the Advocate’s Office is staffed and funded. For fiscal year 1998, the Advocate’s Office was authorized 442 positions to handle problem resolution duties. These authorized Advocate Office staff must rely on assistance from more than 1,000 other field employees, on a full-time or part-time basis, to carry out these duties. These 1,000 employees are funded by their functional office, such as Collection or Customer Service. While working PRP cases, these employees receive program direction and guidance from the Advocate’s Office. They are administratively responsible to their Regional Commissioners, District Directors, or Service Center Directors—again, the same managers responsible for the operations that may have frustrated taxpayers.

The third question was debated during oversight hearings last year regarding the Advocate’s ability to identify and correct IRS systems or processes that have frustrated taxpayers. The question historically has been the amount of attention afforded the analysis of problem resolution cases to identify systemic issues in light of the Advocate’s workload and available staff. The more recent question, however, has been the ability of the Advocate’s Office to bring about needed administrative or legislative changes to address systemic problems.
An assessment is needed of whether issues such as these detract from the Advocate’s ability to focus on its overall mission. Our recently initiated study is designed to provide such an assessment of the Advocate’s effectiveness. Two of the IRS systems—Inspection’s Internal Security Management Information System (ISMIS) and Human Resources’ Automated Labor and Employee Relations Tracking System (ALERTS)—are designed to capture information on cases involving employee misconduct, which may also involve taxpayer abuse. ISMIS is designed to determine the status and outcome of Internal Security investigations of alleged employee misconduct; ALERTS is designed to track disciplinary actions taken against employees. While ISMIS and ALERTS both track aspects of alleged employee misconduct, these systems do not share common data elements or otherwise capture information in a consistent manner. IRS also has three systems that include information on concerns raised by taxpayers. These systems include two maintained by the Office of Legislative Affairs—the Congressional Correspondence Tracking System and the IRS Commissioner’s Mail Tracking System—as well as the Taxpayer Advocate’s system known as the Problem Resolution Office Management Information System (PROMIS). The two Legislative Affairs systems are designed to track taxpayer inquiries, including those made through congressional offices, to ensure that responses are provided by appropriate IRS officials. PROMIS is to track similar inquiries to ensure that taxpayers’ problems are resolved and to determine whether the problems are recurring in nature. Treasury OIG has an information system known as the Treasury OIG Office of Investigations Management Information System. It is designed to track the status and outcomes of Treasury OIG investigations as well as the status and outcomes of actions taken by IRS in response to Treasury OIG investigations and referrals. Justice has two information systems that include data that may be related to taxpayer abuse allegations and investigations.
The Executive Office for the U.S. Attorneys maintains a Centralized Caseload System that is designed to consolidate the status and results of civil and criminal prosecutions conducted by U.S. Attorneys throughout the country. Cases involving criminal misconduct by IRS employees are to be referred to and may be prosecuted by the U.S. Attorney in the particular jurisdiction in which the alleged misconduct occurred. The Tax Division of Justice also maintains a Case Management System that is designed for case tracking, time reporting, and statistical analysis of litigation cases the Division conducts. Lawsuits against either IRS or IRS employees are litigated by the Tax Division, with representation provided to IRS employees if the Tax Division determines that the actions taken by the employees were within the scope of employment.
| GAO discussed the: (1) adequacy of the Internal Revenue Service's (IRS) controls over the treatment of taxpayers; (2) responsibilities of the Offices of the Chief Inspector (IRS Inspection) and the Department of the Treasury Office of the Inspector General (OIG) in investigating allegations of taxpayer abuse and employee misconduct; (3) organizational placement of IRS Inspection; and (4) role of the Taxpayer Advocate in handling taxpayer complaints. GAO noted that: (1) in spite of IRS management's heightened awareness of the importance of treating taxpayers properly, GAO remains unable to reach a conclusion as to the adequacy of IRS' controls to ensure fair treatment; (2) this is because IRS and other federal information systems that collect information related to taxpayer cases do not capture the necessary management information to identify instances of abuse that have been reported and actions taken to address them and to prevent recurrence of those problems; (3) Treasury OIG and IRS Inspection have separate and shared responsibilities for investigating allegations of employee misconduct and taxpayer abuse; (4) IRS Inspection has primary responsibility for investigating and auditing IRS employees, programs, and internal controls; (5) Treasury OIG is responsible for the oversight of IRS Inspection investigations and audits and may perform selective investigations and audits at IRS; (6) the two offices share some responsibilities as reflected in a 1994 IRS Commissioner-Treasury OIG Memorandum of Understanding; (7) in the Committee's September 1997 hearings, questions were raised about the independence of IRS Inspection; (8) subsequently, suggestions have been made to remove IRS Inspection from IRS and place it in Treasury OIG; (9) regardless of where IRS Inspection is placed organizationally, within IRS or Treasury OIG, mechanisms need to be in place to ensure its accountability and its ability to focus on its mission independent from undue pressures or 
influences; (10) the Inspectors General Act, as amended in 1988, provides guidance on the authorities, qualifications, safeguards, resources, and reporting requirements needed to ensure independent investigation and audit capabilities; (11) in 1979, the Taxpayer Ombudsman was established administratively within IRS to advocate for taxpayers and assume authority for IRS' Problem Resolution Program; (12) in 1988, this position was codified in the Taxpayer Bill of Rights 1; (13) in 1996, the Taxpayer Bill of Rights 2 replaced the Ombudsman with the Taxpayer Advocate and expanded the responsibilities of the new Office of the Taxpayer Advocate; (14) the Advocate was charged under the legislation with helping taxpayers resolve their problems with the IRS and with identifying and resolving systemic problems; and (15) it is now nearly 20 years after the creation of the first executive-level position in IRS to advocate for taxpayers, and questions about the effectiveness of the advocacy continue to be asked. |
Under A-76, commercial activities may be converted to or from contractor performance either by direct conversion or by cost comparison. Under direct conversion, specific conditions allow commercial activities to be moved to or from contract performance without a cost comparison study (for example, for activities involving 10 or fewer civilians). Generally, however, commercial functions are to be converted to or from contract performance by cost comparison, whereby the estimated cost of government performance of a commercial activity is compared to the cost of contractor performance in accordance with the principles and procedures set forth in Circular A-76 and the revised supplemental handbook. As part of this process, the government identifies the work to be performed (described in the performance work statement), prepares an in-house cost estimate based on its most efficient organization, and compares it with the winning offer from the private sector. According to A-76 guidance, an activity currently performed in-house is converted to performance by the private sector if the private sector offer is either 10 percent lower than the direct personnel costs of the in-house cost estimate or $10 million less (over the performance period) than the in-house cost estimate. OMB established this minimum cost differential to ensure that the government would not convert performance for marginal savings. The handbook also provides an administrative appeals process. An eligible appellant must submit an appeal to the agency in writing within 20 days of the date that all supporting documentation is made publicly available. Appeals are supposed to be adjudicated within 30 days after they are received. Private sector offerors who believe that the agency has not complied with applicable procedures have additional avenues of appeal. They may file a bid protest with the General Accounting Office or file an action in a court of competent jurisdiction.
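The minimum cost differential described above amounts to a simple decision rule. The sketch below is illustrative only: the function name and the way the cost figures are passed in are assumptions, while the 10 percent and $10 million thresholds come from the circular as summarized in this statement.

```python
def should_convert_to_contractor(in_house_total_cost, in_house_personnel_cost,
                                 private_offer):
    """Sketch of the A-76 minimum cost differential test described above.

    The activity converts to private-sector performance only if the private
    offer beats the in-house estimate by more than 10 percent of direct
    personnel costs, or by more than $10 million over the performance period.
    """
    differential = in_house_total_cost - private_offer
    return (differential > 0.10 * in_house_personnel_cost
            or differential > 10_000_000)

# A $4.5M offer against a $5.0M in-house estimate ($4.0M personnel):
# the $500,000 differential exceeds 10 percent of personnel costs.
print(should_convert_to_contractor(5_000_000, 4_000_000, 4_500_000))  # True
```

A $4.7 million offer against the same estimate would fail both tests, so the work would stay in-house despite a $300,000 differential, which is the "marginal savings" the differential is meant to screen out.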
Circular A-76 requires agencies to maintain annual inventories of commercial activities performed in-house. A similar requirement was included in the 1998 FAIR Act, which directs agencies to develop annual inventories of their positions that are not inherently governmental. The fiscal year 2000 inventory identified approximately 850,000 full-time equivalent commercial-type positions, of which approximately 450,000 were in DOD. OMB has not yet released DOD’s inventory for 2001. DOD has been the leader among federal agencies in recent years in its use of OMB Circular A-76, with very limited use occurring in other agencies. However, in 2001, OMB signaled its intention to direct greater use of the circular on a government-wide basis. In a March 9, 2001, memorandum to the heads and acting heads of departments and agencies, the OMB Deputy Director directed agencies to take action in fiscal year 2002 to directly convert or complete public-private competitions of not less than 5 percent of the full-time equivalent positions listed in their FAIR Act inventories. Subsequent guidance expanded the requirement by 10 percent in 2003, with the ultimate goal of competing at least 50 percent. In 1999, DOD began to augment its A-76 program with what it terms strategic sourcing. Strategic sourcing may encompass consolidation, restructuring, or reengineering activities; privatization; joint ventures with the private sector; or the termination of obsolete services. Strategic sourcing can involve functions or activities regardless of whether they are considered inherently governmental, military essential, or commercial. I should add that these actions are recognized in the introduction to the A-76 handbook as being part of a larger body of options, in addition to A-76, that agencies must consider as they contemplate reinventing government operations. 
Strategic sourcing initially does not involve A-76 competitions between the public and the private sector, and the Office of the Secretary of Defense and service officials have stressed that strategic sourcing may produce smarter decisions because it determines whether an activity should be performed at all before deciding who should perform it. However, these officials also emphasized that strategic sourcing is not intended to take the place of A-76 studies and that positions examined under the broader umbrella of strategic sourcing may be subsequently considered for study under A-76. After several years of limited use of Circular A-76, the deputy secretary of defense gave renewed emphasis to the A-76 program in August 1995, when he directed the services to make outsourcing of support activities a priority in an effort to reduce operating costs and free up funds to meet other priority needs. The effort was subsequently incorporated as a major initiative under the then-secretary’s Defense Reform Initiative, and the program became known as competitive sourcing—in recognition of the fact that either the public or the private sector could win competitions. A-76 goals for the number of positions to be studied have changed over time, and study targets for the out-years are lower than in previous years. However, future study targets could be affected by the current administration’s emphasis on reliance on the private sector for commercial activities. The number of positions planned for study and the timeframes for accomplishing those studies have changed over time in response to difficulties in identifying activities to be studied. In 1997, DOD’s plans called for about 171,000 positions to be studied by the end of fiscal year 2003. In February 1999, we reported that DOD had increased this number to 229,000 but had reduced the number of positions to be studied in the initial years of the program.
In August 2000, DOD decreased the number of positions to be studied under A-76 to about 203,000, added about 42,000 Navy positions for consideration under strategic sourcing, and extended the program to fiscal year 2005. Last year we noted that DOD had reduced the planned number to study to approximately 160,000 positions under an expanded time frame extending from 1997 to 2007. It also planned to study about 120,000 positions under strategic sourcing during that timeframe. More recently, DOD officials told us that the A-76 study goal for fiscal years 1997-2007 is now approximately 183,000 positions—135,000 between fiscal years 1997-2001, and 48,000 between fiscal years 2002-2007. The department projects that it will study approximately 144,000 positions under strategic sourcing. The extent to which the A-76 study goals change in the future could depend on changes in inventories of commercial activities and on continuing management emphasis on competitive sourcing. Although DOD’s fiscal year 2001 inventory of commercial activities has not been publicly released, we have noted some reductions from one inventory to the next as the department has gained experience in completing them. In reporting on our analysis of DOD’s initial FAIR Act inventory, we cited the need for more consistency in identifying commercial activities. We found that the military services and defense agencies did not always consistently categorize similar activities. We have not had an opportunity to analyze more recent inventories to determine to what extent improved guidance may have helped to increase consistency in categorizing activities. At the same time, we also previously reported that a number of factors could reduce the number of additional functions studied under A-76.
For example, we noted that factors such as geographic dispersion of positions and the inability to separate commercial activities from inherently governmental activities could limit the number of inventory positions studied. Likewise, the inventory already makes provision for reducing the number of positions eligible for competition, such as where performance by federal employees is needed because of national security or operational risk concerns. On the other hand, The President’s Management Agenda, Fiscal Year 2002, notes: “Agencies are developing specific performance plans to meet the 2002 goal of completing public-private or direct conversion competition on not less than five percent of the full-time equivalent employees listed on the FAIR Act inventories. The performance target will increase by 10 percent in 2003.” Additionally, DOD’s Quadrennial Defense Review Report, September 30, 2001, states that the department should “Focus DOD ‘owned’ resources on excellence in those areas that contribute directly to warfighting. Only those functions that must be performed by DOD should be kept by DOD. Any function that can be provided by the private sector is not a core government function. Traditionally, ‘core’ has been very loosely and imprecisely defined and too often used as a way of protecting existing arrangements.” We have not assessed to what extent efforts in this area are likely to strengthen emphasis on A-76. As we tracked DOD’s progress in implementing its A-76 program since the mid- to late 1990s, we identified a number of challenges and concerns that have surrounded the program—issues that other agencies may encounter as they seek to respond to the administration’s emphasis on competitive sourcing.
They include (1) the time required to complete the studies, (2) cost and resources to conduct and implement the studies, (3) selecting and grouping positions to compete, and (4) developing and maintaining reliable estimates of projected savings expected from the competitions. These need not be reasons to avoid A-76 studies but are factors that need to be taken into consideration in planning for the studies. Individual A-76 studies in DOD have taken longer than initially projected. In launching its A-76 program, some DOD components made overly optimistic assumptions about the amount of time needed to complete the competitions. For example, the Army initially projected that it would take 13 to 21 months to complete studies, depending on their size. The Navy initially projected completing its studies in 12 months. The numbers were subsequently adjusted upward, and the most recent available data indicate that the studies take on average about 22 months for single-function and 31 months for multifunction studies. Agencies need to keep these timeframes in mind when projecting resources required to support the studies and timeframes for when savings are expected to be realized—and may need to revisit these projections as they gain experience under the program. Once DOD components found that the studies were taking longer than initially projected, they realized that a greater investment of resources would be needed than originally planned to conduct the studies. For example, the 2001 president’s budget showed a wide range of projected study costs, from about $1,300 per position studied in the Army to about $3,700 in the Navy. Yet, various officials expressed concern that these figures underestimated the costs of performing the studies. While the costs they cited varied, some ranged up to several thousand dollars per position. One factor raising costs was the extent to which the services used contractors to facilitate completion of the studies. 
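The durations and per-position costs cited above imply a rough planning envelope for any proposed study. The function below is a hypothetical planning sketch using only the figures quoted in this statement (22 and 31 months on average; roughly $1,300 to $3,700 per position); it is not an official DOD costing method.

```python
# Average study durations and per-position cost range cited in this statement.
AVG_MONTHS = {"single_function": 22, "multifunction": 31}
COST_PER_POSITION = (1_300, 3_700)  # low and high ends, in dollars

def project_study(positions, study_type="single_function"):
    """Rough duration and cost envelope for an A-76 study (illustrative only)."""
    low, high = COST_PER_POSITION
    return {
        "months": AVG_MONTHS[study_type],
        "cost_low": positions * low,
        "cost_high": positions * high,
    }

# A hypothetical 500-position multifunction study: about 31 months,
# with study costs somewhere between $650,000 and $1,850,000.
print(project_study(500, "multifunction"))
```

Even this crude arithmetic makes the planning point in the text concrete: an agency that budgets for a 12-month study at the low end of the cost range could find both its schedule and its study budget roughly doubled or tripled.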
Given differences in experience levels between DOD and other agencies in conducting A-76 studies, these other agencies may need to devote greater resources to training or otherwise obtaining outside assistance in completing their studies. In addition to study costs, significant costs can be incurred in implementing the results of the competitions. Transition costs include the separation costs for civilian employees who lose their jobs as a result of competitions won by the private sector or when in-house organizations require a smaller civilian workforce. Such separation costs include the costs of voluntary early retirement, voluntary separation incentives, and involuntary separations through reduction-in-force procedures. Initially, we found that DOD budget documents had not fully accounted for such costs in estimating savings that were likely to result from their A-76 studies. More recently, we found that the department had improved its inclusion of study and transition costs in its budget documents. Selecting and grouping functions and positions to compete can be difficult. Because most services faced growing difficulties in or resistance to finding enough study candidates to meet their A-76 study goals, the goals and time frames for completing studies changed over time, and DOD ultimately approved strategic sourcing as a way to complement its A-76 program and help achieve its savings goals. Guidelines implementing the FAIR Act permit agencies to exclude certain commercial activities from being deemed eligible for competition, such as patient care in government hospitals. Additionally, as experienced by DOD, factors such as geographic dispersion of positions and the inability to separate commercial activities from inherently governmental activities could limit the number of inventory positions studied. It becomes important to consider such factors in determining what portions of the FAIR inventories are expected to be subject to competition.
Considerable questions have been raised concerning the extent to which DOD has realized savings from its A-76 studies. In part, these concerns were exacerbated by the lack of a reliable system for capturing initial net savings estimates and updating them as needed, and by the imprecision often associated with savings estimates. Our work has shown that while significant savings are being achieved by DOD’s A-76 program, it has been difficult to determine precisely the magnitude of those savings. Savings may be limited in the short term because up-front investment costs associated with conducting and implementing the studies must be absorbed before long-term savings begin to accrue. Several of our reports in recent years have highlighted these issues. For example, we reported in March 2001 that A-76 competitions had reduced estimated costs of Defense activities primarily by reducing the number of positions needed to perform those activities under study. This is true regardless of whether the government’s in-house organization or the private sector wins the competition. Both government and private sector officials with experience in such studies have stated that, in order to be successful in an A-76 competition, they must seek to reduce the number of positions required to perform the function being studied. Related actions may include restructuring and reclassifying positions and using multiskill and multirole employees to complete required tasks. In December 2000, we reported on DOD’s savings estimates from a number of completed A-76 studies. We noted that DOD had reported cost reductions of about 39 percent, yielding an estimated $290 million in savings in fiscal year 1999.
We also agreed that individual A-76 studies were producing savings but stressed difficulty in quantifying the savings precisely for a number of reasons:
- Because of an initial lack of DOD guidance on calculating costs, baseline costs were sometimes calculated on the basis of average salaries and authorized personnel levels rather than on actual numbers.
- DOD’s savings estimates did not take into consideration the costs of conducting the studies and implementing the results, which must be offset before net savings begin to accrue.
- There were significant limitations in the database DOD used to calculate savings.
- Savings become more difficult to assess over time as workload requirements or missions change, affecting program costs and the baseline from which savings were initially calculated.
Our August 2000 report assessed the extent to which there were cost savings from nine A-76 studies conducted by DOD activities. The data showed that DOD realized savings from seven of the cases, but overall less than Defense components had initially projected. Each of the cases presented unique circumstances that limited our ability to precisely calculate savings. Some suggested lower savings. Others suggested higher savings than initially identified. In two cases, DOD components had included cost reductions unrelated to the A-76 studies as part of their projected savings. Additionally, baseline cost estimates used to project savings were usually calculated using an average cost of salary and benefits for the number of authorized positions, rather than the actual costs of the positions. The latter calculation would have been more precise. In four of the nine cases, actual personnel levels were less than authorized. While most baseline cost estimates were based largely on personnel costs, up to 15 percent of the costs associated with the government’s most efficient organizations’ plans or the contractors’ offers were not personnel costs.
Because these types of costs were not included in the baseline, a comparison of the baseline with the government’s most efficient organization or contractor costs may have resulted in understating cost savings. On the other hand, savings estimates did not reflect study and implementation costs, which reduced savings in the short term. DOD has revised its information systems to better track the estimated and actual costs of activities studied, but it has not revised previous savings estimates. DOD is also emphasizing the development of standardized baseline cost data to determine initial savings estimates. In practice, however, many of the cost elements that are used in A-76 studies will continue to be estimated because DOD lacks a cost accounting system to measure actual costs. Further, reported savings from A-76 studies will continue to have some element of uncertainty and imprecision and will be difficult to track in the out-years because workload requirements and missions change, affecting program costs and the baseline from which savings are calculated. Although it comprises a relatively small portion of the government’s overall service contracting activity, competitive sourcing under Circular A-76 has been the subject of much controversy because of concerns about the process raised by both the public and private sectors. Federal managers and others have been concerned about the organizational turbulence that typically follows the announcement of A-76 studies. Government workers have been concerned about the impact of competition on their jobs, their opportunity for input into the competitive process, and the lack of parity with industry offerors to protest A-76 decisions. Industry representatives have complained about the fairness of the process and the lack of a “level playing field” between the government and the private sector in accounting for costs.
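The two baseline problems described above pull reported savings in opposite directions: an authorized-strength, average-salary baseline overstates savings, while omitting non-personnel costs from the baseline understates them. A stylized calculation with wholly hypothetical figures (only the 15 percent non-personnel share comes from this statement) illustrates both effects:

```python
# Wholly hypothetical activity, illustrating the two baseline issues
# discussed above (not actual DOD figures).
authorized, avg_cost = 100, 60_000   # baseline: authorized strength x average cost
actual, actual_cost = 90, 55_000     # actual staffing and per-position cost
meo_cost = 4_000_000                 # winning most-efficient-organization cost
nonpersonnel_share = 0.15            # share of MEO cost that is not personnel

# An authorized-x-average baseline overstates what was actually being spent:
reported_savings = authorized * avg_cost - meo_cost        # 2,000,000
actual_basis_savings = actual * actual_cost - meo_cost     # 950,000

# Comparing a personnel-only baseline against the full MEO cost understates
# savings; stripping the non-personnel share makes the comparison like-for-like:
like_for_like_savings = actual * actual_cost - meo_cost * (1 - nonpersonnel_share)

print(reported_savings, actual_basis_savings, int(like_for_like_savings))
```

In this contrived case the same competition could be scored as saving $2.0 million, $0.95 million, or $1.55 million depending solely on how the baseline is built, which is why standardized baseline cost data matter to the credibility of reported savings.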
Concerns also have been registered about the adequacy of oversight of the competition winners’ subsequent performance, whether won by the public or private sector. Amid these concerns over the A-76 process, the Congress enacted section 832 of the National Defense Authorization Act for Fiscal Year 2001. The legislation required the Comptroller General to convene a panel of experts to study the policies and procedures governing the transfer of commercial activities of the federal government from government personnel to contractor personnel. The panel, which Comptroller General David M. Walker chairs, includes senior officials from DOD, OMB, the Office of Personnel Management, private industry, federal labor organizations, and academia. The Commercial Activities Panel, as it is known, is required to report its findings and recommendations to the Congress by May 1, 2002. The panel had its first meeting on May 8, 2001, at which time it adopted a mission statement calling for improving the current framework and processes so that they reflect a balance among taxpayer interests, government needs, employee rights, and contractor concerns. Subsequently, the panel held three public hearings. At the first hearing, on June 11 in Washington, D.C., over 40 individuals representing a wide spectrum of perspectives presented their views. The panel subsequently held two additional hearings, on August 8 in Indianapolis, Indiana, and on August 15 in San Antonio, Texas. The hearing in San Antonio specifically addressed OMB Circular A-76, focusing on what works and what does not in the use of that process. The hearing in Indianapolis explored various alternatives to the use of A-76 in making sourcing decisions at the federal, state, and local levels.
Since completion of the field hearings, the panel members have met in executive session several times, augmented between meetings by work of staff to help them (1) gather background information on sourcing trends and challenges, (2) identify sourcing principles and criteria, (3) consider A-76 and other sourcing processes to assess what’s working and what’s not, and (4) assess alternatives to the current sourcing processes. Panel deliberations continue with the goal of meeting the May 1 date for a report to the Congress. This concludes my statement. I would be pleased to answer any questions you or other members of the committee may have at this time.

Contacts and Acknowledgment

For further contacts regarding this statement, please contact Barry W. Holman at (202) 512-8412 or Marilyn Wasleski at (202) 512-8436. Other individuals making key contributions to this statement include Debra McKinney, Donald Bumgardner, Jane Hunt, Nancy Lively, Stephanie May, and Judith Williams.

DOD Competitive Sourcing: A-76 Program Has Been Augmented by Broader Reinvention Options. GAO-01-907T. Washington, D.C.: June 28, 2001.
DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees’ Employment, Pay, and Benefits Vary. GAO-01-388. Washington, D.C.: March 16, 2001.
DOD Competitive Sourcing: Results of A-76 Studies Over the Past 5 Years. GAO-01-20. Washington, D.C.: December 7, 2000.
DOD Competitive Sourcing: More Consistency Needed in Identifying Commercial Activities. GAO/NSIAD-00-198. Washington, D.C.: August 11, 2000.
DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates. GAO/NSIAD-00-107. Washington, D.C.: August 8, 2000.
DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals. GAO/NSIAD-00-106. Washington, D.C.: August 8, 2000.
Competitive Contracting: The Understandability of FAIR Act Inventories Was Limited. GAO/GGD-00-68. Washington, D.C.: April 14, 2000.
DOD Competitive Sourcing: Potential Impact on Emergency Response Operations at Chemical Storage Facilities Is Minimal. GAO/NSIAD-00-88. Washington, D.C.: March 28, 2000.
DOD Competitive Sourcing: Plan Needed to Mitigate Risks in Army Logistics Modernization Program. GAO/NSIAD-00-19. Washington, D.C.: October 4, 1999.
DOD Competitive Sourcing: Air Force Reserve Command A-76 Competitions. GAO/NSIAD-99-235R. Washington, D.C.: September 13, 1999.
DOD Competitive Sourcing: Lessons Learned System Could Enhance A-76 Study Process. GAO/NSIAD-99-152. Washington, D.C.: July 21, 1999.
Defense Reform Initiative: Organization, Status, and Challenges. GAO/NSIAD-99-87. Washington, D.C.: April 21, 1999.
Quadrennial Defense Review: Status of Efforts to Implement Personnel Reductions in the Army Materiel Command. GAO/NSIAD-99-123. Washington, D.C.: March 31, 1999.
Defense Reform Initiative: Progress, Opportunities, and Challenges. GAO/T-NSIAD-99-95. Washington, D.C.: March 2, 1999.
Force Structure: A-76 Not Applicable to Air Force 38th Engineering Installation Wing Plan. GAO/NSIAD-99-73. Washington, D.C.: February 26, 1999.
Future Years Defense Program: How Savings From Reform Initiatives Affect DOD’s 1999-2003 Program. GAO/NSIAD-99-66. Washington, D.C.: February 25, 1999.
DOD Competitive Sourcing: Results of Recent Competitions. GAO/NSIAD-99-44. Washington, D.C.: February 23, 1999.
DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative. GAO/NSIAD-99-46. Washington, D.C.: February 22, 1999.
OMB Circular A-76: Oversight and Implementation Issues. GAO/T-GGD-98-146. Washington, D.C.: June 4, 1998.
Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved. GAO/NSIAD-98-100. Washington, D.C.: April 30, 1998.
Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act. GAO/GGD/NSIAD-98-167R. Washington, D.C.: April 27, 1998.
Defense Outsourcing: Impact on Navy Sea-Shore Rotations. GAO/NSIAD-98-107. Washington, D.C.: April 21, 1998.
Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD-98-115. Washington, D.C.: March 18, 1998.
Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD/AIMD-98-122. Washington, D.C.: March 13, 1998.

| The Department of Defense (DOD) has been at the forefront of federal agencies in using the OMB Circular A-76 process. In 1995, DOD made it a priority to reduce operating costs and free funds for other needs. DOD has also augmented the A-76 program with what it terms strategic sourcing--a broader array of reinvention and reengineering options that may not necessarily involve A-76 competitions. The number of positions--at one point 229,000--that DOD planned to study and the time frames for the studies have varied. Current plans are to study about 183,000 positions between fiscal years 1997 and 2007. Changes in the inventory of commercial activities and the current administration's sourcing initiatives could change the number of positions studied in the future. However, GAO has not evaluated the extent to which these changes might occur. DOD's A-76 program has faced several challenges that may provide valuable lessons learned for other federal agencies. These lessons include the following: (1) studies took longer than initially projected, (2) costs and resources required for the studies were underestimated, (3) selecting and grouping functions to compete can be difficult, and (4) determining and maintaining reliable estimates of savings were difficult. The Commercial Activities Panel is studying the policies and procedures, including the A-76 process, governing the transfer of commercial activities from government personnel to contractors, and has held public hearings on them.
The panel, comprised of federal and private sector experts, is required to report its findings and recommendations to Congress by May 2002. |
MCC undertakes a process each fiscal year to identify countries as candidates for MCA assistance. MCC uses per capita income data to identify two pools of countries as eligible candidates: low-income countries and lower-middle-income countries. Candidate countries also must not be statutorily barred from receiving U.S. assistance. MCC’s Board of Directors then uses quantitative indicators to assess a candidate country’s policy performance. To be eligible for MCA assistance, a country must pass the indicator for control of corruption and at least one-half of the indicators in each of the following three categories: ruling justly, investing in people, and encouraging economic freedom. To pass an indicator test, a country must score better than at least one-half of the other candidates (above the median) in its income group. If the policy performance of a country implementing a compact declines, the board can suspend or terminate the compact. Eligible countries may develop and propose projects, with guidance from MCC, with the goal of achieving economic growth and poverty reduction. MCC conducts an initial peer review of the country’s proposal, including an examination of proposed accountability and procurement structures, project scope, preliminary cost estimates, and the feasibility of implementing projects within the 5-year compact period. MCC also may provide 609(g) funds to the country to assist in compact development. If MCC accepts the proposal, it negotiates and signs a compact with the eligible country, committing the full amount of the compact. After compact signature, the MCA completes additional agreements, budgets, and plans prior to entry-into-force. At entry-into-force, MCC obligates and begins disbursing compact funds, and compact implementation begins. 
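The indicator test described above is, in effect, a simple decision rule. The following Python sketch is illustrative only; the category and indicator names and the data layout are placeholders of our own, not MCC's actual indicator set:

```python
# Illustrative sketch of the MCA eligibility rule described above: a country
# passes an indicator by scoring above the median of its income-group peers,
# and is eligible only if it passes control of corruption and at least half
# of the indicators in each policy category. Names are placeholders.
from statistics import median

def passes_indicator(score, peer_scores):
    # "Better than at least one-half of the other candidates" = above median.
    return score > median(peer_scores)

def is_eligible(country, peers):
    # country: {category: {indicator: score}}
    # peers:   {category: {indicator: [scores of other candidates]}}
    # The control of corruption indicator must be passed outright.
    if not passes_indicator(country["ruling_justly"]["control_of_corruption"],
                            peers["ruling_justly"]["control_of_corruption"]):
        return False
    # At least one-half of the indicators in each category must be passed.
    for category, indicators in country.items():
        passed = sum(passes_indicator(score, peers[category][name])
                     for name, score in indicators.items())
        if 2 * passed < len(indicators):
            return False
    return True
```

Note that, under this rule, the corruption indicator acts as a hard gate, while each category is subject to a simple majority-style threshold.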
Following its internal reorganization in October 2007, MCC revamped its compact development process to include greater up-front engagement with eligible countries and assistance in conducting needed studies and establishing management structures. MCC’s 2010 Congressional Budget Justification notes that it revised the phases of compact development in an effort to address the challenges and problems it encountered with current compacts. The three compacts we selected for review vary in the type and size of projects funded, but each devotes more than one-half of compact funds to infrastructure projects, such as roadways, bridges, and ports. Figures 1, 2, and 3 provide compact obligations by country. As of August 2009, these three compacts were in the fourth year of their 5-year implementation periods. MCC has developed management and control guidance and structures to implement the statutory requirements for fiscal accountability and for open, fair, and competitive procurements. MCC and MCAs have established processes for controls over compact funds; the procurement of required goods, works, and services; and the development and management of contracts after award. MCC compacts and related documents include sections on fiscal accountability that describe the agreement between MCC and the recipient country in areas such as financial management and procurement practices. According to MCC’s Fiscal Year 2007 Guidance for Compact-Eligible Countries, two key entities are generally involved in fiscal accountability. First, the country must authorize an accountable entity to oversee the MCC program and its components, allocate resources, oversee and implement a financial plan, approve expenditures and procurements, and be accountable for MCC program results. Second, the compact typically will require a fiscal agent for MCC-funded activities that is responsible for certain aspects of fiscal accountability, such as funds controls and, in some cases, procurement management.
The MCC fiscal accountability framework is depicted in figure 4. To address financial management, MCC requires each MCA to adopt a FAP that clearly documents the policies and procedures, including internal controls, that will help ensure appropriate fiscal accountability over the use of MCC-provided funds. In its Fiscal Year 2007 Guidance for Compact-Eligible Countries, MCC provided the MCAs with information that the MCAs should consider when developing their policies and procedures. For example, the MCAs should ensure that, in developing procedures for their disbursements, they consider funds control and documentation (i.e., procedures for authorizing, verifying, and recording a transaction); separation of duties and other internal controls (i.e., procedures for segregating approval and processing duties); and procedures related to the reconciliation of funds. MCC, as a component of this framework, is responsible for reviewing and approving certain policy documents and for ensuring that financial controls are adequately structured and implemented for each country. MCC assesses the status of financial management at the compact country through the review of reports, in particular annual financial audits of the MCA financial statements and Quarterly Disbursement Reports. MCC requires annual financial audits of the resources managed by the MCAs to assess whether funds received and costs incurred are recorded in conformity with the terms of the compact agreement and generally accepted accounting principles. MCC also requires Quarterly Disbursement Requests and Reports from the MCAs, which describe the funds used in the past quarter and the estimated expenses requiring MCC funding for the next quarter and beyond. After reviewing and approving the quarterly disbursement requests, MCC authorizes disbursement of the funds for the next quarter. 
Prior to fiscal year 2008, MCC requested that the Department of the Treasury (Treasury) transfer compact funds into the MCA bank accounts (i.e., permitted bank accounts) for redisbursement by the MCAs to their contractors. According to MCC officials, in September 2008, MCC started using the Treasury’s Common Payment System to make direct payments to the MCA contractors. However, as shown in figure 4, countries still maintain permitted bank accounts that are used to redisburse funds received from MCC. Thus, the MCAs may continue to internally process and manage project or program payments through the use of the country’s permitted bank account, which receives funds during MCC’s quarterly disbursements. As part of the country’s proposal for compact funding, the MCAs identify their approach to procurement. Prior to May 2007, the guidelines used by the MCAs were negotiated and documented in a procurement agreement—a supplemental agreement to the compact. MCC currently uses standard procurement guidelines, based on World Bank guidelines, and requires their use in compacts, unless it specifically permits alternative procedures. MCC also has developed several guidance papers that assist countries in implementing the standard procurement guidelines. The MCAs may contract with a procurement agent to perform key procurement functions. Figure 5 summarizes MCC’s compact procurement process. The MCAs manage compact procurements, but MCC retains review authority at points in the process, including procurement planning, prequalification, bid evaluation, and proposed contract award. MCC’s guidelines require both MCA and MCC approvals at up to three levels: (1) the MCA procurement director, (2) the MCA governing body, and (3) MCC. The level of review depends on the value and method of the procurement. Higher-value procurements and those using less-competitive methods generally require more second- and third-level reviews and approvals.
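The tiered approval scheme can be pictured as a routing rule from a procurement's value and method to the approvals it needs. The dollar thresholds and method names in this sketch are hypothetical, since the report does not give MCC's actual cutoffs; only the pattern (higher value and less-competitive methods trigger more levels) comes from the text:

```python
# Hypothetical sketch of routing a procurement to the three approval levels
# described above. The dollar thresholds and method names are invented for
# illustration; MCC's guidelines define the real cutoffs.
COMPETITIVE_METHODS = {"international_competitive_bidding",
                       "national_competitive_bidding"}

def required_approvals(value_usd, method):
    levels = ["MCA procurement director"]      # level 1: always required
    if value_usd >= 100_000 or method not in COMPETITIVE_METHODS:
        levels.append("MCA governing body")    # level 2: higher value or less competition
    if value_usd >= 500_000 or method == "sole_source":
        levels.append("MCC")                   # level 3: highest value or sole source
    return levels
```

Under a rule of this shape, a small competitively bid purchase stops at the procurement director, while a sole-source award of any size climbs to all three levels.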
Before MCC published standardized guidelines, the Honduras, Georgia, and Cape Verde procurement agreements required the first approval to be by the MCA management, and the second approval to be by the MCA governing body, or in the case of Cape Verde, by a special Procurement Review Commission consisting of Cape Verde government officials. To conduct oversight of large infrastructure projects managed by the MCAs, MCC reviews key documents, such as bidding packages, contract documents, technical project requirements, and work plans. In general, MCC’s implementation process for infrastructure contracts and projects requires that the MCAs have individual project directors—for example, a roads director—who oversee outside implementing entities and project management consultants. MCC also requires the MCAs to engage the services of a project management firm or an implementing entity to help manage compact projects before receiving project funding. MCA independent construction supervisors conduct oversight of day-to-day construction and the activities of the construction contractors to ensure compliance with contract requirements. MCC’s Implementation Support Team (IST) and resident country director, aided by MCC’s own independent engineers, monitor progress of the construction works as managed by the MCAs and executed by their contractors. To report progress, the MCAs prepare quarterly reports to MCC. MCC’s deputy vice presidents hold quarterly country portfolio reviews during which the IST reports on implementation progress as well as issues and concerns. Figure 6 depicts the oversight, management, and contractual relationships between MCC, the MCA, and their contractors for infrastructure projects. The MCAs have made progress in implementing policies and procedures for managing their administrative and operating expenses.
However, our review of these policies and procedures, as documented in each country’s FAP, found gaps in the design of the policies and procedures, which prevented the establishment of an adequate internal control structure. In addition, our tests of transactions at the three MCAs showed that processed financial transactions did not consistently comply with the MCAs’ established controls, resulting in transactions that lacked proper approvals and adequate documentation. During our review of the three MCAs we visited, we found that each entity had documented policies and procedures in their FAPs as required by MCC. However, travel and payroll policies in two of the three countries we visited were incomplete or did not address key procedures or controls. In addition, the FAPs in all three countries lacked policies and procedures related to disbursements for each MCA’s main project or program expenses. The lack of adequate and comprehensive policies contributes to internal control structures that increase the risk of fraud, waste, and abuse in MCC-funded projects. For example: Travel policies in Honduras and Cape Verde lacked key requirements for supporting documentation. For example, in Honduras, travel policy allowed employees to be paid for lodging and per diem for local or international travel prior to the trip but did not require staff to submit detailed documentation related to hotel or airline flight receipts upon return to document the completion of travel. In addition, the policy did not address certain key issues, such as a business class airfare policy. Similarly, Cape Verde’s travel policies and procedures did not require staff to submit documentation related to hotel expenditures. Furthermore, Cape Verde’s FAP authorized business class airfare for travel of 9 hours or more, regardless of whether stops were made overnight for business or personal reasons. 
Payroll policies in Honduras and Cape Verde did not include a requirement for staff to prepare individual time sheets or other documentation that could be used by direct supervisors to verify actual hours worked before payroll was processed. Although payroll at these countries is based on contracted salaries, we could not determine from the documentation available whether staff members had actually worked the compensated hours. The FAPs in all three countries lacked policies and procedures for authorizing and paying major program expenses, such as payment for road resettlement, investments, grants, and credit line transactions. These program expenses are often managed by an implementing entity and comprised some of the largest disbursements for the MCAs we reviewed. For example, in Honduras the credit line and grant disbursements totaled $9.9 million, or 29 percent of the total disbursement amount of $33.8 million for the period we reviewed. Specific program or project requirements could be found through extensive reviews of various agreements between the MCAs and their implementing entity or other external guidance. However, key controls related to the disbursement approval process and documentation requirements for these transactions were not documented in the countries’ FAPs. It is important to include the relevant controls for these activities in the FAP to ensure that the MCAs have an adequate structure in place to efficiently manage their projects and provide a central point of reference for all documentation and approval requirements. The lack of comprehensive policies and procedures at the MCAs is the result of limitations in the initial guidance that MCC provided to the three MCAs. 
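As a quick check of the Honduras figure cited above (credit line and grant disbursements of $9.9 million out of $33.8 million in total disbursements for the period reviewed):

```python
# Verifying the reported share of credit line and grant disbursements in
# Honduras for the period reviewed.
credit_and_grants = 9.9e6       # dollars
total_disbursements = 33.8e6    # dollars
share = credit_and_grants / total_disbursements
print(f"{share:.0%}")  # prints "29%", matching the reported figure
```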
MCC’s initial Guidance for Compact-Eligible Countries primarily outlined the responsibilities of MCC and the recipient government in matters related to financial management and provided general guidance about the foundation of the policies and procedures to be developed. For example, the guidance stated that (1) procedures must be in place to ensure that disbursements are executed in accordance with the compact or related documents, (2) records must be maintained that provide clear support of a transaction, and (3) procedures must incorporate the principle of segregation of duties and internal controls. However, this guidance did not contain examples of the policies and procedures that the MCAs could implement to ensure an adequate fiscal accountability structure. For example, more specific or detailed guidance on payroll, travel, and inventory controls would have assisted the MCAs in developing comprehensive policies and procedures. According to MCC, to help the MCAs comply with their responsibility for developing their FAPs, MCC’s fiscal accountability directors often worked hand-in-hand with the MCAs and fiscal agents while drafting their initial guidance. The directors also collaborated with colleagues who worked on other countries’ FAPs to help ensure that the major internal controls and critical FAP elements were addressed. To help address shortcomings in the FAPs, in November 2008, MCC developed a FAP template with suggested policies and procedures to help compact countries strengthen their FAPs. The FAP template provides suggested policies and procedures regarding segregation of duties and asset management, as well as examples of financial controls in areas such as travel and payroll. According to MCC officials, the template is designed to be a guidance document that provides examples of how controls could be structured for different expense types.
For example, the template requires employees to submit time sheets for supervisory approval and travelers to submit hotel receipts for travel expenses. MCC management does not require compact countries to model their policies and procedures on the guidance provided in this template FAP or adopt its provisions because MCC delegates responsibility for implementing internal control to the countries’ accountable entities, which can tailor their FAPs to meet their needs. Rather, the MCC-developed FAP template serves as a reference point that compact countries can use when drafting their FAPs. For controls provided in the FAP and other MCA documents to be effective in preventing unauthorized or improper disbursements, the MCA management must ensure that control activities established in its policies and procedures are properly applied. However, our review of the MCAs’ compliance with established control activities in operational areas—such as travel, payroll, program- or project-related expenses, and inventory—identified instances where the MCAs did not consistently comply with established controls. These control deficiencies, together with inadequate monitoring of the MCAs’ implementing entities, increase the risk of fraud, waste, and abuse of MCC program funding. A random sample of travel disbursement transactions for each of the three MCAs we reviewed showed instances in which management failed to consistently comply with the controls described in the FAP’s travel policies and procedures, which resulted in improperly documented or approved travel disbursements. For example, trip reports, which the MCA management requires as evidence of travel completion, were not always provided in Honduras and Cape Verde. Specifically, all of the 33 prepaid travel disbursements we tested in Honduras lacked such supporting documentation.
In Cape Verde—which also requires other supporting documentation, such as boarding passes—we found that 19 of the 30 travel transactions we tested lacked documentation to support trip completion. Therefore, we could not determine whether the trips were completed or complied with the applicable authorization for these transactions. Travel policies for MCA-Honduras and MCA-Georgia require employees to obtain travel authorizations and provide receipts upon completion of travel for reimbursement transactions. In Honduras, for 9 of the 22 travel reimbursements we tested, the supporting documentation lacked certain required documents, such as boarding passes and hotel receipts. Three of the 22 travel reimbursements were made even though the travel authorizations did not have all of the required information. For these 3 transactions, documentation showed management approval, even though all required trip details were not properly documented. For MCA-Georgia, 8 of the 35 travel reimbursements we tested did not have certain documentation, such as hotel receipts or boarding passes, as evidence that the trips had taken place. Our MCA-Georgia sample also included one travel disbursement for a board member of the Georgia Regional Development Fund (GRDF), which did not reflect a reasonable effort to minimize costs charged to the investment fund. According to the GRDF travel guidelines, board members can travel in business class if the total length of the flights— including layovers, but excluding stopovers—exceeds 14 hours. According to the policy, a board member may add the time of a flight before a stopover to the time of a flight after a stopover to determine the flight’s total length. During our testing, we identified a transaction in which a board member booked two round-trip tickets for a board meeting in Tbilisi—one from his residence in Washington, D.C., to London, where he also has a residence, and one from London to Tbilisi. 
The trip included a 36-day stopover in London after the board meeting, but before the board member traveled back to Washington, D.C. The ticket from London to Tbilisi, a 5-hour flight, was booked in premium class, and the total cost of the ticket was $3,640, justified by the 14-hour exception. However, a 36-day stopover in London should have made the traveler ineligible for business class travel under the 14-hour rule. Although this travel was made in accordance with the GRDF guidelines as written, it did not reflect a reasonable effort to minimize anticipated costs to the investment fund. Our review of the implementation of payroll controls, using a random sample of disbursements, identified several instances in which payroll disbursements were made without adequate documentation or approvals as required in the FAPs. Our testing in Georgia determined that 4 of the 62 payroll transactions we tested lacked direct supervisor approval on the time sheets, and 59 of 62 transactions lacked the approval and certification of the human resources manager, as required by MCA-Georgia’s FAP. For Cape Verde, we were unable to trace disbursements to the contracted salary amount for 4 of our 15 sample items because employee files were not always updated to reflect annual cost-of-living increases. Program- and project-related expenses include payments disbursed by the MCAs for grant expenses, resettlement expenses, investment funds, and other operating expenses. Many program- and project-related expenses are managed by an implementing entity hired by the MCAs to oversee project implementation. During our testing, we identified issues related to incomplete documentation and a lack of management approval of these expenses. The lack of adequate management reviews and a poor control environment in these areas resulted in unsupported and questionable costs related to disbursements at the three countries we tested. Grant expenses.
During our testing of grants made by the MCAs, we observed payments to beneficiaries that lacked adequate evidence that certain prerequisites were met. We also identified inconsistencies in the documentation provided as support for the transactions. During our testing in Honduras, we identified 20 instances from a random sample of 53 grant disbursements where the forms provided to the MCA as evidence of receipt by the beneficiary of the agricultural equipment items were not signed by the beneficiary or were signed by the contractor responsible for delivering the goods to the beneficiary. Thus, we could not determine whether the beneficiary had received the goods. In Georgia, our testing identified 7 instances from a random sample of 54 grant disbursements in which beneficiaries did not certify, as required by the grant agreements, that certain milestones were met before they received funds. Furthermore, several of the samples we tested had different beneficiary signatures on the payment request form and the grant agreement documents. The most recent audit of MCA-Georgia, performed by their independent audit firm, also identified significant shortcomings in the supporting documentation for grant disbursements. Resettlement expenses. Resettlement payments compensate landowners for property used for the MCA projects, such as road and pipeline construction. During our testing of resettlement disbursements in Honduras, we identified transactions that lacked the documentation required to support the disbursement amount and the recipient’s eligibility to receive the funds. In 6 of the 25 transactions we selected in Honduras, the files had inadequate documentation to provide evidence that the beneficiary had received funds and did not include the beneficiary’s signature. In some cases, the beneficiary’s signature was not the same as that on other documents in the file. 
Several files had different signatures on documents that (1) evidenced acceptance of the resettlement offer by the beneficiary and (2) acknowledged that the beneficiary received the funds from MCA-Honduras. These control deficiencies occurred due to the absence of an MCA policy requiring confirmation of these signatures. According to MCA-Honduras officials, in some cases the officials were familiar with the beneficiaries and with those who had signed for them. Investment funds. MCA-Georgia established a fund that made investments in businesses that met certain criteria to further their development. The GRDF management agreement describes processes, such as board authorizations, investment fund goals, and documentation requirements, that should be met before investment payments are requested and approved. Our review of 5 investment transfers, totaling $3.7 million, showed that 2 transactions were processed without adequate documentation of the required board approvals. Also, 3 of the 5 investments did not have fully completed investment proposals before approval by the GRDF Board of Directors. Furthermore, two of the five payment requests made to the MCA fiscal agent lacked supporting documentation and required follow-up to ensure that GRDF personnel had provided the required documentation. In its semiannual audit covering the last 6 months of 2008, the MCA-Georgia auditor also reported that transactions related to the GRDF investment fund lacked adequate documentation. Other operating expenses. Other operating expenses include MCA disbursements for technical services, construction services, and office- related expenses. During our testing of these expenses, we found instances of inadequate documentation and approvals. For example, in Honduras, we identified 23 of 58 operating expense transactions that did not have the required supporting documentation, such as a Certificate of Delivery of Goods Report. 
As a result, for these items, there was no evidence that the goods or services were provided before the invoices were processed and payments were made to the contractors. We were able to verify the existence of 17 of the 23 items that did not have a certificate in the financial files. However, we were unable to verify the existence of the remaining 6 items. In Georgia, MCA procurement officials had not properly approved 3 of the 58 transactions we tested. In addition, 8 of the 58 transactions were not supported by adequate documentation. For these 8 transactions, we could not determine whether payments were made in accordance with the applicable contracts because the invoices were insufficiently detailed. For example, one invoice requested payment for “fourth quarter . . . under services agreement,” with no additional information provided. In addition, in its June 2009 report, the MCA-Georgia auditor reported $1.2 million in questioned costs due to a similar lack of supporting documentation for one road construction project. The auditor also noted significant shortcomings in the supporting documentation for interim payment applications of the civil works performed by the contractor. In Cape Verde, we found that, in 4 of the 37 technical services transactions tested, amounts disbursed to a contractor did not agree with the provisions of the applicable contract. For these transactions, 4 payments were made on one contract that did not have a payment schedule that listed the deliverables to be provided for MCA-Cape Verde to initiate payment. As a result, we could not determine whether the correct amounts were paid for services rendered for the invoices we examined. Our testing of a random selection of assets included in inventory identified several instances where documentation was not in compliance with inventory policy and procedures, as required in the FAPs. 
As a result, we could not always determine whether the items provided were the same as the items in the asset listing. For example, of the 39 inventory items we tested in Georgia, we were unable to determine whether 15 of these items were the same as the items described in the inventory list due to poor recordkeeping, such as incomplete asset information, lack of asset tags, and inadequate serial number tracking. These 15 items included 7 computers and 1 cell phone. MCA-Georgia auditors also reported in their semiannual financial audits that inventory and asset management was a problem, citing shortcomings in recordkeeping and asset tagging and inaccurate or incomplete recording of asset movements and changes in custody. The auditors recommended that the MCA fully implement the asset management procedures described in MCA-Georgia’s Asset Management Manual. In addition, MCA-Georgia and MCA-Cape Verde reported instances of lost or stolen inventory items, such as laptops or other electronic equipment, indicating the need for improved property safeguarding controls. The MCA-Georgia fiscal agent stated that it could not identify 16 items in its last MCA-wide inventory process in December 2008. Among the 16 items were 4 computers and 4 cell phones. Subsequent to our visit, the MCA fiscal agent performed another inventory count in May 2009 and located some of the missing items. Furthermore, MCA-Cape Verde officials stated that after-hours thefts had resulted in a number of missing laptops and a projector, which were still missing at the time of our fieldwork in May 2009. MCC has increased standardization of the MCA procurement guidelines, which were initially determined on a country-by-country basis. In the most recent version of its procurement guidelines, released in July 2008, MCC reduced the number of approvals required from MCC and the MCA while at the same time requiring postprocurement reviews to supplement MCC oversight.
The MCAs we assessed generally adhered to MCC’s procurement guidelines, although they did not fully comply with some requirements, such as contractor eligibility and price reasonableness determinations. In addition, we found that when the MCAs delegated procurement responsibility to outside entities, the procedures used by these entities were generally consistent with MCC’s procurement framework. MCC has increased standardization of the MCA procurement guidelines. In its initial compact country procurement agreements, MCC permitted countries to select their own procurement guidelines but reviewed them to determine whether they met MCC requirements for open, fair, and competitive procurement. In May 2007, MCC issued standardized procurement guidelines to simplify country processes, according to MCC officials, and now requires their use in all new compacts. MCC officials also said that using a standardized procurement framework encourages more firms to bid on MCA procurements, because they become familiar with MCC requirements and do not have to adjust to new ones for different MCAs. MCAs in each of the three countries we examined have modified the procurement framework they used while implementing compacts. The MCAs in Honduras, Georgia, and Cape Verde all began their compacts using country-specific procurement guidelines. MCC officials told us that Honduras and Georgia switched to MCC’s standard guidelines in May 2007 and August 2008, respectively. MCA-Cape Verde has continued to use its own procurement guidelines because most of its large procurements were already complete, according to MCC and MCA-Cape Verde officials. According to MCC officials, MCC’s initial level of involvement in procurement development and review was unsustainable, especially as MCC’s compact portfolio grew.
According to MCC officials, they were getting “bogged down” looking at smaller procurements, and they concluded that the MCAs’ governing bodies were likewise required to review too much detail within individual procurements. MCA country officials with whom we spoke also stated that the initial review process delayed procurement and thus the project schedule. For example, as early as 2006, MCA-Cape Verde was concerned about the mismatch between the number of reviews required of the Procurement Review Commission and the time frame of projects. In the most recent version of its procurement guidelines, released in July 2008, MCC introduced the “Implementation Model Framework” as the standard procurement model for all compact countries and reduced the number of required approvals by MCC. This model formalizes the extent to which MCC is involved in procurements and further reduces the number of points at which MCC approvals are required. For countries transitioning to this model, the MCAs’ procurement procedures do not change, but MCC plays more of an oversight role. MCC’s July 2008 version of the procurement guidelines also establishes a 2-tier system of approvals that allows for even fewer reviews of procurement actions for countries with a good procurement record. Schedule A of the 2-tier system represents the initial level of review for most countries, which is referred to as implementation support. As countries gain experience and MCC gains confidence that they are implementing MCC procurement guidelines, MCC permits the country to transition to Schedule B, which is referred to as oversight. See appendix II for a discussion of the oversight model and a comparison of the review required under Schedule A and Schedule B to the review required in previous procurement guidelines. When it reduced the number of required pre-approvals, MCC also formalized a separate postprocurement review process to supplement its oversight of the MCA procurements.
In July 2008, MCC began to conduct yearly interim activity reviews (IAR) of compact countries. The IARs assess a nongeneralizable random sample of procurements from each country for compliance with procurement and contract administration processes. As of August 2009, MCC had conducted IARs of eight compact countries. In the three IARs for the countries we examined, MCC officials reported that the procurement files were in “excellent,” “good,” and “acceptable” condition. These three IARs reviewed a total of 29 procurements. In the case of Cape Verde, critical issues identified by the IAR included the failure to create a Procurement Implementation Plan and to conduct price reasonableness analyses. During our fieldwork, we discussed the IAR findings with the procurement director in Cape Verde, who reported that the MCA was addressing the issues identified in the IAR and provided documentation of additional processes.

MCC guidelines for audits of accountable entities require that the MCA auditors assess and report on procurement compliance. According to the guidelines, the audit’s specific objectives should include testing compliance with the procurement agreement, procurement guidelines, and the FAP. We reviewed seven audit reports for Georgia, Honduras, and Cape Verde and found no reporting of material procurement-related findings, although some audit reports did not clearly state that they included procurement within the scope of the audit. In all, we reviewed 24 audit reports for MCA countries; three of these reports had procurement-related findings. One of the three had seven findings, another had five, and the last had one. The other 21 audit reports did not contain any reporting related to procurement.

MCAs we assessed generally adhered to MCC Procurement Guidelines but have not documented that they fully complied with some requirements.
On the basis of our review of a stratified random sample of 138 procurement files, we estimate that the three MCAs we reviewed obtained almost all of the required approvals from MCC in the procurement process and that they obtained approvals from the MCA governing body in most cases. We also estimate a high rate of MCA compliance with MCC procurement requirements for using a competitive bidding process to conduct procurements; advertising procurements and preparing bid documents; using MCC procedures for opening bid documents, documenting the reasons for disqualified bids, and selecting the winning bidder; and documenting receipt of the good or service procured. Table 1 provides additional details on the procurement requirements we tested and our estimated results. Appendix III provides more information on the specific findings for the procurement criteria we tested. Despite general compliance with MCC procurement guidelines, the MCAs did not document contractor eligibility and evaluation panel impartiality in all cases, as follows:

Contractor eligibility: MCA-Georgia documented contractor eligibility for only 25 percent of the procurements it conducted in fiscal year 2008. In addition, we estimate that MCA-Honduras documented contractor eligibility in about 74 percent of the procurements it conducted in fiscal year 2008. MCC requires that the MCAs conduct contractor eligibility reviews for all procurements. Parties to be excluded from MCC contracts include firms declared ineligible under World Bank anticorruption policies and U.S. antiterrorist policies. MCC has taken steps to improve eligibility verification and documentation by issuing guidance for contractor eligibility in February 2008. MCC’s guidance was prompted by a U.S. Agency for International Development, Office of Inspector General, assessment of procurement that found that MCAs had not fully complied with guidance on determining contractor eligibility.
Impartiality of the evaluation panel: We found that all three MCAs we reviewed documented the impartiality of the bid evaluation review panel less than 90 percent of the time. For example, we estimate that MCA-Cape Verde documented the impartiality of the technical evaluation panel for 74 percent of all procurements in fiscal year 2008 and that MCA-Honduras documented technical evaluation panel impartiality for 80 percent of procurements. Our review of all procurements in Georgia in fiscal year 2008 found that MCA-Georgia documented impartiality of the technical evaluation panel 86 percent of the time. Although MCA compliance was below 90 percent for evaluation panel impartiality, the margin of error on our estimates for MCA-Honduras and MCA-Cape Verde may bring them close to 90 percent compliance.

Additionally, we found that the MCAs we reviewed did not consistently document their evaluation of the reasonableness of prices contained in the winning bid. MCC guidance states that the MCAs should conduct and document price reasonableness analysis for all procurements to ensure that no more than a commercially reasonable price is paid to procure goods, works, and services. While MCC guidance states that competitive bids or bids close to the budget, among other criteria, may be used to identify a price as reasonable, the MCAs’ procurement directors generally did not document this determination in their files or the evaluation reports. MCA procurement directors believed that they did not need to document price reasonableness if they received multiple competitive bids for a procurement or if bids were within the planned budget. An MCC review of an MCA-Cape Verde procurement, conducted in February 2009, also found that the MCA had not conducted a price reasonableness analysis.

When the MCAs delegated procurement responsibility, procurement procedures used by the outside entities were generally consistent with MCC procurement principles and guidance.
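The observation above that the margin of error on some estimates may bring them close to 90 percent compliance can be illustrated with a standard normal-approximation confidence interval for a proportion. This is a minimal sketch only: it assumes, for illustration, a simple random sample and a hypothetical sample size of 50 procurements, whereas the review actually used a stratified random sample of 138 files (which requires stratum-weighted variance estimates).

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for an estimated proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

# Hypothetical: 80% documented impartiality in a sample of 50 procurements.
low, high = proportion_ci(0.80, 50)
print(f"estimated compliance: 80% (95% CI: {low:.0%} to {high:.0%})")
# → estimated compliance: 80% (95% CI: 69% to 91%)
```

Note that even with an 80 percent point estimate, the upper bound of the interval exceeds 90 percent, which is the sense in which a below-90-percent estimate can still be "close to" 90 percent compliance.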
For all three MCAs we visited, procurements were generally conducted by the MCA procurement staff or its contracted procurement agent. We found instances in which the MCA had used alternate guidelines or delegated procurement responsibility to an outside entity. For procurement of small works, MCA-Cape Verde used procurement guidelines developed by the Cape Verde Ministry of Industry and Transport (MIT) that did not use the same standard for price reasonableness as MCC. These differing standards led MIT to automatically discard bids that it considered “unreasonably low” but that would have been evaluated under MCC guidelines. In Cape Verde, procurements for road and bridge construction began before the compact entered into force. Although MCC reviewed and accepted the results of these MIT procurements, the MCA file did not contain a full record of the procurement procedure for the $3.4 million bridge procurement. MCA-Georgia used two outside entities to conduct procurements. Procurement responsibility for Regional Infrastructure Development projects was delegated to the Municipal Development Fund (MDF) of Georgia, and procurement responsibility for most procurements conducted for Agricultural Development Activity grant programs was given to the Citizens Network for Foreign Affairs (CNFA), the nonprofit organization managing the grant program. In the case of MDF, a March 2006 collaboration agreement between MDF and MCC lays out the procurement procedures that MDF is required to follow in conducting procurements financed entirely or in part by MCC. These procurement guidelines have the same requirements as the March 2006 MCA-Georgia procurement guidelines. However, MCC issued updated procurement guidance in 2007 and 2008 and did not modify the collaboration agreement to encompass these new requirements. For example, we found that MDF procurements did not meet the requirements for advertising and contractor eligibility that MCC issued in 2007 and 2008.
In the case of grants administered by CNFA, we found that MCA-Georgia has created a separate procurement process. CNFA relies on grantees to identify suppliers for goods and to provide price quotes from multiple suppliers showing that their chosen supplier has the lowest price. According to CNFA officials, CNFA staff check with the identified suppliers to verify that the prices provided by grantees to CNFA are accurately reported. However, CNFA staff do not conduct independent market research to ensure that the price estimates provided by grantees are reasonable and comparable with market prices. According to MCA-Georgia, grant recipients often live in rural areas and need to procure secondhand equipment, and thus they are often best equipped to identify existing suppliers.

Project status reports of MCC and MCA consultants indicate that the MCA projects have encountered problems, including delays, scope reductions, and cost increases. These problems are due, in part, to insufficient planning, escalation of construction costs, and insufficient MCC review. MCC is conducting oversight during implementation by monitoring project performance, establishing incentives for accountability, and using cross-functional teams to oversee and support the projects.

On the basis of our review of contractor reports in the three countries we assessed, we found that MCC-funded infrastructure projects were substantially delayed. For example: After receiving initial contractor bids in excess of the planned budget, MCA-Georgia restructured what it had planned to award as three large contracts for the road projects into six smaller planned contract lots, leading to a delay of at least 6 months to rebid and award the contracts. Under the second procurement, MCA-Georgia was able to award contract lots 2, 3, and 4 and parts of lots 5 and 6—rather than six full lots—within the available project budget.
At the time of our site visit in March 2009, the road construction contractors were 3 to 4 months behind on schedules of 24 to 30 months, in part because of contractor delays in getting labor, equipment, and field offices operational. One contractor also experienced delays related to the need to revise the designs, delayed preparation of construction working drawings, and slow coordination of the utility relocations. In July 2009, subsequent to our visit to Georgia, MCA-Georgia removed one roadway lot from one contractor’s contract, following its assessment that the contractor’s performance was unacceptable based on the contractor’s failure to make sufficient progress on the road, and awarded the lot to another contractor in an attempt to complete the road projects within the compact time frame.

Delays of up to 9 months occurred in constructing approximately 100 kilometers of the CA-5 highway project in Honduras. The delays were due in part to the MCA’s having to contract for additional topographic surveys needed to update the designs, revise designs to add travel lanes and road intersections, realign the road to minimize property resettlement requirements, and address contractor performance issues. At the time of our visit in December 2008, the construction contractor for two sections of the road was about 3 months behind schedule on contracts of 24 months in duration because of slow progress during the rainy season.

In Cape Verde, phase I of the port project was delayed 9 months. In addition, the construction contractor for the roads project was granted an 11-month extension on a 30-month contract. The Cape Verde bridges project was extended from 12 to 30 months.
The reasons for delays varied across the three projects and included, in some instances, procurement delays, the inability of the MCA to provide site access for the contractor to begin work, and having to improve designs that were not ready for implementation. Our review of contractor reports indicated that these MCAs reduced the scope of projects, including the following:

• MCA-Georgia reduced the original compact scope for the award of 245 kilometers of road construction contracts to just over 170 kilometers because the full scope of the contracts could not be awarded within the initial compact budget.

• In Honduras, MCC was no longer exclusively funding the construction of the CA-5 highway project as planned under the compact. One of the four road sections of the CA-5 highway could not be awarded within the funding available through the compact, nor could construction be completed within the 5-year window. As a result, the scope of the roadwork as funded by MCC was reduced and, at the time of our review, compact funding covered the cost of approximately 65 kilometers along portions of three sections of road. The section not funded by MCC was being funded through a loan from the Central American Bank for Economic Integration to the government of Honduras.

• Two of five roads in Cape Verde were eliminated from the contractor’s project scope due to increased costs. In addition, the construction of phase II of the MCA-Cape Verde port project could no longer be funded under the available compact budget and, at the time of our review, was on hold until outside donor assistance could be used.
Examples of MCA contract cost increases in the three countries we reviewed include the following:

• In MCA-Georgia, the independent construction supervisor estimated that the final contract price for one road contract, originally awarded at $65.0 million, would rise by 15 percent, or nearly $10.0 million; another contract, originally awarded at $33.1 million, would rise by nearly 18 percent, or about $6.0 million.

• Changes in contract costs totaling about $2.0 million—an approximately 17 percent increase—were approved on the Cape Verde roads project, which was originally awarded at about $11.0 million.

• Contract cost changes on the Cape Verde bridges contract have been approved for a total of about $750,000—approximately 23 percent—on a contract initially awarded at $3.3 million.

Our review of the three MCAs found that projects had to be redesigned and restructured because of insufficient planning before implementation, which led to delays. Our past work found that it is critical to set appropriate time frames for planning, design, and construction activities.

Insufficient planning. Insufficiently developed project designs led to redesign and delays in contract award and implementation in each of the three compact countries. For example, in five of the six projects we reviewed (six contracts for the roads projects in the three countries and two contracts for the port and bridges projects in Cape Verde), we found that insufficient planning—principally due to poor topographic surveys—led to inadequate designs. The redesign of projects delayed the bid process while designs were revised and, in other cases, resulted in significant modification of designs after contract award. Industry experts have found that actual costs for projects with limited planning can range from 20 to 30 percent higher than estimated.
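The contract cost increases cited earlier in this section follow from simple percentage-growth arithmetic on the reported award amounts. A quick sketch, using only the dollar figures stated in the report:

```python
def pct_increase(award, increase):
    """Cost growth expressed as a percentage of the original contract award."""
    return 100 * increase / award

# Award amounts and cost changes as reported (in millions of dollars).
print(round(pct_increase(65.0, 10.0)))   # Georgia road contract: ~15 percent
print(round(pct_increase(33.1, 6.0)))    # second Georgia contract: ~18 percent
print(round(pct_increase(3.3, 0.75)))    # Cape Verde bridges contract: ~23 percent
```

These match the percentages the report gives for those contracts, which is why "nearly $10.0 million" corresponds to 15 percent of the $65.0 million award.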
The following examples highlight some of the problems, reported by MCC, the MCAs, and their consultants, that our review found:

• MCA-Georgia issued contract variation orders to address identified shortcomings in the project design. After beginning construction, the contractor found discrepancies between the design and the existing roadway conditions. These discrepancies required additional topographic engineering surveys, more fully developed designs, and additional construction work. In addition, trees between 8.0 and 15.9 centimeters are required under Georgian law to be cut rather than uprooted with heavy equipment, but this requirement was not identified as a payable item in the contract documents. Furthermore, the extent of the work required to relocate utilities was not sufficiently addressed in the design, according to the contractor. MCC officials noted that MCA-Georgia had contracted separately for a utility relocation survey that proved to be deficient.

• MCA-Honduras had to undertake additional topographic engineering surveys because earlier surveys were not sufficiently detailed or were unavailable. This issue contributed to a 4½-month delay in awarding contracts. In addition, fundamental planning decisions, such as adding travel lanes, interchanges, and safety features, were still under review during the design stage; these decisions took time to resolve and resulted in significant changes in scope.

• MCA-Cape Verde, after construction award, found that road designs accepted by the government of Cape Verde were of poor quality and inadequate as a basis for construction. The topographic information in the design was inadequate, and thus the designs inaccurately represented the extent of the work required. MCA-Cape Verde also found that bridge designs had to be revised after the award of the construction contract because the initial designs were not adequate for construction.
MCA-Cape Verde’s port project faced potential delays due to differences between the actual topographic and seafloor conditions and the conditions represented in the design drawings for required shore protection and the coastal road serving the port. Cost escalation. We found that cost escalation of construction materials and schedule delays associated with project redesigns also contributed to the need to restructure projects. For example, three road projects (six contracts) and the port project experienced cost escalation of construction material prices—especially oil, which heavily affects roadway construction costs. In its oversight role, MCC is not directly responsible for the development of cost estimates. However, our review of MCC Standards of Clearance indicated that MCC has a role in ensuring that the MCAs properly update—to include adjusting for the escalation of construction costs—and revalidate cost estimates before contract solicitation and throughout the project life cycle. Although MCC officials stated that the MCC project teams are knowledgeable about the MCA cost estimates and schedules, MCC does not have a formal policy governing their development and review and does not centrally track updates over a project’s life cycle. In addition, MCC does not issue guidance to the MCAs on assessing the extent to which cost escalation should be considered a risk factor and assessing its potential impact on planning, design, and construction schedules. MCC requires the MCAs to include an “owner’s contingency” in project cost estimates to cover unforeseen conditions and risks, such as cost escalation. MCC also reviews cost estimates of MCA projects quarterly as part of the disbursement request review process. Evidence we reviewed suggested that the MCAs’ initial cost estimates were not realigned when project scopes were revised or as prevailing market conditions changed. 
For example, the budget and cost estimates supporting the first MCA-Georgia road procurement—canceled because bids exceeded available project funding—were largely based on planning estimates that were 2 years old. In addition, the estimates did not sufficiently account for (1) cost escalation, (2) changes in scope and standards that occurred after the feasibility study, (3) weakening of the U.S. dollar, and (4) an increase in construction work worldwide that resulted in less competition. Design review. We found that MCC consultants’ reviews of designs before award of contract were insufficient. For example, one of MCC’s consultants characterized its design review as “big picture in nature” and “not to be considered a detailed review,” stating specifically that “building drawings were not completed and not reviewed,” and that “the cost estimate was not reviewed.” In contrast, our review of industry leading practices indicates that a well-organized, detailed review can ensure that design plans and specifications are sufficient for construction and will provide the contractor with sufficient information to prepare a competitive and cost-effective bid. MCC has taken some steps to modify its compact development process by increasing its assistance to support MCA planning for projects before implementation. Previously, final feasibility studies, environmental assessments, and detailed project planning were typically not completed until after entry-into-force. Under the new process, that type of planning is more likely to be completed before entry-into-force. See figures 7 and 8, which show MCC’s prior and current compact development and implementation processes. MCC officials stated that they are making greater use of MCC 609(g) funding authority and Compact Implementation Funds to support these activities earlier in the process for more recent compacts. 
MCC also noted that it expects to make greater use of Compact Implementation Funds to assist the countries in preparing their procurement processes and to begin final project design in cases where planning feasibility studies are completed.

In all nine contracts that we examined—the three road projects (six contracts), the port project, the bridges project, and the pipeline project—we found that MCC’s Implementation Support Team (IST) conducts oversight and monitors project performance during compact implementation. We also found that MCC has a resident country director (RCD) in each compact country. The RCD monitors MCA management and project implementation as MCC’s representative to the government in the compact country and at the board meetings of accountable entities. The RCD is not a voting member but provides oversight of MCA decisions about contract awards and contract changes affecting cost, schedule, and scope. MCC requires the MCAs to prepare implementation plans that include program and project work plans and uses independent consultants to monitor MCA reporting status against those plans. MCC also reviews key documents, such as bidding packages, contract documents, and technical project requirements. We found that MCC staff in-country and in Washington, D.C., visit projects firsthand to confirm the MCAs’ reporting and assessment of project status. MCC officials stated that, to integrate oversight efforts, they schedule consultant site visits to coincide with those of headquarters staff, to the extent possible. According to MCC officials, communication occurs daily between the RCD, the deputy RCD, their counterparts in the MCAs, and the MCAs’ individual project directors. The MCAs prepare quarterly progress reports for MCC. The RCDs discuss project performance—usually weekly—with the MCAs, including discussions about scope, cost, schedule, and other project-related issues.
The RCD’s monitoring is reported informally to MCC headquarters on an ongoing basis and formally in quarterly country portfolio reviews with MCC’s deputy vice presidents. During those reviews, the IST also reports on implementation progress. Under the compact model of country ownership, MCC does not have the authority to direct the MCA contractors that implement MCA projects but works with the MCAs, which direct contractors to take corrective actions. MCC, through provisions in the compacts and MCC Program Procurement Guidelines—and as outlined in MCC Standards of Clearance—has the right to review and approve MCA projects and contract documents and may direct the MCAs to ensure that (1) appropriate design standards and specifications are used, (2) schedules and cost estimates are prepared, (3) environmental and social assessments are made and incorporated into projects’ scopes, and (4) changes to contracts that increase the value by 10 percent or more are justified.

In Georgia, MCC’s engineers raised significant design and environmental concerns about the Naniani landslide site and the potential risk it posed to the pipeline project. MCC’s engineers reported that the existing pipeline ruptured in December 2006 due to a landslide and that a recurrence could damage the MCC-funded repairs. MCC’s consultant reviewed the geotechnical information and recommended that the pipeline be rerouted and that the rerouting be included in the project scope of the Georgian Oil and Gas Corporation. The recommendation was accepted and incorporated, and MCC and its consultant continued to monitor the project, conducting a follow-up inspection of the site in July 2008. MCC’s consultant reported that the pipeline was completed and in a location far better than the original one. The rerouting of the pipeline is shown in figure 9.
MCC’s compact framework sets out conditions that the MCAs must meet before receiving funds; these conditions act as incentives for establishing accountable organizational structures to implement country compacts. We found that, in all nine projects in the three countries in our review, MCC required that the MCAs engage the services of a project manager or an implementing entity to help the MCAs manage the infrastructure projects outlined in their compacts before they received project funding. In some instances, the MCAs contracted with a commercial project management consultant to act as a project manager on the MCAs’ behalf. In other instances, the MCA entered into a formal agreement with another government entity that acted as the implementing entity. For example, for its road construction projects, MCA-Georgia contracted with an international project management firm. To meet another condition for receiving project funding, MCC also ensures that the MCAs have accountable individuals to oversee the management of large infrastructure projects. MCC requires the MCAs to assign “project directors,” such as a roads director, to monitor implementing entities and outside project management contractors. In Georgia and Cape Verde—where the infrastructure projects reviewed included roads and bridges, a pipeline, and a port project—we found that MCC required the MCAs to have project directors for the different types of projects. In addition, we found that MCC required the MCAs to use independent construction supervisors to conduct oversight of day-to-day construction, including overseeing construction progress and the actions of the construction contractor to ensure compliance with contract requirements. In the case of the pipeline project in Georgia, the implementing entity acted as both the project manager and the independent construction supervisor.
MCC also requires the MCAs to conduct oversight of their project management units and projects through an MCA supervisory board generally comprising high-level government officials and representatives of the business sector and civil society. The board places additional high-level oversight and accountability on the performance of the project management units, the projects, and MCA contractors. The supervisory board must be briefed on challenges that require changes to the project scope, contract cost, schedule, or contractor. MCC also works with the MCAs’ supervisory boards to restructure projects when needed to keep them within their budgets and compact time frames. The board is required to approve changes that the MCA project management unit proposes and decisions about hiring or replacing staff when performance and accountability issues warrant a change. In one of the three countries we reviewed, MCC took action when it had concerns about the effectiveness of the MCA’s top-level management officer and worked with the supervisory board to see that the compact country changed the leadership of the MCA’s project management unit.

We found that MCC uses integrated cross-functional project teams—comprising headquarters’ IST and its independent engineering consultants—to provide technical expertise and operational support to MCC’s oversight and to the MCAs in implementing infrastructure projects. MCC headquarters personnel who support oversight include contracting, financial, legal, environmental, and engineering staff. MCC also has about 24 engineering and environmental consultants that it uses to support MCC project oversight reviews. On the basis of evidence contained in MCC’s independent engineers’ reports, MCC conducts reviews of project scope, cost, schedule, design and specifications, contractor performance, and environmental and safety issues.
In cases where individuals must be moved and property acquired to accommodate projects, MCC also conducts reviews to ensure that the MCAs comply with MCC resettlement policies. In addition to conducting technical reviews of projects, MCC independent consultants also report on the performance of MCA project management consultants and construction supervisors in conducting effective project and construction management. MCC works in challenging and resource-poor countries and has provided them with ownership and flexibility in the ways they can meet MCC’s statutorily mandated requirement to ensure fiscal accountability and open, fair, and competitive procurements. While the MCAs we examined have made progress in implementing policies and procedures for financial management, some gaps remain. Without additional specificity from MCC in its financial guidance, the MCAs may continue to use inadequate policies and procedures that do not reflect best practices in their internal financial management and in monitoring the financial control activities of their implementing entities. In addition, although the MCAs generally adhered to MCC procurement requirements, absent their consistent adherence to guidance on conducting and documenting price reasonableness analysis, MCC will not be able to ensure that it receives the best value in procurements. Finally, MCC is conducting oversight and has taken steps to advance planning for infrastructure projects. However, the process changes MCC has made will not address problems caused by shortcomings in the designs that were not discovered until after contract award and by cost estimates that did not sufficiently account for cost escalation associated with project delays and construction prices. Planning should be completed earlier so that the MCAs have more time to conduct effective design reviews and independent cost reviews. 
Otherwise, MCC risks funding MCA projects that cannot be completed within the 5-year compact time frame and within the allotted compact budgets. Earlier project planning and design and cost reviews will likely add to the cost and time required for planning and design, but should result in better designs, help to control costs, and reduce the challenges encountered during implementation.

To improve MCC’s financial controls, procurement practices, and contract management, we recommend that the Chief Executive Officer of the Millennium Challenge Corporation take the following five actions:

1. Revise MCC guidance to MCAs to require that MCA FAPs include comprehensive policies and procedures related to the MCAs’ financial transactions that are in accordance with best practices covering procedures such as authorizations, approvals, and key documentation of all transaction types.

2. Revise MCC guidance to MCAs to require that MCA FAPs incorporate policies and procedures related to disbursements of the MCAs’ primary project- or program-related expenses, including oversight procedures and responsibilities for MCA personnel in charge of monitoring and evaluating the implementing entities’ compliance with contract agreements.

3. Reinforce existing MCC guidance to MCAs on conducting and documenting price reasonableness analyses.

4. Establish a programmatic goal that MCAs conclude all project planning efforts—to include MCC final approvals of the MCAs’ final feasibility surveys, engineering surveys, environmental surveys, and resettlement studies—prior to entry-into-force, but not later than the point at which the MCAs issue contract solicitations.

5.
Require MCAs to obtain detailed reviews of project cost estimates—to include the extent to which risks to projects, such as cost escalation, schedule delays, and other issues, have been considered—and of project designs before contract solicitation for large construction projects to better ensure that projects can be successfully bid and built.

We received written comments on a draft of this report from MCC. In commenting on the draft, MCC accepted GAO’s recommendations and provided additional comments on some of our findings.

Regarding MCA financial controls, MCC accepted our recommendations and commented that some MCA travel and payroll policies did not require the documentation we looked for to verify expenses. However, without such documentation, we could not verify that travel actually occurred for the travel transactions or that employees worked the necessary number of hours for the payments made. Finally, MCC clarified certain aspects of the GRDF investment guidelines; accordingly, we have adjusted the report to reflect this clarification.

Regarding procurement, MCC accepted our recommendation and stated that it had now incorporated its existing guidance on price reasonableness analyses and contractor eligibility into MCC Procurement Guidelines so that they carry the weight of MCC policy. Furthermore, MCC procurement directors have been directed to reject any evaluation reports received from an MCA that do not include these determinations.

Regarding infrastructure planning and oversight, MCC stated that it accepted our recommendation that planning efforts be concluded prior to contract solicitation—ideally, prior to entry-into-force of the compact—and that it modified its processes beginning in fiscal year 2008 to require completion of feasibility studies and environmental assessments before compact signing. We are in the process of assessing the specific actions MCC has taken to address our findings.
MCC also accepted our recommendation that MCAs obtain detailed reviews of project designs and cost estimates but stated that it conducts a number of reviews in due diligence and prior to the release of design and bidding documents. While MCC conducts reviews, our assessment of the compacts we examined, all of which had significant design, cost, and schedule issues, indicates that the project review process can still be improved. For example, MCC could expand its reviews by soliciting specialized project management experience in risk analysis and scheduling. We have reprinted MCC’s comments, with our responses, in appendix IV. We also incorporated technical comments from MCC in our report where appropriate. We are sending copies of this report to interested congressional committees, the Chief Executive Officer of the Millennium Challenge Corporation, and other parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact David Gootnick at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

The fiscal year 2008 Consolidated Appropriations Act, Public Law 110-161, mandated that GAO review the financial controls and procurement practices of the Millennium Challenge Corporation (MCC) and its accountable entities and the results achieved by its compacts. For the purpose of this initial engagement, we focused on financial controls and procurement practices for MCC compacts and on the development, implementation, and oversight of contracts and projects at MCC and its accountable entities.
We assessed MCC’s overall framework for financial controls, procurement practices, and contract management through a detailed review of these areas at three MCC compact countries: Honduras, Georgia, and Cape Verde. While we cannot statistically project our findings to other countries on the basis of these three countries, we chose these countries because they accounted for approximately 39 percent of MCC’s disbursements at the end of fiscal year 2008. Intervening political events in other MCC countries also affected the selection of countries. To determine whether MCC’s financial controls help ensure accountability over compact country funding, we obtained an understanding of MCC financial requirements imposed on the country when compact agreements were signed. MCC delegates much of the development and implementation of internal control procedures and the fiscal oversight of its federal funding to the country’s Millennium Challenge Account (MCA) accountable entity. As a result, we focused our work on policies and procedures at the three selected MCAs, including internal controls related to their financial transactions, and on MCC’s oversight of this process. To assess the extent to which the MCAs had adequate policies and procedures for managing their operations effectively, we used MCC’s financial guidance and our Standards for Internal Control in the Federal Government. Specifically, we (1) obtained relevant policies and procedures as documented in the country Fiscal Accountability Plan (FAP) and determined whether the policies were comprehensive; (2) interviewed each MCA’s financial management staff to discuss additional control procedures not documented in the country FAP or other agreements; and (3) obtained additional documents, such as compact agreements or service contracts, to determine whether additional internal control information was included in these agreements.
While our internal control standards for the federal government are not binding for the MCAs, they are a statement of best practices, and adherence to these standards provides reasonable assurance that fraud, waste, abuse, and mismanagement will be prevented or promptly detected. To determine the extent to which the MCAs were effectively implementing their internal controls as described in their FAPs’ policies and procedures or other agreements, we gained an understanding of each MCA’s overall financial management structure, policies, and processes by interviewing MCA officials. Specifically, we:

- Conducted walk-throughs and interviews with each MCA’s financial management officials to identify relevant policies and procedures, including key internal control activities for its financial transactions.
- Performed tests of those control activities that we considered key in providing reasonable assurance that transactions were correct and proper, including:
  - segregation of duties related to the approval and authorization of payments: dividing key duties and responsibilities among different people to reduce the risk of error or fraud;
  - adequate supporting documentation: supporting the disbursements through documentation to provide a basis for reconciling payment amounts and authorizations to disbursement of funds;
  - proper execution of transactions and events: authorizing and executing transactions by persons acting within the scope of their authority to ensure that only valid transactions are initiated and approved; and
  - physical control over assets: securing assets and periodically counting and comparing totals with control records.

We tested MCA transactions using data collection instruments (DCI) and criteria described in the MCA’s policies and procedures as documented in the MCA’s FAP or other documentation, such as project- or program-related contracts or agreements with third parties hired to manage or oversee implementation of the project activities.
If transactions were not properly supported, we queried MCA officials to determine whether the required documentation could be located. To perform tests of internal controls included in the MCA’s policies and procedures, we selected stratified random samples of disbursement transactions for fiscal years 2007 and 2008. Given the variation in the programs and projects conducted by the three countries selected, we divided our work into strata that included operational expenses, such as travel and payroll, as well as project-related expenses, such as credit lines, resettlements, grants, and investments. See table 2 for additional details on the number and dollar value of transactions tested. The MCAs often contract out the management or oversight of some program- or project-related activities to implementing entities that have more specialized knowledge or needed skills. For some of these implementing entities, we selected additional transactions and tested controls at their site to determine whether transactions were properly documented and to assess the MCAs’ oversight of those activities. We selected items from the country’s inventory list to test whether the MCA had established an adequate system to ensure physical control over assets. Disbursements for each country were randomly selected within each stratum to ensure an objective selection. Our initial methodology for transaction testing included the selection of a statistical sample of transactions at each MCA; however, as our selection of countries changed, we found that inconsistencies among the countries’ financial management reporting systems did not always allow us to select an individual transaction to trace. For example, certain MCA systems processed transactions, such as payroll, in a batch process, and a payment selected from the database could be an entire monthly payroll, rather than a payment involving an individual.
In addition, for some credit line and resettlement programs, the MCAs transferred large balances to credit institutions that would be divided and paid to specific recipients. In these cases, we selected additional transactions to test other key controls that could not be tested with the large transfers. Because of this limitation, we decided to use a random sample selection and to present results for the selected samples, rather than projecting to the entire population. We assessed the reliability of the financial data provided by the three countries we reviewed by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To describe MCC’s procurement framework and its evolution, we reviewed MCC compact agreements, current and previous editions of MCC’s procurement guidelines, procurement agreements, procurement guidance papers, and implementation letters. We did not independently assess the adequacy of the World Bank procurement guidelines upon which MCC procurement guidelines are based. We assumed after discussion with internal GAO procurement experts that fully implementing World Bank/MCC guidelines would constitute open, fair, and competitive procurement. We also interviewed MCC officials in Washington, D.C., and compact countries, and MCA procurement officers and procurement agents in compact countries, to further our understanding of how MCC and the MCAs have managed and overseen procurement activities and to identify any issues with implementing MCC’s framework in practice. We analyzed current and previous editions of MCC guidance and agreements to identify how MCC’s requirements and procedures have evolved. 
We further reviewed MCC interim activity reviews and audits of the MCAs to document MCC’s post-review and audit processes. To assess the adherence of MCA compact countries to MCC’s procurement framework, we examined a stratified random sample of completed fiscal year 2008 MCA entity procurements for our three focus countries. As shown in table 3, we divided these procurements into the following four strata: (1) sole source procurements, (2) the five largest dollar value procurements, (3) procurements requiring MCC review, and (4) procurements that did not require MCC review. We identified the universe of procurements in each country using MCC’s Procurement Performance Report (PPR) for each country. To ensure that the PPR sufficiently reflected the procurements in each country, we interviewed staff at the MCAs and checked reported procurement dates and descriptions in the files against those reported in the PPRs. We found a high degree of accuracy in the data reported in the PPRs, which provided us with reasonable assurance that the PPRs were sufficiently reliable for the purposes of our analysis. We reviewed all sole source procurements in each country during fiscal year 2008 because of their high risk for abuse, as outlined in the U.S. Agency for International Development, Office of Inspector General’s Fraud Indicators. We also reviewed the five largest dollar value procurements in each country because of the dollars involved and their importance for compact implementation. We divided the remaining procurements into those with and without MCC review to determine whether MCC involvement in procurement changed the level of compliance with MCC’s guidelines. Because of the small number of procurements with MCC review in each country, we selected all of these procurements in our sample. 
In addition, because the number of procurements with MCC review was so small relative to those without MCC review in each country, we could not make a valid comparison between the two strata. In Cape Verde and Honduras, we selected a stratified random probability sample for each country large enough to generate percentage estimates with a margin of error of at most plus or minus 10 percentage points at the 95 percent confidence level. We selected 63 of the 105 procurements from Honduras and 47 of the 72 procurements from Cape Verde. With this probability sample, each procurement in the population had a known, nonzero probability of being included in the sample. Each procurement in the sample was subsequently weighted in the analysis to account statistically for all procurements in the population, including those that were not selected. All percentage estimates from these samples presented in this report have a margin of error of plus or minus 10 percentage points or less, unless otherwise noted. In Georgia, we reviewed all 28 procurements conducted in fiscal year 2008 because the relatively small number of procurements conducted in the country over that time period made sampling unnecessary. We examined the selected procurements using a DCI to determine whether procurements were conducted according to MCC’s procurement criteria. We assessed the MCA procurement process for compliance with MCC procurement guidelines, as outlined in table 4. We reviewed each file to assess whether it contained documentation that the MCA had followed the required procedures. If required documentation was not present in the file, we queried MCA officials to determine whether the required document could be located elsewhere. We did not, however, assess the quality of these required documents. In addition, our review included only a limited number of procurements that were completed following the introduction of MCC’s implementation model and Schedule B approvals matrix.
Therefore, our findings do not assess the effectiveness of the implementation model. In addition to the statistical selection of the procurements for review using our DCI, we also judgmentally selected procurements whose reporting in the PPR exhibited potential indicators of fraud, such as multiple contract awards to a single entity, contracts awarded in multiple lots where awarding as one lot would have required additional reviews, and contracts awarded on a sole source basis. Some of these procurements were selected for inclusion in our DCI analysis; the remainder were assessed through interviews or document review to determine whether these potential fraud indicators could be explained by other circumstances. In addition, because canceled procurements represent lost time and effort spent in developing the procurement or contract, we identified canceled procurements in each country in fiscal year 2008. Although we did not do a formal review of these procurements using the DCI, we interviewed the MCA staff about these procurements to understand the reasons for cancellation. To assess the time frames required for MCC procurements, we identified key procurements in each of our three focus countries. We defined key procurements as those with a contract award amount greater than or equal to $1 million for goods, works, or consultant services. We then reviewed Procurement Implementation Plans for those procurements, where available, and compared the time frames anticipated in those plans with the actual procurement time frames provided to MCC in the PPR to determine the difference between planned and actual time frames. We further reviewed associated reporting documents and discussed these key procurements with the MCA procurement directors to determine the causes for any delays in these key procurements. 
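The stratified selection and margin-of-error targets described above can be illustrated with a short sketch. The population counts (105 and 72 procurements) come from this appendix, but the stratum sizes and the simple proportion-based formula are illustrative assumptions; the computed sizes come out smaller than GAO's actual selections of 63 and 47, which also folded in every sole-source, largest-value, and MCC-reviewed procurement:

```python
import math
import random

def fpc_sample_size(population, moe=0.10, z=1.96, p=0.5):
    """Sample size for estimating a proportion within +/- moe at roughly
    95 percent confidence, with the finite-population correction applied."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)  # infinite-population size (~96)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(fpc_sample_size(105))  # Honduras universe -> 51
print(fpc_sample_size(72))   # Cape Verde universe -> 42

# Stratified selection with per-stratum weights (hypothetical stratum sizes).
strata = {"sole_source": 8, "five_largest": 5, "mcc_review": 10, "no_review": 82}
take = {"sole_source": 8, "five_largest": 5, "mcc_review": 10, "no_review": 40}
rng = random.Random(0)
sample = {name: rng.sample(range(size), take[name]) for name, size in strata.items()}
# Each sampled item is weighted by N_h / n_h so that estimates statistically
# account for the unsampled procurements in its stratum.
weights = {name: strata[name] / take[name] for name in strata}
```

In this sketch only the no-review stratum is subsampled; the certainty strata, where every item is taken, carry a weight of 1.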
To assess MCC’s development, implementation, and oversight of contracts, we examined the three infrastructure construction contracts with the largest dollar value and the largest consultant services contract associated with construction services in each of our three sample countries. We reviewed the following MCAs: Honduras: The Honduras compact called for the improvement of approximately 110 kilometers of the CA-5 highway comprising the “North Segment” (sections 3 and 4) and a “South Segment” (sections 1 and 2), both of which are located north of Tegucigalpa. We reviewed three MCC-funded road construction contracts associated with the CA-5 highway project in Honduras. The contracts are identified as roadway sections 2, 3, and 4, with contract awards of $48.4 million, $16.2 million, and $23.2 million, respectively. Other roadway sections of the highway are being improved by other funding sources. (See fig. 10.) Georgia: We reviewed one MCC-funded construction contract, awarded for 8.7 million Georgian Lari—valued at more than $6.2 million at the time—for phase II of the North-South Gas Pipeline Rehabilitation Project at nine sites along the pipeline. (See fig. 11.) We also reviewed two road construction contracts associated with the Samtskhe-Javakheti Roads Rehabilitation Project—for rehabilitation of approximately 171 kilometers of roads in the Samtskhe and Javakheti regions—that were awarded under what is identified as the “2nd procurement.” The first contract under that procurement was awarded in March 2008 for $65.0 million; the second was awarded in May 2008 for $33.1 million. An earlier procurement effort—identified as the “1st Procurement”—intended to award three contracts to rehabilitate 245 kilometers was canceled in June 2007 after contractor bids exceeded the available budget.
When Georgia received an additional $100 million in compact funding, it allowed for a “3rd procurement” that enabled MCA-Georgia to award three additional road contracts, two in April 2009 and the third in June 2009, totaling about 46 kilometers. (See fig. 12.) Cape Verde: In the case of Cape Verde—which consists of 10 separate islands—we reviewed three contracts valued at more than $56.6 million to improve Cape Verde’s port, roads, and bridges. The contract for the phase I port project, to upgrade and expand the port of Praia on Santiago Island, was awarded for $42.3 million. The roads contract, to rehabilitate five roads on Santiago Island, was awarded for more than $11.0 million. Two of the five roads (identified as roads 3 and 5) were eliminated from the contract scope due to cost increases. The contract for reconstruction of four bridges, on Santo Antão Island (not shown), was awarded for roughly $3.3 million. (See fig. 13.) We also examined MCC’s use of its independent engineers in supporting MCC’s oversight efforts related to the previously discussed infrastructure contracts and projects. To conduct our work, we reviewed project reports prepared by (1) the MCAs, (2) MCA implementing entities, (3) MCA project management consultants, (4) MCA independent construction supervisors, (5) MCA construction contractors, (6) MCC independent engineers, and (7) MCC. Those project reports generally report on project status, including scope, cost, schedule, engineering, environmental, and health and safety issues. To further understand and corroborate these reports, we interviewed MCC officials in Washington, D.C., and MCC resident country directors working in the compact countries. We also interviewed MCA management in the compact countries, MCA project management consultants, MCA independent construction supervisors, MCA design engineers, and MCA construction contractors.
We compared MCC’s oversight with GAO’s Executive Guide: Leading Practices in Capital Decision-Making to assess MCC’s activities against best practices. Our assessment of planning, design, schedule, and cost status of projects was informed by our review of MCC and MCA reports and those of their contractors. Our evaluation of the sufficiency of MCC’s oversight documents was guided by lessons learned from past GAO work on infrastructure projects and industry best practices. Lastly, we made field visits to select projects in Honduras and Georgia to confirm some of the information reported within contractor progress reports. Because a recent U.S. Agency for International Development, Office of Inspector General, audit of Cape Verde compact implementation included field visits to projects, our findings for this objective did not rely on a site visit in Cape Verde. We conducted this performance audit from June 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. MCC’s most recent version of its procurement guidelines, released in July 2008, established a 2-tier system of approvals that allows for less MCC review of procurement actions. Schedule A under these guidelines represents the initial level of review for most countries and is a reduction in review from all previous versions of the procurement guidelines. As countries gain experience and MCC gains confidence that they are implementing MCC procurement guidelines, MCC permits the country to transition to Schedule B. MCC uses professional judgment and its implementation model framework to identify countries that may graduate from Schedule A to Schedule B. 
Among other things, the framework addresses the MCA’s (1) capability and experience, (2) successful execution of previous procurements, (3) appropriate and qualified procurement advisors, and (4) maturity of the compact. Schedule A reduced the level of MCC approvals in 20 of 61 potential procurement actions, entirely eliminating MCC review in 9 cases. In 13 of 61 potential procurement actions, Schedule B reduced the level of MCC review below that of Schedule A—in all 13 cases by removing MCC review altogether. As of October 2008, five countries have transitioned to Schedule B. This appendix provides a more detailed breakout of the results of our procurement requirement testing contained in table 1 of this report. Following are GAO’s comments on the Millennium Challenge Corporation’s letter dated October 23, 2009. 1. In its comments, MCC pointed out that for the purposes of travel, MCA-Honduras and MCA-Cape Verde generally pay travelers a daily subsistence allowance for each day of the travel to cover travelers’ expenses related to lodging, meals, and incidental expenses. The allowance is calculated using specified per diem rates. As MCC stated, not every country requires the submission of receipts for such expenses. However, if the MCA does not receive such supporting documentation for travel expenses, the MCA has no proof that the travel actually occurred. As a case in point, even though Cape Verde’s FAP lists a number of supporting documents that travelers should provide after returning from a trip, we found that 19 of the 30 travel transactions we tested lacked such documentation. Therefore, we could not confirm that travel was completed. Furthermore, the required trip reports were not always provided in Honduras and Cape Verde to substantiate that the travel occurred. 2. MCC noted that the FAP for MCA-Honduras calls for a monthly payroll sheet to confirm payments to staff. 
However, we found that the information on the monthly payroll sheet included only personnel information, such as names and payment amounts. Without individual time sheets, we were unable to verify that employees worked the necessary number of hours for the payments made. We view this as a key control in the payroll process. Additionally, MCC developed a FAP template in November 2008 that provides examples of how controls could be structured. For example, the template requires employees to submit time sheets for supervisory approval. Taking actions to adopt these procedures would help ensure the propriety of these transactions. 3. MCC pointed out that the Georgia Regional Development Fund investments met guidelines that permit such investments. MCC also stated that one investee was not a subsidiary of a larger company that fell outside of the investment guidelines. Based on MCC’s comments, we reevaluated the evidence previously provided and agree with MCC regarding the subsidiary, but reconfirmed the principal place of business for the two companies as Tbilisi. We modified footnote 31 to reflect this information. Also, as stated in footnote 31, the fund manager used calculation methods that made it difficult to determine whether one of the businesses complied with the investment guidelines. Having clear supporting documentation and guidance is critical to ensuring that these investments, and the other areas we had concerns with, adhere to the guidelines. We appreciate that MCC stated it plans to work with the MCAs to strengthen compliance with documentation requirements where needed as part of a review of the FAPs. 4. MCC reports that, as of fiscal year 2008, it requires the completion of full feasibility and environmental assessments, including resettlement plans, before compact signature.
We support MCC’s efforts to take action to finalize those project planning activities prior to compact signing and are in the process of assessing the specific actions MCC has taken to implement the recommendation. According to MCC, while it instituted the process change in fiscal year 2008, the revised process has thus far only been applied to the Senegal compact, signed in September 2009, and to due diligence for a proposed compact with Moldova. 5. We recognize that MCC is using its independent engineers, the MCA’s project consultants, and the compact country’s project stakeholders to review designs. However, our assessment of the compacts we reviewed, all of which had significant design issues, cost growth, and schedule delays, indicates that the project review process can still be improved before contract solicitation. For example, MCC could expand its review of final designs, cost estimates, and risk assumptions by soliciting services of a technical specialist with project management experience in project risk analysis and project scheduling. Based on the projects we reviewed and the problems we found, we believe that outside expertise would benefit the project review process and avoid the expense of addressing issues related to the lack of planning. In addition to the person named above, Emil Friberg, Jr. (Assistant Director), Mike Armes, John Bauckman, Lynn Cothern, Lucia DeMaio, Tim DiNapoli, Mattias Fenton, Jordan Hamory Holt, Elizabeth Martinez, Heather Rasmussen, Donell Ries, Michael Simon, Susan Tieh, Patrick Tobo, and Matt Wood made key contributions to this report. Also, Jehan Abdel-Gawad, Jim Ashley, C. Etana Finkler, Ernie Jackson, Amanda Miller, Charlotte Moore, Josh Ormond, and Jena Sinkfield provided technical assistance.

Established in January 2004 with a mission to reduce poverty through economic growth, the Millennium Challenge Corporation (MCC) has committed $6.9 billion for compacts with 19 developing countries.
MCC vests compact management with accountable entities in recipient countries, called Millennium Challenge Accounts (MCA). MCAs, with guidance from MCC, allocate resources, oversee and implement a financial plan, approve expenditures and procurements, and implement compact projects. This report, directed by the fiscal year 2008 Consolidated Appropriations Act, assesses MCC and MCA (1) financial controls; (2) procurement practices; and (3) development, implementation, and oversight of contracts and projects. GAO focused on financial and procurement transactions and projects at MCAs in Honduras, Georgia, and Cape Verde, countries with high disbursement totals as of the end of fiscal year 2008. As required by MCC guidelines, each of the three MCAs GAO reviewed had developed a Fiscal Accountability Plan (FAP) that documented policies and procedures related to internal control, such as funds control, documentation, and segregation of duties. However, each of the FAPs GAO reviewed, in place as of the end of fiscal year 2008, had gaps in certain areas, such as incomplete policies and procedures for some expenses. Although MCC agreements require that each country prepare a FAP, the initial guidance MCC provided to the three MCAs was general and did not contain sufficient information to help the countries develop sound internal control structures. For example, guidance stated that records must support transactions and that procedures must incorporate segregation of duties. However, specific guidance on payroll, travel, and inventory controls would have helped the MCAs develop comprehensive policies. To address this, MCC developed a FAP template in November 2008, but MCC allows the MCAs flexibility and does not require them to implement the template's policies and procedures. In addition, GAO identified a significant number of the transactions tested that lacked adequate supporting documentation or were not properly approved by management. 
These deficiencies increase the risk of fraud, waste, and abuse of MCC program funding. MCC has increased standardization of the MCA procurement guidelines, which were initially developed on a country-by-country basis. The MCAs GAO assessed generally adhered to MCC's procurement guidelines. GAO found that, in some cases, MCAs did not document a price reasonableness analysis of winning bids. GAO also found that when MCAs delegated procurement responsibility to outside entities, the procedures used by these entities were generally consistent with MCC's procurement framework. MCC conducts oversight of MCA infrastructure contracts and projects, but insufficient planning of projects during compact development and cost escalation has undermined project implementation. As a result of insufficient planning, designs had to be revised, and project scopes have been reduced. Significant delays to project schedules--the result of undertaking additional planning and design--further compounded the escalation in construction costs experienced on projects and contributed to the restructuring of projects. For example, two of five planned roads in Cape Verde were eliminated, in part due to insufficient design and cost increases. In addition, the schedule for construction of the remaining three roads was extended by 11 months. MCC has worked with the MCAs to significantly restructure projects to keep them within their budgets and 5-year compact time frames. MCC also has taken steps to provide increased assistance to MCAs to help them conduct better planning for projects. However, these changes alone will not address the problems projects encountered with design development and cost escalation. Industry best practices and past GAO work have shown that conducting design reviews and updating cost estimates prior to contract solicitation help to ensure that projects can be successfully bid and constructed.
The Institute of Medicine, chartered by the National Academy of Sciences, has defined practice guidelines as systematically developed statements that assist practitioners in making decisions about appropriate health care for specific clinical conditions. For example, guidelines are available on such topics as the length of hospital stay for maternity care, the need for back surgery, and the management of pediatric asthma. Guidelines are intended to help physicians and others by crystallizing the research in medical literature, evaluating the evidence, applying the collective judgment of experts, and making the information available in a usable form. They are more often written as acceptable therapy options than as standardized practices that dictate specific treatments. Unlike standards of care, which have few accepted variations in appropriateness, most guidelines are expected to have some variations because definitive scientific evidence does not necessarily link specific practices to improved outcomes. Where there is a lack of scientific evidence, some organizations make recommendations that reflect expert opinion, while others recommend tests or procedures only when convincing scientific evidence of benefit exists. Many public and private organizations have been developing guidelines for decades. About 75 organizations have developed over 2,000 guidelines to date. The federal government supports the development of clinical practice guidelines through AHCPR, the National Institutes of Health (NIH), the Centers for Disease Control and Prevention, and the U.S. Preventive Services Task Force (USPSTF). Private guideline efforts have been undertaken by physician organizations, such as the American Medical Association; medical specialty societies, such as the American College of Cardiology; private research organizations, such as RAND Corporation; and private associations, such as the American Heart Association.
Guidelines are also developed commercially by private companies, such as Milliman and Robertson and Value Health Sciences, which market them to health care organizations. Given the multiplicity of sources for guideline development, it is not uncommon for more than one guideline to exist for the same medical condition or for recommendations to vary. For example, at least four organizations have issued a guideline on prostate cancer screening. In addition, guidelines tend to reflect the specialty orientation of the guideline developers. In the case of the prostate screening guideline, for example, the American Urological Association, the American College of Radiology, and the American Cancer Society recommend using a prostate-specific antigen test for all eligible patients aged 50 and older, whereas the USPSTF recommends against the routine use of this test. Recent national surveys indicate that a majority of managed care plans have adopted guidelines and made them available to providers. For example, a 1994 survey sponsored by the Physician Payment Review Commission found that 63 percent of managed care plans reported using formal written practice guidelines. The results also showed that the use of guidelines was least common among less structured managed care plans because of their more limited ability to influence physicians’ practice. Specifically, 76 percent of the responding health maintenance organizations reported using practice guidelines, compared with 28 percent of preferred provider organizations. Health plans we reviewed had three strong motives for adopting guidelines: to moderate expenditures, to show a high performance level across key quality indicators when compared with other plans, and to comply with accreditation and regulatory requirements. 
These plans view practice guidelines as tools to achieve these ends by promoting greater uniformity within their own physician networks and by helping physicians increase their efficiency, improve clinical decision-making, and eliminate inappropriate procedures. In selecting aspects of physician practices that could be improved through the use of guidelines, most plans we spoke with identified those services or conditions that are high cost, high medical liability risk, and high incidence for their patient population. They reviewed the provision of such services as hospital inpatient, pharmacy, and ambulatory care—as well as variations in utilization across physicians—to identify such conditions. For example, one plan identified pediatric asthma as a condition for guideline adoption because it is among the most frequent causes of hospital admission and repeat emergency department visits. Human immunodeficiency virus (HIV) infection and high cholesterol are also among the plan’s top 10 topics for guideline selection. Several plans we contacted reported cost savings from implementing guidelines that specify the appropriate use of expensive services. In one case, a plan adopted a guideline for treating stroke patients that recommended physical therapy early in the patient’s hospital stay. This practice resulted in shortened stays as well as improved outcomes. Another plan adopted a guideline on non-insulin-dependent diabetes to help physicians identify when to provide intensive management rather than routine care to patients with this low-cost condition that can lead to high-cost complications. Another plan used a low back pain guideline that generated savings from the selective use of high-cost diagnostic imaging services. Plans have also reported cost savings from implementing guidelines that reduce the incidence of acute conditions and the need for more expensive care. 
One managed care chain we contacted increased the percentage of Medicare enrollees receiving flu shots from 27 to 55 percent in 1 year. The chain reported a reduction of about 30 percent in hospital admissions for pneumonia, savings of about $700,000, and fewer lives lost. Practice guidelines were also heavily used by plans that were being evaluated by employers buying health care for their workforce. Standardized measures for assessing health plan performance are set forth in the Health Plan Employer Data and Information Set (HEDIS), which many employers and other payers view as a report card. Purchasers can use HEDIS to compare plans across several preventive services measures, including childhood immunizations, cholesterol screening, breast cancer screening, cervical cancer screening, prenatal care in the first trimester, diabetic retinal examination, and ambulatory follow-up after hospitalization for depression. Of the 19 plans we contacted, 14 collected performance data using HEDIS measures. The adoption of practice guidelines may help plans improve their performance on HEDIS measures. For example, through the use of pediatric and adult preventive care guidelines, one plan claimed that it raised to 95 percent the proportion of its physicians meeting appropriate childhood immunization schedules and to 75 percent the proportion meeting mammography screening goals. The plan also reported reducing the percentage of breast cancers identified at advanced stages from 30 to 10 percent. In addition, plans’ adoption of guidelines is encouraged indirectly through health plan accrediting organizations. Although plans are generally not required to be accredited, many seek a review to satisfy purchasers’ demands and enhance their marketability. The National Committee for Quality Assurance’s (NCQA) accreditation standards require that plans have guidelines for the use of preventive health services. 
The Joint Commission on Accreditation of Healthcare Organizations also has standards that encourage the use of practice guidelines, although it does not require specific guidelines. States are also influencing plans’ guideline use. For individuals covered under workers’ compensation, for example, Florida specifies guidance on the use of diagnostic imaging in treating low back pain. As states increasingly require plans to meet certain treatment standards, plans are likely to adopt guidelines that will help them comply with these requirements. Few of the plans we visited had the resources to devote to developing an original guideline, since such an effort can be time-consuming and expensive. They preferred instead to customize guidelines that had already been published to ensure local physician involvement and acceptance of the guidelines and to accommodate their individual plan objectives. In general, health plans customized guidelines by modifying their scope or recommendations or emphasizing one of several therapy options presented. Because adapted guidelines differ from original guidelines to varying degrees, some experts in the guideline development community caution that certain modifications, when made to accommodate local self-interests at the expense of patients, may compromise the integrity of the guideline. Some of the plans we visited also expressed a need for more medical technology assessments and outcomes data; however, they lack the resources to assume these activities. They suggested that the federal government enhance its role in these areas. Among the most important reasons for not adopting published guidelines strictly as written is the need for local physician involvement and acceptance. Plan managers we interviewed noted that published guidelines usually lack the input of their local physician community. 
They recognized that some plan physicians are reluctant to put aside their own practice patterns in favor of those recommended by outside sources, particularly when guidelines are based more on expert opinion than on conclusive scientific evidence. Physicians have confidence in guidelines that they or their peers take part in developing or that are developed by their professional organization. Therefore, guidelines adopted by a consensus of local physicians are more likely to be accepted. In one plan manager’s view, without the physicians’ participation in approving the final product, physicians would not be likely to follow the guideline. In citing the need for physician acceptance of guidelines, one plan manager put it this way: “The practice of medicine is parochial.” Similarly, one large plan’s medical policy specialist told us that published guidelines need to be modified because they are often not consistent with local standards of care—that they are not “in synch” with how plan physicians are practicing. This position was corroborated by the American Medical Association’s Director of Practice Parameters, who said “a guideline can be developed at the national level, but it has to be localized. . . . [I]t comes down to local areas developing the recommendations that suit them.” Plans selected practice guidelines from a variety of sources, including federal agencies and medical specialty societies, such as the American College of Physicians. Among the health plans we contacted, few had documentation on the methods they used to adapt guidelines. However, some described their approach as typically including some combination of physician consensus and a review of outcomes of clinical studies. When there was controversy or lack of strong clinical evidence, plans reported making greater use of local physician opinion and often performed independent literature reviews to provide additional information. 
This was particularly likely with a guideline on a rapidly changing treatment method, such as treatment for heart attacks, since clinical developments may overtake the publication of existing guidelines. Plans have a number of other reasons for customizing clinical practice guidelines. These include cost considerations, resource constraints, the demographic characteristics of the enrolled population, the simplicity of guideline presentation, and the need to update information contained in published guidelines. Plans we visited noted that clinical practice guidelines often fail to provide needed information on what is cost-effective care. In its 1992 report, the Institute of Medicine recommended that a clinical practice guideline include information on both the health and cost implications of alternative treatment strategies. However, many guidelines produced by federal and private entities do not routinely include cost-effectiveness analysis in the recommendation-making process, often because the information needed to conduct cost analysis is not available. Plans we visited often consider the costs of alternative treatments in deciding how to implement a guideline. In some instances, a guideline may allow choices among equally effective therapeutic options. This was the case with AHCPR’s guideline on the treatment of depression in primary care settings, which stated: “No one antidepressant medication is clearly more effective than another. No single medication results in remission for all patients.” Instead, the guideline listed several types of drugs that were considered equivalent in clinical effectiveness. In implementing this guideline, one plan we contacted chose the least expensive class of drugs from AHCPR’s recommended list as its first-line treatment. The plan also noted that the selected drugs were older and their side effects were better known to its physicians. 
Some plans we visited also noted that guidelines may not recommend the most cost-effective health care. For example, some plans adapted a published guideline on total hip replacement that recommended that patients be admitted to the hospital the night before their surgery. The plans changed the recommendation so that patients were admitted the morning of their surgery, even though most of these patients were elderly and lived far from the hospital. One guideline expert argued that this was done to lower the cost of care with little regard for the inconvenience to or impact on the patient. Local customizing is also influenced by the amount and type of health care resources available to the plan. For example, the USPSTF’s colorectal cancer screening guideline recommends a periodic sigmoidoscopy or an annual fecal occult blood test or both. Plans with a sufficient number of physicians who are trained to perform sigmoidoscopies are more likely to choose screening with periodic sigmoidoscopy and may also perform the fecal occult blood test. However, those without enough trained physicians may decide to select only the fecal occult blood test. Some plans noted that guidelines may need to be tailored to allow for population differences in each locality. They cited research showing that differences in patients’ health need to be taken into account since socioeconomically different populations may have different incidence and prevalence rates of disease. In particular, the research showed that Native American women required more frequent mammography screening due to their above-average incidence of breast cancer. Plans may also decide to recommend a wider application of diabetes screening services when their members are identified as having higher risk factors. The USPSTF guideline on diabetes states that there is insufficient evidence that routine screening is necessary. 
However, members of certain ethnic groups (Hispanics, African-Americans, Native Americans) are among those likely to benefit from screening tests. Therefore, plans may need to adapt guidelines to serve the needs of their more vulnerable populations. Plans also cited the need to customize to make the information in a guideline available in a more usable form. Guideline documents vary in length, from a three-page brochure to a two-volume manual. Some guidelines consist largely of decision-tree charts, called clinical algorithms, while others are predominantly text, providing a synthesis of scientific evidence, expert consensus, and references to specific research studies. Sometimes published guidelines are broad in scope and cover not only a full range of medical practices—including diagnosis, treatment, and follow-up care—but also the guideline development methodology and areas for future research. The comprehensiveness of such guidelines, designed to reach the broadest audience of practitioners as well as clinical researchers, may require a book-length presentation. Therefore, plans typically adapted such guidelines to focus on a narrower set of clinical needs, such as the pharmacological management of patients with heart failure. Several plans pointed to AHCPR’s 327-page guideline on primary care physicians’ treatment of depression as being too long and complicated for busy clinicians. One plan reduced it to 44 pages, another to 20 pages, and a third to 4 pages. (AHCPR has issued a shorter quick-reference version of this guideline, as it does with all its guidelines.) Format may also be an issue with practice guidelines developed by health plans. A prominent expert on guideline development noted that a mathematically based cholesterol screening guideline could not be implemented because the plan’s primary care physicians did not have time to follow the complicated guideline model. Sometimes the information in existing guidelines is not current. 
Medical information and technology, such as the pharmacological management of a condition, are continually evolving. Yet, published guidelines may not be reviewed and revised on a timely basis. For example, NIH guidelines, called consensus statements, are not reviewed for at least 5 years after issuance. In fact, only about half of the plans we contacted reviewed and updated their guidelines annually. However, one plan published guidelines with an expiration date, forcing it to review the guidelines at least once annually. The extent of modifications that resulted from plans’ customizing published guidelines varied from minimal to substantial. Sometimes the differences between the local and published guidelines were cosmetic. For example, some individual medical groups prepared shortened versions of regionally developed guidelines on plastic cards for quick physician referral. They also removed the original source’s name and applied their logo to the documents to further enhance physicians’ sense of ownership. Other modifications were more than superficial. One plan customized AHCPR’s HIV guideline by adding drug treatments that were not covered in the original guideline, specifying when primary care physicians should refer patients to a specialist, and providing information on state reporting requirements. Finally, some changes could be considered substantial. For example, one plan we contacted relaxed the recent chicken pox vaccination guideline from the American Academy of Pediatrics. The Academy recommended that chicken pox vaccinations be given to all healthy children. The plan adapted the guideline by recommending that its physicians discuss the extent of immunity that the vaccine could confer and let parents decide whether they want the vaccine given to their children. 
The plan maintained that, because the immunity offered by the vaccine might not last a lifetime, widespread vaccination could result in more adult cases of chicken pox, an outcome that could result in serious harm or death. The plan held that it is better for children to contract chicken pox to ensure lifetime immunity than to get the vaccine. An Academy spokesperson commented that no significant loss of immunity has been demonstrated in healthy children who were vaccinated. At another plan, we found that a customized guideline recommended treatments specifically not endorsed by AHCPR. In its low back pain guideline, the plan recommended that physicians perform an invasive treatment to control pain and an invasive test to diagnose the extent of disc damage. However, AHCPR’s guideline stated that the benefits of this treatment and test were unclear and not worth the potential risk of infection to patients. A plan representative told us that the guideline was adapted to address the concerns of the plan’s orthopedists, who felt that the invasive treatment and test should have been included in the original guideline. Experts in guideline development have voiced concerns about such self-interested adaptations. One observed: “. . . to the extent that local adaptation, broadly defined, moves in the direction of excluding certain types of practitioners . . . or of weakening a guideline document fundamentally by allowing for the provision of marginally beneficial services in situations in which guidelines would probably say ‘this is inappropriate for this class of people’—then you have what looks to me like a self-serving change.” Another commented: “. . . guidelines that recommend the best care practices to optimize outcomes for patients may not necessarily be cost-effective or easy for MCOs to implement. MCOs, with a commitment to the bottom line, may make modifications to guidelines to achieve their best interests and not those of patients.” Most plan managers we contacted applaud the various guidelines published by public and private entities. 
The availability of such guidelines makes plans’ guideline development efforts easier and less costly. Plans consider published guidelines to be useful summaries of the literature and science, written for a diverse audience. However, given the multiplicity of guideline sources, many plan managers told us they would prefer to see some federal agencies assume an alternative role in the guideline movement. Plans noted that having many federal and private-sector guidelines on the same topic is an inefficient use of limited resources. Furthermore, some of these guideline recommendations conflict, creating confusion for plan managers and practitioners. Plan managers also told us that their needs for medical technology assessments and outcomes data remain unmet. Some plan officials suggested that some federal agencies would provide a more useful service to managed care plans by no longer producing guidelines themselves. Instead, these officials said, the agencies should publish and update summaries and evaluations of evidence on medical conditions and services so that plans could use this information to develop and update their own guideline recommendations. Other plans proposed that the federal government increase funding to develop useful practice guideline tools, such as methods to incorporate cost assessments and patient preferences into practice guidelines. Furthermore, several plans asserted that federal guideline funds should be used for outcomes research and technology assessment from which plans could develop their own guidelines. One plan manager said, “This is an area that health plans do not have the resources or expertise to adequately address.” Managed care plans’ growing interest in practice guidelines is driven by their need to control medical costs, ensure consistency of medical care, and demonstrate improved levels of performance. By using practice guidelines, plans are making a conscious decision about the care they intend to provide, reflecting the trade-off between costs and benefits. 
When published guidelines differ from a plan’s clinical and financial objectives, they are typically customized with the active participation of the network physicians. Since published guidelines can be inconsistent, outdated, or too complex, local adaptation may be useful. Yet some changes may compromise the quality of patient care. Moreover, local adaptation may undermine the goal of clinical practice guidelines, which is to make medical care more reliant on evidence-based recommended practices and less a function of where a patient receives care. Comments on a draft of this report were obtained from the American Association of Health Plans, AHCPR, and two experts on guideline development and use. The American Association of Health Plans generally agreed with the draft, but suggested language changes where the report addressed the goal of reducing cost. They stated that practice guidelines are intended primarily to improve the quality and outcomes of care and secondarily to contain costs. We agree that plans use guidelines for quality improvement as well as cost management. AHCPR noted that managed care plans’ views on the federal role of guideline activities were similar to the agency’s views and its plans for the future. The agency also provided technical comments, and we have incorporated its suggested changes and those of the expert reviewers as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others on request. Please call me at (202) 512-7119 if you or your staff have any questions. Other major contributors include Rosamond Katz, Donna Bulvin, Mary Ann Curran, Hannah Fein, and Jenny Grover. [An appendix table listing the health plans contacted, with each plan's location, HMO model type(s), and enrollment as of 1995, is not reproduced here.] | Pursuant to a congressional request, GAO reviewed how managed health plans make use of existing clinical practice guidelines. 
GAO found that: (1) clinical practice guidelines promote greater uniformity within physician networks, encourage improved efficiency and clinical decision-making, and eliminate unnecessary care; (2) several health plans have adopted clinical practice guidelines to control costs, improve performance on standardized measures, receive accreditation, and comply with regulatory requirements; (3) due to time and fiscal constraints, many health plans customize published clinical guidelines rather than generate original guidelines; (4) physicians are more likely to use a clinical practice guideline if it is developed by local health providers; (5) managed health plans customize existing clinical practice guidelines to suit alternative treatments, available resources, population needs, and format and currency concerns; (6) while health plans modify existing clinical practice guidelines to varying degrees, extensive changes could jeopardize the guidelines' effectiveness; and (7) some health plans would prefer that the federal government publish and update evidence on medical conditions and services, develop useful practice guideline tools, and perform outcomes research and medical technology assessments that would help them to develop, modify, and update their guidelines. |
In 1996, SAMHSA issued a regulation implementing the Synar amendment. The regulation requires all 50 states, the District of Columbia, and eight insular areas to (1) have in effect and enforce laws that prohibit the sale and distribution of tobacco products to people under 18 years of age, (2) conduct annual random, unannounced inspections, using a valid probability sample of outlets that are accessible to youth, of all tobacco outlets within the state to estimate the percentage of retailers who do not comply with the laws, and (3) report the retailer violation rates to the Secretary of HHS in their annual Substance Abuse Prevention and Treatment (SAPT) block grant applications. SAMHSA requires that each state reduce its retailer violation rate to 20 percent or less by fiscal year 2003. SAMHSA and each state negotiated interim annual target rates that states are required to meet to indicate their progress toward accomplishing the 20 percent goal. Beginning in fiscal year 1997 for most states and in subsequent years for all states, the Secretary can withhold 40 percent of a state’s SAPT block grant award if it does not comply with the rate reduction requirements. State fiscal year 2000 SAPT block grant awards ranged from about $2.5 million to $223 million. Also in 1996, SAMHSA provided guidance to states on implementing Synar requirements. SAMHSA issued sample design and inspection guidance to help states comply with the Synar requirement for conducting random, unannounced inspections of tobacco outlets to estimate the statewide violation rate. The guidance consists primarily of recommended strategies to give states flexibility in selecting a sample design and inspection protocol tailored to their particular circumstances, including state and local laws. 
For example, SAMHSA’s inspection protocol guidance suggests that states recruit minors to attempt to purchase tobacco products when conducting inspections but gives states some flexibility regarding the ages of the minors that are used. SAMHSA’s guidance requires states to develop and implement a consistent sample design from year to year and a standardized inspection procedure for all inspections so that measurements of violation rates over time are comparable across jurisdictions within a state. SAMHSA’s guidance includes a Synar requirement that the states enforce their laws in a manner that can reasonably be expected to reduce the extent to which tobacco products are available to minors. The guidance suggests that states use a variety of activities in their enforcement strategy, such as merchant education, media and community involvement, and penalties. The enforcement activities could be conducted by different agencies, such as those responsible for substance abuse prevention and treatment programs, law enforcement, and state health departments. SAMHSA reviews state-reported information to determine whether states have complied with requirements for enforcing state laws and conducting random unannounced inspections of retail tobacco outlets. In addition to requiring states to provide evidence of their enforcement activities, SAMHSA requires states to provide their sampling methodology, inspection protocol, and tobacco outlet inspection results in their annual SAPT block grant applications. 
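The estimation task behind these reporting requirements, deriving a statewide retailer violation rate from a random sample of inspections, can be sketched as follows. This is an illustrative calculation under simple random sampling, not SAMHSA's prescribed formula, and the inspection counts are invented:

```python
import math

def estimate_violation_rate(violations, sample_size, z=1.96):
    """Estimate a statewide retailer violation rate from a simple
    random sample of outlet inspections, with a normal-approximation
    95% margin of error (z = 1.96)."""
    p = violations / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical state: 120 of 500 inspected outlets sold to a minor.
rate, moe = estimate_violation_rate(120, 500)
print(f"Estimated violation rate: {rate:.1%} +/- {moe:.1%}")
# -> Estimated violation rate: 24.0% +/- 3.7%
```

The margin of error matters in practice: a point estimate just under a negotiated target rate may still be statistically indistinguishable from a rate above it if the sample is small.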
In its review, SAMHSA and its contractor determine whether (1) the sample size is adequate to estimate the statewide violation rate and all tobacco outlets (including over-the-counter and vending machines) in the state have a known probability of being selected for inspection; (2) the state assessed the accuracy of lists used to identify the universe of tobacco outlets from which its sample is drawn; (3) the sample design and inspection protocols are consistently implemented each year within the state; and (4) the statewide violation rate is correctly calculated, meets the negotiated annual target, and shows progress toward the 20-percent goal. When data provided in the application are not sufficient to determine state compliance, SAMHSA requests additional information from the state before a final decision on state compliance is made. SAMHSA collects the state-reported data from the SAPT block grant applications and, in 1996, began storing it in an automated database. These data are used to monitor states’ compliance with Synar requirements, compare state progress from year to year, and produce an annual report to the Secretary of HHS and the Congress on Synar implementation. SAMHSA also uses the data to help finalize the states’ annual retailer violation rates, which are released to the public. For fiscal years 1997 through 1999, the states’ reported violation rates showed an overall increase in retailer compliance with state laws prohibiting the sale of tobacco products to minors. The median retailer violation rate declined from 40 percent in 1997 to 24.2 percent in 1999. Violation rates ranged from 7.2 percent in Florida to 72.7 percent in Louisiana for 1997 and from 4.1 percent in Maine to 46.8 percent in the District of Columbia for 1999. SAMHSA has cited 10 states over the 3-year period for being out of compliance with Synar requirements because they did not reach their violation-rate target. 
The Secretary of HHS, however, has not reduced any state’s SAPT block grant for noncompliance with Synar. In fiscal years 1997 and 1998, states that failed to comply with Synar requirements were not assessed a penalty because they successfully argued that there were extraordinary circumstances that hindered their inspection efforts. The states that were faced with a potential penalty by the Secretary of HHS for failing to reach their fiscal year 1999 target rates chose to commit additional funds to ensure compliance with the following year’s violation-rate target. State Synar implementation practices and SAMHSA oversight adversely affect the quality and comparability of state-reported retailer violation rates. Although SAMHSA approved states’ sample designs, inspection protocols, and inspection results, the quality of the estimated statewide violation rates reported for fiscal years 1998 and 1999 is undermined because of several factors: First, some states used inaccurate and incomplete lists from which to select samples of tobacco outlets to inspect. Second, most states used minors younger than 16 to inspect tobacco outlets, and SAMHSA instructed the states to tell minors not to carry identification on inspections. Both of these protocols tend to lower the violation rate. Third, SAMHSA approved some states’ violation rates even though they included invalid inspections. Fourth, SAMHSA relied on states to validate violation rates without ensuring that the accuracy of the supporting data was verified, even though a potential reduction in a state’s block grant award for not complying with Synar could be an incentive to report artificially low rates. These data quality factors, coupled with the lack of standardization in the protocols states use when inspecting outlets, limit the comparability of retailer violation rates across states. 
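One reason incomplete outlet lists matter is that an unbiased statewide rate depends on every outlet having a known probability of selection. When strata such as over-the-counter and vending machine outlets are sampled at different rates, each inspection must be weighted by the inverse of its selection probability before computing the statewide rate. A minimal sketch of that weighting, using hypothetical strata and counts rather than data from any state:

```python
# Weighted (inverse-probability) violation-rate estimate for a
# stratified sample with known selection probabilities per stratum.
# All figures are hypothetical.
strata = [
    # (stratum name, outlets in stratum, inspected, violations found)
    ("over-the-counter", 8000, 400, 88),
    ("vending machine",  1000, 100, 35),
]

weighted_violations = 0.0
total_outlets = 0
for name, population, inspected, violations in strata:
    weight = population / inspected        # inverse selection probability
    weighted_violations += violations * weight
    total_outlets += population

rate = weighted_violations / total_outlets
print(f"Estimated statewide violation rate: {rate:.1%}")
# -> Estimated statewide violation rate: 23.4%
```

Simply pooling the inspections here would give (88 + 35) / 500 = 24.6 percent, biased upward because the oversampled vending machine stratum has a higher violation rate; the weights correct for that. If a stratum's population count is wrong, as with the incomplete outlet lists described above, the weights and the statewide estimate are wrong as well.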
According to SAMHSA officials, some states used inaccurate and incomplete lists to select random statistical samples of tobacco outlets to inspect, which could have affected the validity of the samples and compromised violation rates reported for fiscal years 1998 and 1999. Most states used a list-based sampling methodology in their sample design, as SAMHSA recommends. When states use list-based sampling to select a sample of tobacco outlets for inspection, SAMHSA requires that they report evidence that they have verified the accuracy and completeness of lists for both over-the-counter and vending machine outlets. However, we found that for fiscal year 1998, 40 states reported to SAMHSA that they did not know the accuracy of the lists they were using. States can use different lists to develop their population of tobacco outlets, but the accuracy and completeness of these lists vary. For example, states can use lists of state-licensed tobacco outlets, but these lists are not always updated by the responsible state agencies. Also, national and state commercial listings can be used, but they often contain many establishments that do not sell tobacco products or may identify the owners of a business but not each retail outlet. In some rural areas and Midwestern states, developing a complete list of outlets can be difficult because tobacco products are sometimes sold from individuals’ homes or other places that are not known to be tobacco outlets. Comments made by several state officials indicate that some states need more technical assistance from SAMHSA in addressing state-specific issues—particularly sample design—that affect their compliance with Synar. 
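The list-based sampling approach described above can be illustrated with a minimal sketch. The outlet list, sample size, and inspection results below are entirely hypothetical; the point is that the resulting estimate (and its margin of error) is only as good as the list from which the sample is drawn—outlets missing from the list have no chance of selection and cannot influence the rate.

```python
import random

# Hypothetical statewide list of tobacco outlets. In practice this would
# come from licensing records or commercial listings; any outlet missing
# from the list has zero probability of being inspected.
outlet_list = [f"outlet-{i}" for i in range(2400)]

random.seed(1)
sample = random.sample(outlet_list, 200)  # simple random sample, no replacement

# Suppose inspections of the 200 sampled outlets found 48 illegal sales.
violations = 48
rate = violations / len(sample)
print(f"Estimated statewide violation rate: {rate:.1%}")  # 24.0%

# Rough 95-percent margin of error for a simple random sample.
moe = 1.96 * (rate * (1 - rate) / len(sample)) ** 0.5
print(f"Margin of error: +/- {moe:.1%}")  # +/- 5.9%
```

The sketch also makes plain why SAMHSA's checklist asks whether every outlet has a known probability of selection: that property holds only if `outlet_list` is accurate and complete.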
Accurately identifying the population of vending machine outlets accessible to youth in a state is also important, according to SAMHSA’s fiscal year 1997 report on Synar implementation and other documents, because vending machines have been a major source that children use to obtain tobacco products. In our review of the state data that SAMHSA provided from SAPT block grant applications for fiscal year 1999, we found that of the 37 states reporting that they inspected vending machine outlets, 11 did not report the population of vending machines accessible to youth in their states as SAMHSA requires. (See app. I.) Further, our review of a few block grant applications showed that states reported inspecting vending machine outlets when they found them during random inspections of over-the-counter outlets. Some states have had difficulty developing accurate and complete lists of vending machine outlets, in particular, because many of the machines are privately owned and their portability makes them difficult to track. Officials we interviewed told us that over the years there has been a significant decline in vending machine tobacco outlets accessible to minors. However, an NGA representative said that vending machines are and will continue to be a source of tobacco products for minors in some states. The results of a 1999 national survey of middle school and high school students’ access to cigarettes show that vending machines continue to be a source of tobacco products for youth, particularly middle school students. For example, when students were asked where, during the past 30 days, they bought their last pack of cigarettes, 2.7 percent of the high school students reported that their purchase was from vending machines, whereas 12.9 percent of middle school students did. SAMHSA officials told us that states need to be more aggressive in identifying tobacco outlets. 
An NGA study of best practices in implementing and enforcing Synar requirements notes that programs that require tobacco retailers to be licensed provide an effective source of information for identifying the outlets. Not all states, however, require tobacco outlets to be licensed. SAMHSA officials said that they believe tobacco licensure programs that require the identification of every tobacco outlet and regular license renewals afford states the best opportunity to develop accurate and complete statewide lists of over-the-counter and vending machine tobacco outlets. However, in comments on a draft of this report, HHS stated that SAMHSA does not have the authority to license tobacco retailers or require states to enact legislation mandating tobacco retailer licensing or registration. The quality of states’ violation rates can be particularly affected by the age of the minors used to inspect the tobacco outlets. Research shows that minors who are younger than 16 years of age are much less successful at purchasing tobacco products than older youths. Research also shows, and SAMHSA officials told us, that a small difference in the age of minor inspectors can make a significant difference in a state’s violation rate because the younger the inspectors appear, the less likely store clerks are to sell them tobacco. As a result, using minors younger than 16 could bias the outcome of state inspections by lowering the violation rate. Even though SAMHSA officials are aware of the research results, they allow states to include minors younger than 16 in their inspection protocols. SAMHSA’s inspection protocol guidance recommends that states use 15- and 16-year-olds as inspectors because minors younger than 15 are likely to look very young, and their appearance could discourage some retailers from selling them tobacco products. Nearly all states report using as inspectors youth from two age cohorts: 14- and 15-year-olds and 16- and 17-year-olds. 
For fiscal year 1999, 43 states reported using 14- and 15-year-olds as inspectors, and 16 of these states used them in more than 50 percent of their inspections. (See app. II.) Five of the 16 states (Georgia, New Hampshire, North Carolina, Tennessee, and Texas) reported the highest percentages of inspections conducted by 14- and 15-year-olds--73 percent to 94 percent. (See fig. 1.) Four of the 5 states also reported that a large proportion of their fiscal year 1998 inspections were conducted by 14- and 15-year-olds. Tennessee and Texas officials told us they did not purposely try to recruit large numbers of 14- and 15-year-olds; they said that they selected those minors who were willing to participate in the inspections. Inspection data supporting the violation rates for North Carolina and Tennessee show that inspections conducted by 14- and 15-year-olds resulted in lower purchase rates than inspections by 16- and 17-year-olds. For example, Tennessee reported that 14- and 15-year-old inspectors were able to purchase tobacco 16 percent of the time, whereas the 16- and 17-year-olds had a 51-percent purchase rate. New York state officials’ analysis of their state inspection results for fiscal year 2000 showed that 14- and 15-year-olds were able to purchase tobacco 8 percent of the time, whereas the 16- and 17-year-olds had a 21-percent purchase rate. At the time of our review, SAMHSA officials told us that they had not thoroughly examined states’ use of 14- and 15-year-old inspectors and the potential impact on retailer violation rates, but they acknowledged that this will require a more comprehensive evaluation. Another age-related inspection protocol procedure that can affect retailer violation rates is whether minor inspectors are told to carry valid identification on inspections and required to show it when asked. The research on this issue is mixed. 
Some research suggests that when minors are asked to show identification, retailers are less likely to sell them tobacco products. Other research suggests, and some state officials told us, that the likelihood of an illegal sale is greater if minors show identification when asked than if identification is not shown. As a result, having and showing identification when asked could potentially result in an illegal tobacco sale and a higher retailer violation rate. About half of the illegal sales in one state’s inspections occurred after the minor showed proof of age. Research suggests that some clerks may sell minors tobacco products because they have difficulty quickly determining an individual’s age from the date of birth on his or her identification. According to HHS, because of safety concerns, SAMHSA recommends that minors not carry identification but answer truthfully about their age if asked by a store clerk. Research also suggests that the sex of the minor inspector can bias the inspection result. For example, one researcher who, unlike previous researchers, controlled for the effects of both the age and sex of the inspector found that girls were able to purchase tobacco at a 39-percent rate, compared with a 28-percent rate for boys. SAMHSA approved four states’ retailer violation rates for fiscal years 1998 and 1999 that were inaccurately calculated because they included inspections in which the ages of minor inspectors and the inspection results were not known. SAMHSA requires states to report the ages of minor inspectors in part to confirm that the ages are within an acceptable range. When the ages of minors used in state inspections are unknown, SAMHSA officials told us that they consider the inspections invalid, and the inspection results should be excluded from the violation rate computation. 
However, we found that SAMHSA approved and published violation rates reported by Florida, Kansas, Louisiana, and Minnesota that included inspection results in which the ages of the minor inspectors were unknown. Moreover, three of these states’ violation rates included some inspections in which neither the ages of the minors nor the outcomes of the inspections were known. Had the invalid inspections been excluded, the violation rates for Florida, Louisiana, and Minnesota would have been higher. (See table 1.) However, none of the four states would have missed its target based on the recalculated rate. SAMHSA officials said that there were reasons for accepting the states’ violation rates. For example, they said that they did not exclude Kansas’ invalid inspections because the state provided the outcomes of the inspections. Even though Florida’s retailer violation rate was based entirely on inspections in which the ages of the inspectors and the outcomes by age were unknown, SAMHSA accepted the rate because of the large number of inspections the state conducted and its low reported violation rate. SAMHSA did not ensure that the accuracy of the data that states used to support their fiscal year 1998 and 1999 estimates of retailer violation rates was verified. SAMHSA reviewed the information states reported in their SAPT block grant applications. However, SAMHSA relied on the states to assess the quality of the data they used to develop their rates, even though the potential 40-percent reduction in a state’s block grant for not meeting annual violation-rate goals could provide an incentive for some states to report artificially low violation rates. To improve their oversight, during the time of our review, SAMHSA officials completed pilot testing of their state data review protocol and began visiting states to evaluate their systems of data collection and documentation for Synar implementation. 
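The arithmetic behind recalculating a rate after excluding invalid inspections can be sketched briefly. The inspection records below are hypothetical; the sketch simply shows how dropping inspections with an unknown inspector age or unknown outcome changes the denominator and, when those inspections were effectively counted as non-violations, raises the reported rate.

```python
# Hypothetical inspection records: 'age' is the minor inspector's age and
# 'sale' is whether an illegal sale occurred (None = outcome unknown).
inspections = [
    {"age": 15, "sale": True},
    {"age": 16, "sale": False},
    {"age": None, "sale": False},  # invalid: inspector's age unknown
    {"age": 17, "sale": True},
    {"age": None, "sale": None},   # invalid: age and outcome both unknown
    {"age": 16, "sale": False},
]

def violation_rate(records):
    # Count only inspections where an illegal sale is recorded as True.
    sales = sum(1 for r in records if r["sale"])
    return sales / len(records)

valid = [r for r in inspections if r["age"] is not None and r["sale"] is not None]

print(f"Rate including invalid inspections: {violation_rate(inspections):.1%}")  # 33.3%
print(f"Rate with invalid ones excluded:    {violation_rate(valid):.1%}")        # 50.0%
```

This is why including invalid inspections in the computation, as in the four states noted above, can understate the true violation rate.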
The draft review protocol SAMHSA officials said they were using includes questions about the states’ sampling and inspection procedures and practices that could help in assessing the quality of the data states used to develop violation rates. SAMHSA officials said that because of resource constraints, they plan to conduct these reviews approximately once every 3 to 4 years for each state. Differences in how states implement their inspection protocols, along with data quality weaknesses, limit the comparability of retailer violation rates across states. SAMHSA does not require all states to use the same set of protocols when conducting inspections of tobacco outlets. Although SAMHSA provides inspection guidelines, each state is allowed the flexibility to develop inspection protocols in keeping with its own circumstances, including restrictions in state law. Given this flexibility, inspection protocols are implemented inconsistently across states, which makes comparisons of retailer violation rates difficult. States’ use of different ages and sexes of minor inspectors and different criteria in determining what type of tobacco sale is a violation punishable under state law can limit comparisons of violation rates across states. For example, the ages of minor inspectors are an issue in comparisons because some states use higher proportions of younger inspectors than other states and younger minors tend to have lower purchase rates than older minors. Also, the states’ use of minor boys and girls as inspectors in different proportions can limit comparisons of violation rates because females tend to have higher tobacco purchase rates than males. Another inspection procedure that can limit the comparability of violation rates between states is whether the state uses the “consummated” or the “unconsummated” buy protocol. 
In a consummated buy, the minor inspector completes the purchase and takes possession of the tobacco product, whereas in an unconsummated buy the minor inspector attempts or asks to purchase the tobacco product and the clerk accepts payment, but the inspector leaves without taking the product. Some states use the unconsummated-buy protocol to protect minor inspectors, who cannot legally purchase tobacco products. For Synar inspections, if a sale is made, it is considered a successful attempt, or a violation, regardless of which protocol is used. However, according to SAMHSA and other officials we interviewed, choice of the buy protocol can affect a state’s violation rate. When the unconsummated-buy protocol is used, there could be a question of whether a violation of state law actually occurred if the minor did not take possession of the tobacco product. Some merchants are challenging in court the penalties states assess under state law for violations based on unconsummated buys. If these challenges are upheld or not resolved in those states, merchants may continue to sell tobacco products to minors because they would not expect a penalty for their actions and the states’ retailer violation rates could be adversely affected. This inconsistent application of the consummated- and unconsummated-buy protocols by states and the potential effect on retailer violation rates could limit comparison of rates across states. SAMHSA’s fiscal year 1999 data show that 39 states used the consummated-buy protocol and 12 states used the unconsummated-buy protocol when inspecting tobacco outlets. (See app. I.) Comparing retailer violation rates across states could be useful in determining national progress toward the goal of reducing minors’ access to tobacco products and in identifying best practices used by states that seem to be making better progress than others. 
Because of the lack of uniform inspection protocols across states, however, SAMHSA officials and others do not suggest making such comparisons. A little more than half the states reported in their fiscal year 1999 block grant applications that violators of youth tobacco access laws were penalized as part of the state’s enforcement strategy. All states have laws that allow the use of penalties, but not all states reported that penalties were assessed, according to SAMHSA data. The states reported using a variety of enforcement actions, such as warnings, fines, and suspensions of retailers’ licenses. SAMHSA officials said that in their review of state-reported information for Synar compliance, they look for evidence of active enforcement, such as the assessment of penalties, and make inquiries to state officials when the evidence is not apparent. However, SAMHSA officials also said that ensuring state enforcement of youth tobacco access laws has not been their primary focus because they were relying on FDA’s enforcement activities, which included assessing monetary civil penalties against retailers. The officials said that because of the discontinuation of FDA’s program, they need to examine states’ evidence of active enforcement more closely to ensure that states are enforcing their youth tobacco access laws. Research shows that enforcement strategies that include the assessment of penalties are successful at reducing minors’ access to tobacco products. In our review of SAMHSA’s summary data for fiscal year 1999, we found that 28 states reported specific evidence of having imposed penalties for violations of state youth tobacco access laws. (See app. I.) These penalties included fines against retailers and sales clerks and the suspension or revocation of retailers’ licenses. Seven states reported that they took other law enforcement actions against violators, such as issuing warning letters or citations. 
All states have laws that allow the assessment of penalties, but not all states reported using penalties as part of their enforcement strategies for fiscal year 1999. Although states have the flexibility to determine which enforcement strategies are appropriate for compliance with Synar, SAMHSA maintains that state laws are more successful in changing retailer behavior regarding selling tobacco to minors when penalties are used, and it encourages states to use them. Florida is an example of a state that has adopted a statewide enforcement strategy that penalizes violators of its youth tobacco access laws. In its fiscal year 1998 application, Florida reported that 3 percent of the merchants who were found out of compliance with the state’s law had their licenses revoked or suspended and 93 percent were assessed fines ranging from $250 to $1,000. SAMHSA officials said they look for evidence of active enforcement, such as the assessment of penalties, in state-reported information on Synar compliance and in some cases ask the state for an explanation when the evidence is not apparent. SAMHSA officials also said, however, that prior to the discontinuance of the FDA tobacco control program in March 2000, they relied on FDA to ensure enforcement of requirements to reduce youth access to tobacco products. As a regulatory agency, FDA took an approach different from SAMHSA’s in prohibiting the sale of tobacco products to minors. FDA’s discontinued tobacco control program focused on enforcement and required that penalties be assessed against repeat violators of FDA’s regulation. FDA contracted with states to conduct inspections of tobacco outlets. FDA’s contract stipulated that each state conduct at least 375 unannounced monthly compliance inspections of merchants that sold tobacco products over the counter, and states were instructed to re-inspect violators. FDA’s goal was to have compliance checks performed throughout the entire state. 
If an inspection resulted in a violation, the state was expected to re-inspect the establishment within 90 days and continue inspections until compliance was achieved. For the first violation, the retailer would receive a warning letter. For subsequent offenses, civil monetary penalties were to be assessed ranging from $250 for a second offense to $10,000 for a fifth offense. At the time the program was discontinued, FDA had imposed a maximum penalty of $1,500 and collected an estimated total of $1 million. Although states were allowed to use FDA contract funds for enforcement, SAMHSA officials said that states are permitted to use SAPT block grant funds for enforcement activities only if a citation is issued for a violation at the time of the inspection. States are permitted to use SAPT block grant funds to develop sample designs and conduct inspections of tobacco outlets. SAMHSA officials told us that states would need federal funds to support broader enforcement activities now that FDA’s program has been discontinued. Although NGA recognizes the importance of funding enforcement, an NGA representative told us that the association is not currently advocating additional federal funding for state enforcement activities. In commenting on this report, HHS noted that state funds and tobacco settlement funds are other possible sources of funding for enforcement activities. Officials for SAMHSA, FDA, and a state we consulted told us that they believe that without FDA’s enforcement of its regulation against the sale of tobacco products to minors, some tobacco retailers will become more lax and sales to minors will increase. FDA officials also said they do not believe tobacco retailers will change their behavior without knowing that violations will result in penalties. 
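The escalation logic of FDA’s discontinued program, as described above, amounts to a simple state machine: a warning letter for a first violation, civil monetary penalties for repeat violations, and re-inspection within 90 days after any violation until compliance is achieved. The sketch below is our own illustration (the class and function names are invented), and it deliberately omits specific dollar amounts, since the source gives only the $250 and $10,000 endpoints of the penalty range.

```python
from dataclasses import dataclass

@dataclass
class Retailer:
    """Tracks a retailer's running count of violations across inspections."""
    name: str
    offenses: int = 0

def process_inspection(retailer: Retailer, sold_to_minor: bool) -> str:
    """Apply the escalation described in the text: a warning letter for a
    first violation, a civil monetary penalty for repeat violations, and a
    re-inspection within 90 days after any violation."""
    if not sold_to_minor:
        return "compliant; no further action"
    retailer.offenses += 1
    if retailer.offenses == 1:
        return "warning letter; re-inspect within 90 days"
    return "civil monetary penalty; re-inspect within 90 days"

shop = Retailer("Corner Store")
print(process_inspection(shop, True))   # warning letter; re-inspect within 90 days
print(process_inspection(shop, True))   # civil monetary penalty; re-inspect within 90 days
print(process_inspection(shop, False))  # compliant; no further action
```

The design point is that the retailer, not the individual inspection, carries the state: only by tracking offenses across re-inspections can penalties escalate for repeat violators.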
SAMHSA officials said that they have not focused as much on state enforcement actions under Synar implementation because of their reliance on FDA to enforce its tobacco control regulation, which included penalties against retailers. They said that because FDA’s program was discontinued in March 2000, they see the need to ensure that states show evidence of active enforcement of their laws. Research suggests that enforcement strategies that incorporate inspections of all retailers followed by penalties and re-inspections are successful in reducing the availability of tobacco to minors. The components of an effective enforcement strategy include an enforceable law with penalties sufficiently severe to deter potential violators, according to the research. NGA concluded from its interviews with representatives of state agencies on best practices in enforcing Synar that the single most effective factor in reducing tobacco access to minors is the establishment of a statewide inspection and enforcement program that holds merchants and clerks accountable for their actions. Some state officials told us they believe that aggressive penalties assessed against the retailer can be very effective in changing merchant behavior. New York, for example, plans to begin confiscating merchants’ lottery licenses for failure to comply with laws prohibiting the sale of tobacco products to minors. The goal of the Synar amendment is to help reduce the sale of tobacco products to minors through state laws that make it illegal for retailers to sell them tobacco products. States are responsible for enacting and enforcing laws that restrict youth access to tobacco products and for reporting the progress in retailer compliance with Synar requirements. However, state implementation of Synar and SAMHSA’s oversight raise concern about the quality of state estimates of the percentage of retailers that sell tobacco products to minors. 
These concerns center on the use of inaccurate lists of retail outlets from which to draw a sample to inspect; the use of inspection protocols among the states that could bias retailer violation rates and limit their comparability, such as the age of minor inspectors; the acceptance of violation rates that contain invalid inspection results; and the reliance on states to validate their inspection results without ensuring that the supporting data are verified. SAMHSA recently began visiting states to check their inspection practices, but more could be done to improve the quality of the inspection results and enhance the usefulness of retailer violation rates in evaluating national progress toward reducing minors’ access to tobacco products. The states have flexibility in developing strategies to help enforce their youth tobacco access laws. According to researchers and state and SAMHSA officials, assessing penalties for selling tobacco to minors, as done under FDA’s program, can be an effective enforcement tool for reducing minors’ access. For fiscal year 1999, a little more than half the states reported evidence of using penalties to help enforce their laws. In its oversight of state enforcement activities, SAMHSA has decided to more closely examine states’ use of different enforcement strategies, including the assessment of penalties as sanctions against violators of youth tobacco access laws. 
To help ensure the quality of states’ estimates of tobacco retailer violation rates under the Synar amendment and to make the rates more comparable across states, we recommend that the Secretary of HHS direct the Administrator of SAMHSA to help states improve the validity of their samples by working more closely with them in developing ways to increase the accuracy and completeness of the lists of tobacco outlets from which they draw random samples for inspections; revise the inspection protocol guidance to better reflect research results, particularly regarding the ages of minor inspectors, and work with states to develop a more standardized inspection protocol consistent with state law, and more uniform implementation across states; and ensure that all states’ retailer violation rates exclude invalid inspections, particularly those in which the ages of minors and outcomes of inspections are unknown. We obtained comments on a draft of this report from HHS. (See app. III for agency comments.) In general, HHS agreed with our findings and recommendations and found our report to be useful guidance for future changes in Synar implementation. HHS disagreed with our recommendation that SAMHSA require more standardization in inspection protocol development consistent with state laws and more uniform implementation across states. HHS stated that this action would accomplish very little in the way of meaningful comparisons of violation rates across states without federal legislation requiring states to modify their practices and possibly lead to changes in state laws pertaining to inspection protocols. We believe, however, that federal legislation may not be necessary. There are consistencies that currently exist in inspection protocols among many of the states, such as in the ages of minors used to conduct inspections. 
Identifying other key inspection protocols that states may be able to adopt, such as whether minor inspectors should carry identification, would provide a core group of protocols that could enhance comparisons of retailer violation rates across states. In light of HHS’ comment, however, we revised our recommendation to have the Secretary of HHS direct SAMHSA to collaborate with states in developing more standardization in protocols and uniform implementation across states. HHS officials also provided comments intended to increase the report’s accuracy. Where appropriate, we have incorporated HHS’ suggested changes and technical comments in this report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or James O. McClyde at (202) 512-7152. Darryl W. Joyce, Paul T. Wagner, Jr., and Arthur J. Kendall made key contributions to this report.

[Appendix table: type of law enforcement action taken, by state, including warnings, fines, summonses, citations, license suspensions and revocations, and misdemeanor charges. Table notes: State laws or regulations either banned tobacco vending machines or restricted youth access; according to SAMHSA officials, states that have laws that restrict tobacco vending machines are not required to inspect them. For some states, the specific law enforcement action taken was not reported.]

Every day, about 3,000 young people become regular smokers. It is estimated that one-third of them will die from smoking-related diseases. 
If children and adolescents can be prevented from using tobacco products, they are likely to remain tobacco-free for the rest of their lives. In 1992, Congress enacted legislation, known as the Synar amendment, to reduce the sale and distribution of tobacco products to individuals under the age of 18. States are required to enforce laws that prohibit tobacco sales to minors, conduct random inspections of tobacco retail or distribution outlets to estimate the level of compliance with Synar requirements, and report the results of these efforts to the Department of Health and Human Services (HHS). The Synar amendment and regulation are the only federal requirements that seek to prohibit the sale and distribution of tobacco products to minors. GAO found that weaknesses in the states' implementation of Synar and in HHS oversight may be adversely affecting the quality and comparability of state-reported estimates of the percentage of retailers that violate laws prohibiting tobacco sales to minors. First, some states used inaccurate and incomplete lists of over-the-counter and vending machine tobacco outlets from which to select samples for inspection, which affects the estimated statewide violation rate. Second, states allowed the use of minors younger than 16 as inspectors, even though research suggests that using such minors can artificially lower violation rates. Third, HHS approved a few states' reported violation rates even though the rates included inspection results that were invalid because the ages of the inspectors and the outcomes of the inspections were unknown. Fourth, HHS relied on states to validate their own inspection results with limited verification of the accuracy of state data, even though the potential reduction in a state's block grant award for not meeting annual violation-rate goals could be an incentive for states to report artificially low rates. 
A little more than half the states reported for fiscal year 1999 that they used fines and suspension or revocation of retailers' licenses to penalize violators of youth tobacco access laws as part of their enforcement strategy. States also reported issuing warning letters and citations. HHS requires states to report evidence of actions taken to enforce state laws but does not require the use of penalties as an enforcement tool. Research shows that penalties reduce minors' access to tobacco products.
The mission of the Customs Service is to ensure that all goods and persons entering and exiting the United States do so in compliance with all U.S. laws and regulations. It does this by (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. During fiscal year 1997, Customs collected $22.1 billion in revenue at more than 300 ports of entry, and it processed nearly 450 million passengers who entered the United States during the year. To accomplish its mission, Customs is organized into six business areas—trade compliance, outbound, passenger, finance, human resources, and investigations. Each business area is described below. The trade compliance business area includes enforcement of laws and regulations associated with the importation of goods into the United States. To enforce compliance with the trade laws and regulations, Customs (1) works with the trade community to promote understanding of applicable laws and regulations, (2) selectively examines cargo to ensure that only eligible goods enter the country, (3) reviews documentation associated with cargo entries to ensure that it is properly valued and classified, (4) collects billions of dollars annually in duties, taxes, and fees associated with imported cargo, (5) assesses fines and penalties for noncompliance with trade laws and regulation, and (6) manages the collection of these moneys to ensure that all trade-related debts due to Customs are paid and properly accounted for. The outbound business area includes Customs operations related to the enforcement of laws and regulations associated with the movement of merchandise and conveyances from the United States. To enforce compliance with these laws and regulations, Customs (1) selectively inspects cargo at U.S. 
ports to guard against the exportation of illegal goods, such as protected technologies, stolen vehicles, and illegal currency, (2) collects, disseminates, and uses intelligence to identify high-risk cargo and passengers, (3) seizes and accounts for illegal cargo, (4) assesses and collects fines and penalties associated with the exportation of illegal cargo, and (5) physically examines baggage and cargo at airport facilities for explosive and nuclear materials. In addition, the outbound business area includes collecting and disseminating trade data within the federal government. Accurate trade data are crucial to establishing reliable trade statistics on which to base trade policy decisions and negotiate trade agreements with other countries. By the year 2000, Customs estimates that exports will be valued at $1.2 trillion, as compared with $696 billion in 1994. The passenger business area includes processing all passengers and crew of arriving and departing (1) air and sea conveyances and (2) noncommercial land vehicles and pedestrians. In fiscal year 1997, Customs processed nearly 450 million travelers; by the year 2000, it expects almost 500 million passengers to arrive in the United States annually. Many of Customs' passenger activities focus on illegal immigration and drug smuggling and are coordinated with other federal agencies, such as the Immigration and Naturalization Service and the Department of Agriculture's Animal and Plant Health Inspection Service. Activities include targeting high-risk passengers, which requires timely and accurate information, and physically inspecting selected passengers, baggage, and vehicles to determine compliance with laws and regulations. The finance business area includes asset and revenue management activities. Asset management consists of activities to formulate Customs' budget; properly allocate and distribute funds; and acquire, manage, and account for personnel, goods, and services. 
Revenue management encompasses all Customs activities to identify and establish amounts owed Customs, collect these amounts, and accurately report the status of revenue from all sources. Sources of revenue include duties, taxes, user fees, and forfeited currency and property. The revenue management activities interrelate closely with the revenue collection activities in the trade compliance, outbound, and passenger business areas. The human resources business area is responsible for filling positions, providing employee benefits and services, training employees, facilitating workforce effectiveness, and processing personnel actions for Customs' 18,000 employees and managers. The investigations business area includes activities to detect and eliminate narcotics and money laundering operations. Customs works with other agencies and foreign governments to reduce drug-related activity by interdicting (seizing and destroying) narcotics, investigating organizations involved in drug smuggling, and deterring smuggling efforts through various other methods. Customs also develops and provides information to the trade and carrier communities to assist them in their efforts to prevent smuggling organizations from using cargo containers and commercial conveyances to introduce narcotics into the United States. To carry out its responsibilities, Customs relies on information systems and processes to assist its staff in (1) documenting, inspecting, and accounting for the movement and disposition of imported goods and (2) collecting and accounting for the related revenues. Customs' Office of Information and Technology (OIT) has a fiscal year 1998 budget of about $147 million for information management and technology activities. Customs expects its reliance on information systems to increase as a result of its burgeoning workload. 
For 1995 through 2001, Customs estimates that the annual volume of import trade between the United States and other countries will increase from $761 billion to $1.1 trillion. This will result in Customs processing an estimated increase of 7.5 million commercial entries— from 13.1 million to 20.6 million annually—during the same period. Recent trade agreements, such as the North American Free Trade Agreement (NAFTA), have also increased the number and complexity of trade provisions that Customs must enforce. Customs recognizes that its ability to process the growing volume of imports while improving compliance with trade laws depends heavily on successfully modernizing its trade compliance process and its supporting automated systems. To speed the processing of imports and improve compliance with trade laws, the Congress enacted legislation that eliminated certain legislatively mandated paper requirements and required Customs to establish the National Customs Automation Program (NCAP). The legislation also specified certain functions that NCAP must provide, including giving members of the trade community the capability to electronically file import entries at remote locations and enabling Customs to electronically process “drawback” claims. In response to the legislation, Customs began in 1994 to reorganize the agency, streamline operations, and modernize the information systems that support operations. As computer-based systems have become larger and more complex over the last decade, the importance of and reliance on information systems architectures have grown steadily. 
These comprehensive "construction plans" systematically detail the full breadth and depth of an organization's mission-based "modus operandi" in (1) logical terms, such as defining business functions and providing high-level descriptions of information systems and their interrelationships, and (2) technical terms, such as specifying hardware, software, data, communications, security, and performance characteristics. Without an architecture to guide and constrain a modernization program, there is no systematic way to preclude either inconsistent system design and development decisions or the resulting suboptimal performance and added cost associated with incompatible systems. The Congress and the Office of Management and Budget (OMB) have recognized the importance of agency information systems architectures. The 1996 Clinger-Cohen Act, for example, requires Chief Information Officers (CIO) to develop, maintain, and facilitate integrated system architectures. In addition, OMB has issued guidance that, among other things, requires agencies' information systems investments to be consistent with federal, agency, and bureau architectures. OMB has also issued guidance on the development and implementation of agency information technology architectures. Treasury has also issued to its bureaus, including Customs, guidance on developing an information systems architecture. This guidance, known as the Treasury Information Systems Architecture Framework (TISAF), is also included in OMB's guidance. According to Treasury, TISAF is intended to help reduce the cost, complexity, and risk associated with information technology development and operations. In July 1997, Treasury issued additional guidance to complement TISAF. This guidance, which was finalized in September 1997, provides "how to" processes for developing an information systems architecture in accordance with TISAF. 
Customs has several efforts underway to develop and acquire new information systems and evolve (i.e., maintain) existing ones to support its six business areas. Customs’ fiscal year 1998 budget for information management and technology activities is about $147 million. Customs’ major information technology effort is its Automated Commercial Environment (ACE) system. In 1994, Customs began to develop ACE to replace its existing automated import system, the Automated Commercial System. ACE is intended to provide an integrated, automated information system for collecting, disseminating, and analyzing import-related data and ensuring the proper collection and allocation of revenues, totaling about $19 billion annually. According to Customs, ACE is planned to automate critical functions that the Congress specified when it established NCAP. Customs reported that it spent $47.8 million on ACE as of the end of fiscal year 1997. In November 1997, Customs estimated it would cost $1.05 billion to develop, operate, and maintain ACE over the 15 years from fiscal years 1994 through 2008. Customs plans to deploy ACE to all 342 ports that handle commercial cargo imports. Customs plans to develop and deploy ACE in multiple phases. According to Customs, the first phase, known as NCAP, is to be an ACE prototype. Customs currently plans to deploy NCAP in four releases. The first is scheduled to be deployed for field evaluation at three locations beginning in May 1998, and the fourth is scheduled for October 1999. Customs, however, has not adhered to previous NCAP deployment schedules. Specifically, implementation of the NCAP prototype slipped from January 1997 to August 1997 and then again to a series of four releases beginning in October 1997, with the fourth release starting in June 1998. Customs also has several other efforts underway to modify or enhance existing information systems that support its six business areas. 
For example, in fiscal year 1998, Customs plans to spend about $3.7 million to enhance its Automated Export System (AES), which supports the outbound business area and is designed to improve Customs’ collection and reporting of export statistics and to enforce export regulations. In addition, Customs plans to spend another $4.6 million to modify its administrative systems supporting its finance and human resource business areas. Examples of other systems that Customs plans to modify or enhance are the Automated Commercial System, the Treasury Enforcement and Communication System, and the Seized Asset and Case Tracking System. In May 1996, we reported that Customs was not prepared to select an architecture and develop ACE because it was not effectively applying critical management practices that help organizations mitigate the risks associated with modernizing automated systems and better position themselves for success. Specifically, Customs (1) lacked clear accountability for ensuring successful implementation of NCAP requirements, (2) selected an information systems architecture for ACE and other systems without first analyzing its business requirements, (3) lacked policies and procedures to manage ACE and other systems as investments, and (4) did not ensure that systems under development adhere to Customs’ own system development policies. As a result of our recommendations, Customs took the following actions. Assigned day-to-day responsibility for implementing NCAP to the Assistant Commissioner, Office of Information and Technology. Initiated an effort, with contractor assistance, to develop an enterprise information systems architecture. Designated an information technology investment review board (IRB) and hired a contractor to develop investment management policies and procedures. 
The contractor completed its work in mid-1997, and the agency is in the process of implementing and institutionalizing these information technology investment management processes and procedures. Revised its Systems Development Life Cycle (SDLC), conducted ACE cost-benefit analyses, instituted SDLC compliance reviews, and prepared a variety of ACE-related project plans. Customs also developed processes to ensure that SDLC compliance is an ongoing activity. In May 1997, we reported that significant weaknesses continue to be identified during audits of Customs' financial statements that hinder Customs' ability to provide reasonable assurance that sensitive data maintained in automated systems, such as critical information used to monitor Customs' law enforcement operations, are adequately protected from unauthorized access and modification. Since then, Treasury's Inspector General has reported that Customs' computer systems continue to be vulnerable to unauthorized access. Specifically, the Inspector General reported that security weaknesses could allow for unauthorized modification and deletion of application and systems software and data in Customs computer systems that support trade, financial management, and law enforcement activities. Treasury and Customs officials recognize that Customs' systems architecture is not complete and plan to complete it. For five of its six business areas (outbound, passenger, finance, human resources, and investigations), Customs' architecture does not (1) describe all the agency's business functions, (2) outline the information needed to perform the functions, and (3) completely identify the users and locations of the functions. Further, while the architecture and related documentation describe business functions and users and locations for one business area (trade compliance), they do not identify the information needs and flows for all the functions. 
Nonetheless, Customs has defined many characteristics of its information systems’ hardware, software, communications, data management, and security components. Because these characteristics are not based on a complete understanding of its enterprisewide functional and information needs, Customs does not have adequate assurance that its information systems will optimally support its ability to (1) fully collect and accurately account for billions of dollars in annual federal revenue and (2) allow for the expeditious movement of legal goods and passengers across our nation’s borders while preventing and detecting the movement of illegal goods and passengers. Reflecting the general consensus in the industry that large, complex systems development and acquisition efforts should be guided by explicit architectures, we issued a report in 1992 defining a comprehensive framework for designing and developing systems architectures. This framework divides systems architectures into a logical component and a technical component. The logical component ensures that the systems meet the business needs of the organization. It provides a high-level description of the organization’s mission and target concept of operations; the business functions being performed and the relationships among functions; the information needed to perform the functions; the users and locations of the functions and information; and the information systems needed to support the agency’s business needs. An essential element of the logical architecture is the definition of the component interdependencies (e.g., information flows and interfaces). The technical component ensures that systems are interoperable, function together efficiently, and are cost-effective over their life cycles (including maintenance costs). 
The technical component details specific information technology and communications standards and approaches that will be used to build systems, including those that address critical hardware, software, communications, data management, security, and performance characteristics. TISAF, Treasury’s departmentwide architecture framework, is generally consistent with our framework. According to TISAF, a complete architecture has the following four components, each representing a different perspective or view of the agency: Functional: A representation of what the organization does (i.e., its mission and business processes) and how the organization can use information systems to support its business operations. Work: A description of where and by whom information systems are to be used throughout the agency. Information: A description of what information is needed to support business operations. Infrastructure: A description of the hardware and “services” (e.g., software and telecommunications) needed to implement information systems across the agency. TISAF’s functional, work, and information components together form the logical view of the architecture, while its infrastructure represents the technical view of the architecture. To develop and evolve systems that effectively support business functions, a top-down process must be followed. The logical architecture (e.g., business functions and information flows) is defined first and then used to specify supporting systems (e.g., interfaces, standards, and protocols). Treasury endorses this top-down approach. Treasury officials responsible for developing and implementing TISAF stated that development of the architecture begins with defining and describing the agency’s major business functions. 
Once this is accomplished, the agency can identify the relationships among the functions, the information needed to perform the functions, the users and locations of the functions, and the existing and needed applications and related information technology required to execute and support the business functions. According to Treasury guidance, the architecture’s infrastructure component (i.e., its systems specifications and standards) should be derived from the other three components. In addition, the guidance states that each element of the architecture must be integrated and traceable, and the relationships between them must be explicit. Customs does not have a complete systems architecture to effectively and efficiently guide and constrain the millions of dollars it invests each year in developing, acquiring, and maintaining the information systems that support its six business areas. In summary, for five of Customs’ six business areas (outbound, passenger, finance, human resources, and investigations), the architecture neither defines all critical business functions nor identifies all information needs (including information security) and information flows within and among the business areas. For the sixth business area (trade compliance), Customs has defined all the business functions and users and work locations and some, but not all, of the information and data needs and flows. With respect to the business functions, Customs’ architecture provides descriptions of only 29 of 79 collective functions in its six business areas. The architecture does not describe the other 50 functions in sufficient detail to understand what they are, how they relate, who will perform them, where they will be performed, what information they will produce or consume, and how the information should be handled (i.e., captured, stored, processed, managed, distributed, and protected). Table 1 summarizes by business area the number of functions defined in the architecture. 
Examples of undefined functions in the outbound, passenger, investigations, and human resources business areas are as follows: Outbound: The architecture names “examine cargo” and “seize and process cargo” as 2 of the 13 functions in this business area. However, the architecture does not describe how to examine cargo, what cargo to examine, when to examine cargo, what information/data is needed to examine cargo, how the results of the cargo examination are used and by whom, or how cargo examination data should be protected. Similarly, the architecture does not describe when cargo will be seized and by whom, what criteria are used to seize cargo, how cargo will be seized and accounted for, or what information is required to account for the seized cargo (e.g., date of seizure, company name, and commodity). Passenger: The architecture names “identify compliance target” and “process non-compliant passengers/conveyances” as 2 of the 13 functions in this business area. However, the architecture does not describe how targets are identified, who identifies targets, how target information is disseminated, what information is collected to determine compliance, or how target information needs to be protected. Likewise, the architecture does not define compliant passenger/conveyance, how passengers are processed and by whom, or where passengers/conveyances are processed. Investigations: The architecture names “perform interdiction” as 1 of the 10 functions in this business area. However, the architecture does not describe how an interdiction is conducted, who conducts interdictions, what criteria are used to identify potential passengers or cargo to interdict, what happens to the seized persons or cargo, or how interdiction information needs to be protected. Human Resources: The architecture names “manage internal service programs” as 1 of the 22 functions in this business area. 
However, the architecture does not describe what services are provided and by whom, who is eligible to receive the services, or where the potential recipients are located. Within the trade compliance business area, even though Customs’ architecture does not define 10 of 15 trade compliance functions, Customs has described these 10 business functions, the relationships among them, and the work to be performed within each function (including who will perform the work and where it will be performed) in documents other than the architecture. Further, Customs has specified the data needed to support some, but not all, of the trade compliance functions. For example, Customs identified key information sources (such as cargo manifests and summary declarations) associated with NCAP, the ACE prototype that covers a subset of trade compliance activities, and specific data elements associated with each information source. Customs, however, has not defined the information/data needs, including security, and information/data flows among its six business areas. With respect to information security in particular, Customs’ architecture does not (1) specify functional requirements for enterprisewide security, (2) include a security concept of operations that describes how Customs will operate (e.g., what controls will be used) to satisfy these requirements, or (3) include a security subarchitecture that specifies how these controls will be implemented, certified, and accredited and how the controls’ operational effectiveness will be validated. Given that computer security continues to be a long-standing problem at Customs, this issue is particularly troubling. 
In our audits of Customs’ fiscal year 1992 and 1993 principal financial statements, we stated that Customs’ controls to prevent and detect unauthorized access and intentional or inadvertent unauthorized modifications to critical and sensitive data and computer programs were ineffective, thereby jeopardizing the security and reliability of the operations central to Customs’ mission. While Customs has since taken meaningful steps toward correcting these access problems, they still remain. According to the Treasury Inspector General’s report on Customs’ fiscal years 1997 and 1996 financial statements, computer security weaknesses continue to exist that could allow for unauthorized modification and deletion of application and systems software and data in Customs’ systems supporting the trade, financial management, and law enforcement activities. Until Customs addresses these weaknesses, it will not know the full extent of inter- and intra-business area functional and informational needs and dependencies and thus cannot develop, acquire, and maintain supporting information systems that optimally support the agency’s operations and activities. Moreover, until these interdependencies among and within business areas have been fully analyzed and defined and an approach for securing the associated information has been established, the opportunities for incompatibilities and duplications among systems and the information they process and share increase, as do the opportunities for unauthorized access and modification of data. Such opportunities jeopardize, in turn, the completeness, consistency, and integrity of the data Customs uses and publishes. 
Given the importance of reliable data to Customs' (1) billion-dollar revenue collection mission, (2) trade statistics used in developing trade policy and negotiating trade agreements, and (3) efforts to prevent and detect the illegal movement of goods and services across our nation's borders, such risks must be effectively addressed through an enterprise systems architecture. With respect to the infrastructure or technical component of Customs' architecture, Customs has specified much of the information that Treasury guidance states should be included in this component (e.g., standards for system and application software, communication interfaces, and hardware). However, as noted previously, this component is not based on a complete analysis of Customs' functional and information needs. For example, the architecture does not address information security requirements, yet its infrastructure specifies network encryption and remote access server products. Because it specified these products without knowing the business needs they support, Customs does not have adequate assurance that these products are needed or that they satisfy its true business needs, minimally or optimally. That is, the list of products cited may be either unnecessary or insufficient to support its real business needs. Experience has shown that attempting to define and build major systems without first completing a systems architecture unnecessarily increases the cost and complexity of these systems. For example, we reported that FAA's lack of a complete architecture resulted in incompatibilities among its air traffic control systems that (1) required higher-than-need-be system development, integration, and maintenance costs and (2) reduced overall system performance. 
Without having architecturally defined requirements and standards governing information and data structures and communications, FAA was forced to spend over $38 million to acquire a system dedicated to overcoming incompatibilities between systems. According to a Customs contractor, Customs is also experiencing such inefficiencies and unnecessary costs because it lacks an architecture. Specifically, this contractor reported that in the absence of an enterprise infrastructure, Customs' departments have developed and implemented incompatible systems, which has increased modernization risks and implementation costs. Customs awarded a contract in January 1997 to develop, among other things, a "technology architecture." However, Customs did not properly define the scope of this architecture, limiting it to deliverables associated with the infrastructure component without first completing the other components. Customs officials stated that they contracted for the infrastructure without first completing the higher levels of the architecture because they considered the infrastructure component to be the most important and urgently needed part of the architecture. This "bottom-up" approach is fundamentally inconsistent with government and industry architectural frameworks and guidance, including Treasury's, and has historically resulted in systems that do not effectively support business operations and waste time and money. For example, after the Internal Revenue Service (IRS) spent over $3 billion attempting to modernize its tax systems without a defined logical architecture, it could not demonstrate benefits commensurate with costs and was forced to significantly restructure the effort. Unless it completes its architecture before attempting to develop operational systems like ACE, Customs runs the risk of repeating failures like those that IRS experienced. 
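The top-down traceability that Treasury's guidance calls for, in which every infrastructure element is derived from defined business functions and their information needs, can be made concrete with a short sketch. The sketch below is purely illustrative: the class names, fields, and sample data are our own inventions, not drawn from TISAF or from Customs' actual architecture. It flags any infrastructure product that cannot be traced to a defined business function, the kind of gap the network encryption example illustrates.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessFunction:
    """One entry in the architecture's functional view (illustrative)."""
    name: str
    information_needs: list = field(default_factory=list)  # information view
    locations: list = field(default_factory=list)          # work view

@dataclass
class InfrastructureItem:
    """One entry in the infrastructure (technical) view (illustrative)."""
    name: str
    supports: list = field(default_factory=list)  # names of functions it supports

def untraced_items(functions, infrastructure):
    """Return infrastructure items that trace to no defined business function."""
    defined = {f.name for f in functions}
    return [item.name for item in infrastructure
            if not any(name in defined for name in item.supports)]

# Hypothetical data: one defined function, two infrastructure choices.
functions = [BusinessFunction("examine cargo",
                              information_needs=["cargo manifest"],
                              locations=["port of entry"])]
infrastructure = [
    InfrastructureItem("cargo-targeting database", supports=["examine cargo"]),
    InfrastructureItem("network encryption product", supports=[]),  # untraced
]

print(untraced_items(functions, infrastructure))  # -> ['network encryption product']
```

In an architecture with full traceability, this check would return an empty list; a nonempty result signals a product specified without a documented business need.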
Customs’ CIO officials have since acknowledged the need for a complete systems architecture and its value in information technology investment management. Accordingly, Customs is developing a statement of work for a TISAF-compliant architecture. With the help of a contractor, Customs plans to use whatever data each business area may have already developed relative to functional, work, and information needs as a starting point in completing an enterprise architecture. More specifically, by October 1998, Customs plans to identify the functional, work, and information components for each of the six business areas and identify the relationships and interdependencies across the business areas. Customs also plans to reevaluate its enterprise infrastructure. If an architecture is to be implemented effectively, institutional processes must be established to (1) require system compliance with the architecture, (2) assess and enforce such compliance, and (3) waive this requirement only on the basis of careful, thorough, and documented analysis showing that such deviation is warranted. According to Customs officials, architectural compliance will be assessed and enforced as Customs implements its recently defined investment management process. Under this process, Customs’ investment review board (IRB) uses four criteria in scoring competing investment options and allocating funding among them. The four criteria are risk (e.g., technical, schedule, and cost); strategic alignment (e.g., cross-functional benefits, linkage to Customs’ business plan, and compliance with legislative mandates); mission effectiveness (e.g., contributions to service delivery); and cost/benefit ratio (e.g., tangible and intangible benefits, and costs). Customs is in the process of implementing its investment management process for the fiscal year 1999 budget cycle. 
According to Customs’ investment management process, investment compliance with the architecture is considered, but not required, under the technical risk criterion. As a result, the process does not preclude funding projects that do not comply with the enterprise architecture and does not require that deviations from the architecture be rigorously justified. According to Customs officials, while architectural compliance is not an explicit criterion in the process, it will be considered and documented as part of the IRB funding decisions. Without an effective, well-defined process for enforcing the architecture, Customs runs the risk that unjustified deviations from the architecture will occur, resulting in systems that do not meet business needs, are incompatible, perform poorly, and cost more to develop, integrate, and maintain than they should. For example, we reported that FAA’s lack of an enforced systems architecture for its air traffic control operations resulted in the use of expensive interfaces to translate different data communication protocols, thus complicating and slowing communications, and the proliferation of multiple application programming languages, which increased software maintenance costs and precluded sharing software components among systems. Customs’ incomplete enterprise information systems architecture and limitations in its plans for enforcing compliance with an architecture once one is completed impair the agency’s ability to effectively and efficiently develop or acquire operational systems, such as ACE, and to maintain existing systems. 
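The enforcement gap described above can be illustrated with a minimal sketch contrasting the two approaches: compliance folded into a risk score, which cannot preclude funding a noncompliant project, versus compliance as an explicit gate that only a documented waiver can open. The four criteria names come from Customs' investment management process as described in this report; the equal weighting, 0-to-10 scores, and waiver flag are hypothetical.

```python
def score_investment(risk, strategic_alignment, mission_effectiveness,
                     cost_benefit, architecture_compliant, waiver_justified=False):
    """Score a competing investment option (each criterion rated 0 to 10).

    Architectural compliance is treated as a gate: a noncompliant project is
    rejected outright unless a documented waiver justifies the deviation.
    """
    if not architecture_compliant and not waiver_justified:
        return None  # funding precluded; compliance is a requirement, not a score input
    # Hypothetical equal weighting of the IRB's four criteria.
    return (risk + strategic_alignment + mission_effectiveness + cost_benefit) / 4

print(score_investment(7, 8, 6, 9, architecture_compliant=True))   # -> 7.5
print(score_investment(7, 8, 6, 9, architecture_compliant=False))  # -> None
print(score_investment(7, 8, 6, 9, architecture_compliant=False,
                       waiver_justified=True))                     # -> 7.5
```

Under the process the report describes, the second call would instead receive a numeric score, so a noncompliant project could still win funding; the gate makes the deviation impossible without a justified waiver.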
Until Customs (1) performs the thorough analysis and careful decision-making associated with developing all architectural components for interdependent business areas and (2) ensures that these results are rigorously enforced for its information system development, acquisition, and maintenance efforts, it runs the risk of wasting scarce time and money building and maintaining systems that do not effectively and efficiently support its business operations. To ensure that the Customs Service develops and effectively enforces a complete enterprise information systems architecture, we recommend that the Commissioner of Customs direct the Customs CIO, in consultation with the Treasury CIO, to follow through on plans to complete the enterprise information systems architecture. At a minimum, the architecture should (1) describe Customs’ target business operations, (2) fully define Customs’ interrelated business functions to support these target operations, (3) clearly describe information needs (including security) and flows among these functions, (4) identify the systems that will provide these functions and support these information needs and flows, and (5) use this information to specify the technical standards and related characteristics that these systems should possess to ensure that they interoperate, function together efficiently, and are cost-effective to maintain. We also recommend that the Commissioner direct the Deputy Commissioner, as Chairman of the IRB, to establish compliance with the architecture as an explicit requirement of Customs’ investment management process except in cases where careful, thorough, and documented analysis supports a waiver to this requirement. 
In commenting on a draft of this report, Customs agreed with our conclusions and recommendations and stated that it will (1) develop an enterprise systems architecture in accordance with TISAF and in close cooperation with Treasury during fiscal year 1998 and (2) strengthen enforcement of the architecture by being explicit that projects must comply with the architecture and requiring exceptions to be well justified. Additionally, Customs committed to not making major system investments prior to developing a TISAF-compliant architecture. Customs raised several additional matters related to systems architecture, none of which affect our conclusions and recommendations and thus are not discussed here. Customs’ comments and our responses are reprinted in appendix II. We are sending copies of this report to the Ranking Minority Members of the Subcommittee on Treasury and General Government, Senate Committee on Appropriations, and Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations. We are also sending copies to the Secretary of the Treasury, the Commissioner of Customs, and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this letter, please contact me at (202) 512-6240 or by e-mail at [email protected]. Major contributors to this report are listed in appendix III. To accomplish the first objective, we reviewed published architectural guidance, including the Treasury Information Systems Architecture Framework (TISAF), to identify key requirements. We also interviewed officials from Treasury’s Office of the Deputy Assistant Secretary for Information Systems and Chief Information Officer (the organization responsible for developing, implementing, and maintaining TISAF) to seek clarification and explanation of TISAF requirements. 
Further, we asked Customs to give us its enterprise information systems architecture and a mapping of all architectural documents to TISAF’s four architectural components—functional, work, information, and infrastructure. In response, Customs provided the documents listed in table I.1. Customs subsequently provided two additional architecture documents that it did not map to any TISAF component. The two additional documents were the ACE Technical Architecture and the Enterprise IT Architecture Strategy-Executive Overview. We then analyzed the architecture documents Customs provided to identify any variances with the TISAF requirements for each architectural component. We also interviewed Customs and supporting contractor officials to (1) seek clarification and explanation of the content of the architecture documents, (2) identify instances where the architectural documents did not satisfy TISAF requirements, and (3) solicit from Customs any additional evidence related to meeting TISAF requirements. To address the second objective, we reviewed Customs’ policies and procedures governing information technology investment management to determine architecture enforcement processes and interviewed Customs officials to determine organizational roles and responsibilities related to architecture development and enforcement. We also discussed with Customs officials any plans for changing the agency’s processes and organizational responsibilities for developing and enforcing the architecture. The following are GAO’s comments on the U.S. Customs Service’s letter dated March 31, 1998. 1. Our report neither states nor implies that Customs is unable to ensure the proper collection and allocation of revenues totaling about $19 billion annually. Rather, the report states that one of ACE’s key functions is to ensure the proper collection and allocation of revenues totaling about $19 billion annually. 2. 
Customs states that it began developing its enterprise systems architecture prior to Treasury’s publication of TISAF and is working with Treasury to develop a TISAF-compliant architecture. While these statements are true, they do not address our point that Customs’ architecture is insufficiently complete to be useful in guiding and constraining major systems investments. In order to optimize systems investments, the architecture must specify the six elements cited in our report. Furthermore, each element of the architecture must be built upon the preceding ones. Customs’ architecture does not include these elements for all business areas and, as we point out in our report, the systems and standards selected were not based on a complete analysis of Customs’ functional and information needs. We do not agree with Customs’ statement that an architecture is never completed. An architecture must be complete (i.e., include the six elements described in our report) to be useful in building or buying systems. This does not mean that a completed architecture cannot be modified to reflect changes in organizational missions and business functions or advancements in information technology products. This process of thoughtful and disciplined change (maintenance) is performed routinely on all information system components (e.g., architectures, documentation, software, and hardware). 3. While we agree that architectural models used in industry and government vary, all models consistently require the top-down, structured approach described in our report. Customs has not followed this approach and, therefore, does not have adequate assurance that its infrastructure (i.e., technical architecture) will meet its business requirements. Customs states that it has been cautioned against defining an architecture in too much detail lest the business process change before system development can proceed, but it does not clearly define what it means by too much detail.
Customs’ architecture neither defines all critical business functions nor identifies all information needs and flows within and among the business areas for five of its six business areas. As a result, rather than being overly detailed, it lacks the basic, required elements. 4. While the Treasury Inspector General (IG) gave Customs an unqualified opinion for fiscal year 1997, the IG also reported that Customs lacks adequate assurance that all revenue due is collected and compliance with other trade laws is achieved. Despite the progress that has been made, this lack of assurance has been a persistent issue since we reported on our audit of Customs’ financial statements for fiscal year 1992. 5. Customs states that we have inaccurately characterized the completeness of its architecture for the finance business area because certain finance business functions have been defined in various other analyses, reports, and strategies. This assertion reflects a misunderstanding of the purpose and value of a systems architecture. Our report concludes that Customs’ architecture for its finance business area (as well as all but one other business area) is substantially incomplete because it does not (1) describe all the agency’s business functions, (2) outline the information needed to perform the functions, or (3) completely identify the users and locations of the functions. Even if other documents contain fragments of the missing information for one business area, which we did not attempt to verify, this does not mitigate the need for a single, comprehensive, maintainable, and enforceable statement of architectural requirements and standards. Rona Stillman, Chief Scientist for Computers and Telecommunications Linda Koontz, Associate Director Randolph Hite, Senior Assistant Director Deborah A. Davis, Assistant Director Madhav Panwar, Senior Technical Advisor Mark Bird, Assistant Director
Pursuant to a congressional request, GAO reviewed the Customs Service's enterprise information systems architecture, focusing on determining whether: (1) the architecture is complete; and (2) Customs has processes and procedures to enforce compliance with the architecture.
GAO noted that: (1) Customs does not yet have a complete enterprise information systems architecture to guide and constrain the millions of dollars it spends annually to develop and acquire new information systems and evolve existing ones; (2) for five of its six business areas Customs' architecture does not: (a) describe all the agency's business functions; (b) define the information needed to perform the functions; and (c) completely identify the users and locations of the functions; (3) while the architecture and related documentation describe business functions, and users and work locations for the sixth business area, they do not identify all the information needs and flows for all the trade functions; (4) also, Customs has named certain technical standards, products, and services that it will use in building systems to support all its business areas; (5) however, Customs has not chosen these based on a complete description of its business needs; (6) the limitations in Customs' architecture are rooted in its decision to focus on defining the technical characteristics of its systems environment; (7) Customs' view does not include the logical characteristics of its enterprise system environment, which would enable it to define and implement systems that optimally support the agency's mission needs; (8) Customs plans to develop the architecture in accordance with Department of the Treasury architectural guidance; (9) specifically, Customs plans to define its functional, information, and work needs and their interrelationships across its six business areas and, in light of these needs and interrelationships, reevaluate the technical characteristics it has selected for its systems environment; (10) until Customs defines the logical characteristics of its business environment and uses them to establish technical standards and approaches, it does not have adequate assurance that the systems it plans to build and operationally deploy will effectively support the
agency's business needs; (11) Customs also has not developed and implemented effective procedures to enforce its architecture once it is completed; (12) Customs officials stated that a newly established investment management process will be used to enforce architectural compliance; (13) this process, however, does not require that system investments be architecturally compliant or that architectural deviations be justified and documented; and (14) as a result, Customs risks incurring the same problems as other federal agencies that have not effectively defined and enforced an architecture.
Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Therefore, it is important for agencies to safeguard their systems against risks such as loss or theft of resources (such as federal payments and collections), modification or destruction of data, and unauthorized use of computer resources, including their use to launch attacks on other computer systems. Sensitive information, such as taxpayer data, Social Security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for improper or criminal purposes. Critical operations, such as those supporting national defense and emergency services, could be disrupted, or agencies’ missions could be undermined by embarrassing incidents, resulting in diminished confidence in their ability to conduct operations and fulfill their responsibilities. Cyber threats to federal systems and critical infrastructures can be unintentional or intentional, targeted or nontargeted, and can come from a variety of sources. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual specifically attacks a critical infrastructure system. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or other malware is released on the Internet with no specific target. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical information systems, including foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled employees working within an organization.
Table 1 summarizes those groups or individuals that are considered to be key sources of cyber threats to our nation’s information systems and infrastructures. As federal information systems increase their connectivity with other networks and the Internet and as system capabilities continue to increase, federal systems will become increasingly vulnerable. Data from the National Vulnerability Database, the U.S. government repository of standards-based vulnerability management data, showed that, as of March 6, 2008, there were about 29,000 security vulnerabilities or software defects that can be directly used by a hacker to gain access to a system or network. On average, close to 18 new vulnerabilities are added each day. Furthermore, the database revealed that more than 13,000 products contained security vulnerabilities. These vulnerabilities become particularly significant when considering the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. Thus, protecting federal computer systems and the systems that support critical infrastructures has never been more important. FISMA sets forth a comprehensive framework for ensuring the effectiveness of security controls over information resources that support federal operations and assets. FISMA’s framework creates a cycle of risk management activities necessary for an effective security program, and these activities are similar to the principles noted in our study of the risk management activities of leading private sector organizations: assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness.
More specifically, FISMA requires the head of each agency to provide information security protections commensurate with the risk and magnitude of harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems used or operated by the agency or on its behalf. In this regard, FISMA requires that agencies implement information security programs that, among other things, include periodic assessments of risk; risk-based policies and procedures; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations. In addition, agencies must develop and maintain an inventory of major information systems that is updated at least annually and report annually to the Director of OMB and several congressional committees on the adequacy and effectiveness of their information security policies, procedures, and practices and compliance with the requirements of the act. OMB and agency IGs also play key roles under FISMA. Among other responsibilities, OMB is to develop policies, principles, standards, and guidelines on information security and is required to report annually to Congress on agency compliance with the requirements of the act.
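The program elements FISMA requires, enumerated above, lend themselves to a simple completeness check. The sketch below is purely illustrative: the short element labels and the `missing_elements` helper are our own shorthand, not part of the statute or of OMB guidance:

```python
# Illustrative only: FISMA's required security-program elements,
# paraphrased as short labels for a completeness checklist.
FISMA_PROGRAM_ELEMENTS = [
    "periodic risk assessments",
    "risk-based policies and procedures",
    "subordinate security plans",
    "security awareness training",
    "annual testing and evaluation",
    "remedial action process",
    "incident detection, reporting, and response",
    "continuity of operations",
]

def missing_elements(implemented):
    """Return the required elements an agency has not yet implemented."""
    done = set(implemented)
    return [e for e in FISMA_PROGRAM_ELEMENTS if e not in done]

# An agency that has addressed only two elements still has six gaps.
gaps = missing_elements(["periodic risk assessments",
                         "security awareness training"])
assert len(gaps) == 6
assert "continuity of operations" in gaps
```

Tracking the elements this way mirrors how the annual FISMA reporting cycle works in practice: each element is assessed separately, and a program is effective only when none remain unaddressed.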
OMB has provided instructions to federal agencies and their IGs for preparing annual FISMA reports. OMB’s reporting instructions focus on performance metrics related to key control activities such as developing a complete inventory of major information systems, providing security training to personnel, testing and evaluating security controls, testing contingency plans, and certifying and accrediting systems. Its yearly guidance also requires agencies to identify any physical or electronic incidents involving the loss of, or unauthorized access to, personally identifiable information. FISMA also requires agency IGs to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of such programs and practices. Each evaluation is to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency’s information systems and (2) assessing compliance (based on the results of the testing) with FISMA requirements and related information security policies, procedures, standards, and guidelines. These required evaluations are then submitted by each agency to OMB in the form of an OMB-developed template that summarizes the results. In addition to the template submission, OMB encourages agency IGs to provide any additional narrative in an appendix to the report to the extent it provides meaningful insight into the status of the agency’s security or privacy program. Major federal agencies have continued to report steady progress over the past several years in performing information security control activities, although IGs at several agencies identified inconsistencies with reported information. According to OMB and agency FISMA reports, the federal government continued to improve information security performance in fiscal year 2007 relative to key performance metrics established by OMB.
For fiscal year 2007, IGs reported that more agencies had completed approximately 96-100 percent of their inventories and the governmentwide percentage of employees with significant security responsibilities who received specialized training increased. Percentages also increased for systems that had been tested and evaluated at least annually, systems with tested contingency plans, and systems that had been certified and accredited. However, agencies reported a decline in the percentage of employees and contractors who received security awareness training (see fig. 1). In addition, IGs at several agencies sometimes disagreed with the information reported by the agency and have identified weaknesses in the processes used to implement these and other security program activities. In fiscal year 2007, 24 major federal agencies reported a total of 10,285 systems, composed of 8,933 agency and 1,352 contractor systems. Table 2 summarizes the number of agency and contractor systems by system impact level. IGs reported that 19 agencies had completed approximately 96-100 percent of their inventories, an increase from 18 agencies in 2006. However, IGs identified problems with system inventories at several agencies. For example, three agency IGs did not agree with the reported number of agency systems or systems operated by a contractor or another organization on the agency’s behalf, and one IG for a large agency reported that it did not agree with the number of agency-owned systems. Additionally, one agency IG identified discrepancies in the number of system interfaces and interconnections reported, and one IG reported that the agency lacked procedures to ensure that contractor systems are identified. Without complete and accurate inventories, agencies cannot effectively maintain and secure their systems. In addition, the performance measures used to assess agencies’ progress may not accurately reflect the extent to which these security practices have been implemented.
Overall, agencies reported a decline in the percentage of employees and contractors receiving security awareness training. According to agency FISMA reports, 84 percent of total employees and contractors governmentwide received security awareness training in fiscal year 2007, down from 91 percent in 2006. However, 10 agencies reported increasing percentages of employees and contractors receiving security awareness training, and five other agencies continued to report that 100 percent of their employees and contractors received security awareness training. In addition, each agency reported it had explained policies regarding peer-to-peer file sharing in security awareness training, ethics training, or other agencywide training. Governmentwide, agencies reported an increasing percentage of employees with significant security responsibilities who received specialized training. In fiscal year 2007, 90 percent of these employees had received specialized training, compared with 86 percent in fiscal year 2006. Although the majority of agencies reported improvements in both the percentage of employees and contractors receiving security awareness training and the percentage of employees with significant security responsibilities who received specialized training, several did not. For example, nine agencies reported a decrease in the percentage of employees and contractors who received security awareness training. In addition, several IGs reported weaknesses in agencies’ security awareness and training efforts. For example, one IG reported that the agency was unable to ensure that contractors received security awareness training, and another IG reported that, as a result of successful social engineering attempts, the agency’s security awareness program needed to increase employees’ awareness of social engineering techniques and of the importance of protecting their usernames and passwords.
Two agency IGs also noted that weaknesses existed in ensuring that all employees with specialized responsibilities received specialized training. Further, eight agency IGs disagreed with the percentage of individuals that their agency reported as having received security awareness training. Figure 2 shows a comparison between agency and IG reporting of the percentage of employees receiving security awareness training. Failure to provide up-to-date information security awareness training could contribute to the information security problems at agencies. In 2007, federal agencies reported testing and evaluating security controls for 95 percent of their systems, up from 88 percent in 2006. The number of agencies that reported testing and evaluating 90 percent or more of their systems also increased from 16 in 2006 to 23 in 2007. However, IGs reported shortcomings in agency procedures for testing and evaluating security controls at several agencies. For example, 11 IGs reported that their agency did not always ensure that information systems used or operated by a contractor met the requirements of FISMA, OMB policy, NIST guidelines, national security policy, and agency policy. In addition, two IGs reported that agencies did not conduct their annual assessments using current NIST guidance. As a result, these agencies may not have reasonable assurance that controls are implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting the security requirements of the agency. In addition, agencies may not be fully aware of the security control weaknesses in their systems, thereby leaving the agencies’ information and systems vulnerable to attack or compromise. Federal agencies reported that 86 percent of total systems had contingency plans that had been tested, an increase from 77 percent in 2006.
However, as we reported in 2006, high-risk systems continue to have the smallest percentage of tested contingency plans—only 77 percent of high-risk systems had tested contingency plans. In contrast, agencies had tested contingency plans for 90 percent of moderate-risk systems, 85 percent of low-risk systems, and 91 percent of uncategorized systems (see fig. 3). Two IGs reported that systems for their agencies were not tested in accordance with federal government requirements. Without developing and testing contingency plans, agencies have limited assurance that they will be able to recover mission-critical applications, business processes, and information in the event of an unexpected interruption. Federal agencies continue to report an increasing percentage of systems that have been certified and accredited. For fiscal year 2007, 92 percent of agencies’ systems governmentwide were reported as certified and accredited, as compared with 88 percent in 2006. In addition, agencies reported certifying and accrediting 95 percent of their high-risk systems, an increase from 89 percent in 2006. Although agencies reported increases in the overall percentage of systems certified and accredited, IGs reported that several agencies continued to experience shortcomings in the quality of their certification and accreditation process. As figure 4 depicts, five IGs rated their agencies’ certification and accreditation process as poor or failing, including three agencies that reported over 90 percent of their systems as certified and accredited. In addition, IGs at six agencies identified specific weaknesses with key documents in the certification and accreditation process such as risk assessments, testing and evaluation, and security plans not being consistent with NIST guidance or finding those items missing from certification and accreditation packages. 
In other cases where systems were certified and accredited, IGs noted that contingency plans and security controls were not tested annually and security controls were not fully tested and evaluated when significant changes were made to agency systems. Additionally, one agency IG noted that the agency does not follow a formally established and documented process for certification and accreditation. As a result, reported certification and accreditation progress may not accurately reflect the actual status of agencies’ implementation of this requirement. Furthermore, agencies may not have assurance that accredited systems have controls in place that properly protect those systems. Agencies had not always implemented security configuration policies. Twenty-three of the major federal agencies reported that they had an agencywide security configuration policy. Although the IGs agreed that their agency had such a policy, several IGs did not agree with the extent to which their agencies implemented the policies or applied the common security configurations as established by NIST. In addition, only seven agencies reported that they complied with NIST security configuration requirements 96 percent or more of the time. If minimally acceptable configuration requirements are not properly applied to systems, agencies will not have assurance that products are configured adequately to protect those systems, which could increase their vulnerability and make them easier to compromise. As we have previously reported, not all agencies had developed and documented policies and procedures reflecting OMB guidance on protection of personally identifiable information that is either accessed remotely or physically transported outside an agency’s secured physical perimeter. Of the 24 major agencies, 22 had developed policies requiring personally identifiable information to be encrypted on mobile computers and devices.
Fifteen of the agencies had policies to use a “time-out” function for remote access and mobile devices requiring user reauthentication after 30 minutes of inactivity. Fewer agencies (11) had established policies to log computer-readable data extracts for databases holding sensitive information and erase the data within 90 days after extraction. Several agencies indicated that they were researching technical solutions to address these issues. Furthermore, four IGs reported agencies’ progress in implementing OMB guidance as poor or failing, and at least 14 IGs reported weaknesses in agencies’ implementation of OMB guidance related to the protection of PII. Gaps in their policies and procedures reduce agencies’ ability to protect personally identifiable information from improper disclosure. Shortcomings exist in agencies’ security incident reporting procedures. According to OMB, the number of incidents reported by agencies in their annual FISMA reports continued to fluctuate dramatically from the prior year. The majority of IGs reported that these agencies followed documented procedures for identifying and reporting incidents internally, to US-CERT, and to law enforcement. However, five IGs noted that the agency was not following procedures for internal incident reporting, two noted that their agency was not following reporting procedures to US-CERT, and one noted that the agency was not following reporting procedures to law enforcement. Several IGs also noted specific weaknesses in incident procedures such as components not reporting incidents reliably or consistently, components not keeping records of incidents, and incomplete or inaccurate incident reports. Without properly accounting for and analyzing security problems and incidents, agencies risk losing valuable information needed to prevent future exploits and understand the nature and cost of threats directed at the agency. IGs reported weaknesses in their agency’s remediation process.
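The 30-minute inactivity time-out policy described at the start of this passage can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any agency's actual implementation; the `Session` class and its method names are hypothetical:

```python
import time

# OMB guidance cited above: require reauthentication after
# 30 minutes of remote-access inactivity.
SESSION_TIMEOUT_SECONDS = 30 * 60

class Session:
    """Hypothetical remote-access session that tracks user inactivity."""

    def __init__(self, user, now=None):
        self.user = user
        self.last_activity = time.monotonic() if now is None else now

    def touch(self, now=None):
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = time.monotonic() if now is None else now

    def requires_reauthentication(self, now=None):
        """True once the session has been idle for the full time-out."""
        now = time.monotonic() if now is None else now
        return (now - self.last_activity) >= SESSION_TIMEOUT_SECONDS

# A session idle for 29 minutes is still valid; at 31 minutes it is not.
s = Session("analyst", now=0.0)
assert not s.requires_reauthentication(now=29 * 60)
assert s.requires_reauthentication(now=31 * 60)
```

In a real deployment the check would run on every request and on a background timer, and the reauthentication itself would go through the agency's normal identification and authentication controls.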
According to IG assessments, 10 of the 24 major agencies did not “almost always” incorporate information security weaknesses for all systems into their remediation plans. Twelve IGs found that vulnerabilities from reviews were not always included in remedial action plans and 10 IGs found that agencies were not always prioritizing weaknesses to help ensure they are addressed in a timely manner. Without a sound remediation process, agencies cannot be assured that information security weaknesses are efficiently and effectively corrected. Our work and that of IGs show that significant weaknesses continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. In their fiscal year 2007 performance and accountability reports, 20 of 24 major agencies indicated that inadequate information security controls were either a significant deficiency or a material weakness for financial statement reporting (see fig. 5). Our audits continue to identify similar conditions in both financial and non-financial systems, including agencywide weaknesses as well as weaknesses in critical federal systems. Persistent weaknesses appear in five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented.
Figure 6 shows the number of major agencies with weaknesses in these five areas. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data. Access controls, which are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities, can be both electronic and physical. Electronic access controls include the use of passwords, access privileges, encryption, and audit logs. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. Our analysis of IG, agency, and GAO reports showed that agencies did not have adequate controls in place to ensure that only authorized individuals could access or manipulate data on their systems and networks. To illustrate, 23 of 24 major agencies reported weaknesses in such controls. For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access, (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate, (3) establish sufficient boundary protection mechanisms, (4) apply encryption to protect sensitive data on networks and portable devices, and (5) log, audit, and monitor security-relevant events. Agencies also lacked effective controls to restrict physical access to information assets. We previously reported that many of the data losses occurring at federal agencies over the past few years were a result of physical thefts or improper safeguarding of systems, including laptops and other portable devices.
In addition to access controls, other important controls should be in place to protect the confidentiality, integrity, and availability of information. These controls include the policies, procedures, and techniques for ensuring that computer hardware and software are configured in accordance with agency policies and that software patches are installed in a timely manner; appropriately segregating incompatible duties; and establishing plans and procedures to ensure continuity of operations for systems that support the operations and assets of the agency. However, 22 agencies did not always configure network devices and services to prevent unauthorized access and ensure system integrity, or patch key servers and workstations in a timely manner. In addition, 18 agencies did not always segregate incompatible duties to different individuals or groups so that one individual does not control all aspects of a process or transaction. Furthermore, 23 agencies did not always ensure that continuity of operations plans contained all essential information or were sufficiently tested. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of information. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented all the FISMA-required elements for an agencywide information security program. An agencywide security program, required by FISMA, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Our analysis determined that 21 of 24 major federal agencies had weaknesses in their agencywide information security programs. 
Our recent reports illustrate that agencies often did not adequately design or effectively implement policies for elements key to an information security program. We identified weaknesses in information security program activities, such as agencies’ risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. For example:

- One agency’s risk assessment was completed without the benefit of an inventory of all the interconnections between it and other systems. In another case, an agency had assessed and categorized system risk levels and conducted risk assessments, but did not identify many of the vulnerabilities we found and had not subsequently assessed the risks associated with them.
- Agencies had developed and documented information security policies, standards, and guidelines for information security, but did not always provide specific guidance for securing critical systems or implement guidance concerning systems that processed Privacy Act-protected data.
- Security plans were not always up-to-date or complete.
- Agencies did not ensure all information security employees and contractors, including those who have significant information security responsibilities, received sufficient training.
- Agencies had tested and evaluated information security controls, but their testing was not always comprehensive and did not identify many of the vulnerabilities we identified.
- Agencies did not consistently document weaknesses or resources in remedial action plans.

As a result, agencies do not have reasonable assurance that controls are implemented correctly, operating as intended, or producing the desired outcome with respect to meeting the security requirements of the agency, and responsibilities may be unclear, misunderstood, and improperly implemented.
Furthermore, agencies may not be fully aware of the security control weaknesses in their systems, thereby leaving their information and systems vulnerable to attack or compromise. Consequently, federal systems and information are at increased risk of unauthorized access to and disclosure, modification, or destruction of sensitive information, as well as inadvertent or deliberate disruption of system operations and services. In prior reports, we and the IGs have made hundreds of recommendations to agencies to address specific information security control weaknesses and program shortfalls. Until agencies effectively and fully implement agencywide information security programs, including addressing the hundreds of recommendations that we and IGs have made, federal information and information systems will not be adequately safeguarded to prevent their disruption, unauthorized use, disclosure, or modification. The need for effective information security policies and practices is further illustrated by the number of security incidents experienced by federal agencies that put sensitive information at risk. Personally identifiable information about millions of Americans has been lost, stolen, or improperly disclosed, thereby potentially exposing those individuals to loss of privacy, identity theft, and financial crimes. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. These incidents illustrate that a broad array of federal information and critical infrastructures are at risk. 
- The Department of Veterans Affairs (VA) announced that computer equipment containing personally identifiable information on approximately 26.5 million veterans and active duty members of the military was stolen from the home of a VA employee. Until the equipment was recovered, veterans did not know whether their information was likely to be misused. VA sent notices to the affected individuals that explained the breach and offered advice concerning steps to reduce the risk of identity theft. The equipment was eventually recovered, and forensic analysts concluded that it was unlikely that the personal information contained therein was compromised.
- The Transportation Security Administration (TSA) announced a data security incident involving approximately 100,000 archived employment records of individuals employed by the agency from January 2002 until August 2005. An external hard drive containing personnel data, such as Social Security number, date of birth, payroll information, and bank account and routing information, was discovered missing from a controlled area at the TSA Headquarters Office of Human Capital.
- A contractor for the Centers for Medicare and Medicaid Services reported the theft of an employee’s laptop computer from his office. The computer contained personal information, including names, telephone numbers, medical record numbers, and dates of birth of 49,572 Medicare beneficiaries.
- The Census Bureau reported 672 missing laptops, of which 246 contained some degree of personal data. Of the missing laptops containing personal information, almost half (104) were stolen, often from employees’ vehicles, and another 113 were not returned by former employees. The Commerce Department reported that employees had not been held accountable for not returning their laptops.
- The Department of State experienced a breach on its unclassified network, which daily processes about 750,000 e-mails and instant messages from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations. The breach involved an e-mail containing what was thought to be an innocuous attachment. However, the e-mail contained code to exploit vulnerabilities in a well-known application for which no security patch existed. Because the vendor was unable to expedite testing and deploy a new patch, the department developed its own temporary fix to protect systems from being further exploited. In addition, the department sanitized the infected computers and servers, rebuilt them, changed all passwords, installed critical patches, and updated its antivirus software.
- In August 2006, two circulation pumps at Unit 3 of the Tennessee Valley Authority’s Browns Ferry nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device.
- Officials at the Department of Commerce’s Bureau of Industry and Security discovered a security breach in July 2006. In investigating this incident, officials were able to review firewall logs for an 8-month period prior to the initial detection of the incident, but were unable to clearly define the amount of time that perpetrators were inside its computers or to find any evidence showing that data was lost as a result.
- The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as “Slammer” infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant’s process computer failed, and it took about 6 hours for it to become available again.
When incidents occur, agencies are to notify the federal information security incident center—US-CERT. As shown in figure 7, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 3 years, from 3,634 incidents reported in fiscal year 2005 to 13,029 incidents in fiscal year 2007 (about a 259 percent increase). US-CERT categorizes incidents in the following manner:

- Unauthorized access: An individual gains logical or physical access without permission to a federal agency’s network, system, application, data, or other resource.
- Denial of service: An attack that successfully prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack.
- Malicious code: Successful installation of malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software.
- Improper usage: A person violates acceptable computing use policies.
- Scans/probes/attempted access: Any activity that seeks to access or identify a federal agency computer, open ports, protocols, services, or any combination of these for later exploit. This activity does not directly result in a compromise or denial of service.
- Investigation: Unconfirmed incidents that are potentially malicious, or anomalous activity deemed by the reporting entity to warrant further review.

As noted in figure 8, the three most prevalent types of incidents reported to US-CERT in fiscal year 2007 were unauthorized access, improper usage, and investigation.
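The growth figure cited above follows directly from the reported US-CERT counts; a minimal arithmetic sketch:

```python
# Incidents reported by federal agencies to US-CERT (figures cited above).
incidents_fy2005 = 3634
incidents_fy2007 = 13029

# Percentage increase from fiscal year 2005 to fiscal year 2007.
pct_increase = (incidents_fy2007 - incidents_fy2005) / incidents_fy2005 * 100
print(f"{pct_increase:.0f} percent")  # about a 259 percent increase
```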
In prior reports, GAO and IGs have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, and physical security. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. The effective implementation of these recommendations will strengthen the security posture at these agencies. In addition, recognizing the need for common solutions to improving security, OMB and certain federal agencies have continued or launched several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed below.

- Information Systems Security Line of Business: The goal of this initiative is to improve the level of information systems security across government agencies and to reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for IT security awareness training and FISMA reporting.
- Federal Desktop Core Configuration: This initiative directs agencies that have Windows XP deployed and plan to upgrade to Windows Vista operating systems to adopt the security configurations developed by NIST, DOD, and DHS. The goal of this initiative is to improve information security and reduce overall IT operating costs.
- SmartBUY: This program, led by GSA, is to support enterprise-level software management through the aggregate buying of commercial software governmentwide in an effort to achieve cost savings through volume discounts. The SmartBUY initiative was expanded to include commercial off-the-shelf encryption software and to permit all federal agencies to participate in the program. The initiative is also to include licenses for information assurance.
- Trusted Internet Connections initiative: This effort is designed to optimize individual agency network services into a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence, to a target of 50.

In addition to these initiatives, OMB has issued several policy memorandums over the past 2 years to help agencies protect sensitive data. For example, it has sent memorandums to agencies to reemphasize their responsibilities under law and policy to (1) appropriately safeguard sensitive and personally identifiable information, (2) train employees on their responsibilities to protect sensitive information, and (3) report security incidents. In May 2007, OMB issued additional detailed guidelines to agencies on safeguarding against and responding to the breach of personally identifiable information, including developing and implementing a risk-based breach notification policy, reviewing and reducing current holdings of personal information, protecting federal information accessed remotely, and developing and implementing a policy outlining the rules of behavior, as well as identifying consequences and potential corrective actions for failure to follow these rules. Opportunities also exist to enhance policies and practices related to security control testing and evaluation, FISMA reporting, and the independent annual evaluations of agency information security programs required by FISMA.
Clarify requirements for testing and evaluating security controls. Periodic testing and evaluation of information security controls is a critical element for ensuring that controls are properly designed, operating effectively, and achieving control objectives. FISMA requires that agency information security programs include testing and evaluation of the effectiveness of information security policies, procedures, and practices, and that such tests be performed with a frequency depending on risk, but no less than annually. We previously reported that federal agencies had not adequately designed and effectively implemented policies for periodically testing and evaluating information security controls. Agency policies often did not include important elements for performing effective testing, such as how to determine the frequency, depth, and breadth of testing according to risk. In addition, the methods and practices at six test case agencies were not adequate to ensure that assessments were consistent, of similar quality, or repeatable. For example, these agencies did not define the assessment methods to be used when evaluating security controls, did not test controls as prescribed, and did not include previously reported remedial actions or weaknesses in their test plans to ensure that they had been addressed. In addition, our audits of information security controls often identify weaknesses that agency or contractor personnel who tested the controls of the same systems did not identify. Clarifying or strengthening federal policies and requirements for determining the frequency, depth, and breadth of security control testing according to risk could help agencies better assess the effectiveness of the controls protecting the information and systems supporting their programs, operations, and assets. Enhance FISMA reporting requirements.
Periodic reporting of performance measures for FISMA requirements and related analyses provides valuable information on the status and progress of agency efforts to implement effective security management programs. In previous reports, we have recommended that OMB improve FISMA reporting by clarifying reporting instructions and requesting IGs to report on the quality of additional performance metrics. OMB has taken steps to enhance its reporting instructions. For example, OMB added questions regarding incident reporting and assessments of system inventory. However, the current metrics do not measure how effectively agencies are performing various activities. Current performance measures offer limited assurance of the quality of agency processes that implement key security policies, controls, and practices. For example, agencies are required to test and evaluate the effectiveness of the controls over their systems at least once a year and to report on the number of systems undergoing such tests. However, there is no measure of the quality of agencies’ test and evaluation processes. Similarly, OMB’s reporting instructions do not address the quality of other activities such as risk categorization, security awareness training, intrusion detection and prevention, or incident reporting. OMB has recognized the need for assurance of quality for certain agency processes. For example, it specifically requested that IGs evaluate the quality of their agency’s certification and accreditation process. OMB instructed IGs to rate their agency’s certification and accreditation process using the terms “excellent,” “good,” “satisfactory,” “poor,” or “failing.” For fiscal year 2007, OMB requested that IGs identify the aspect(s) of the certification and accreditation process they included or considered in rating the quality of their agency’s process. 
Examples OMB included were security plan, system impact level, system test and evaluation, security control testing, incident handling, security awareness training, and security configurations (including patch management). While this information is helpful and provides insight into the scope of the rating, IGs are not requested to comment on the quality of these items. Providing information on the quality of the security-related processes used to implement key control activities would further enhance the usefulness of the annually reported data for management and oversight purposes. As we have previously reported, OMB’s reporting guidance and performance measures did not include complete reporting on certain key FISMA-related activities. For example, FISMA requires each agency to include policies and procedures in its security program that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In our report on patch management, we stated that maintaining up-to-date patches is key to complying with this requirement. As such, we recommended that OMB address patch management in its FISMA reporting instructions. However, OMB’s current reporting instructions request only that IGs state whether they considered patching as part of their agency’s certification and accreditation rating. As a result, OMB and Congress lack information that could identify governmentwide issues regarding patch management. This information could prove useful in demonstrating whether agencies are taking appropriate steps to protect their systems. Consider conducting FISMA-mandated annual independent evaluations in accordance with audit standards or a common approach and framework. We previously reported that the annual IG FISMA evaluations lacked a common approach and that the scope and methodology of the evaluations varied across agencies.
As in our previous reports, we found that the IGs continue to lack a common methodology, or framework, resulting in disparities in the type of work conducted and in the scope, methodology, and content of the IGs’ annual independent evaluations. To illustrate:

- Of the 24 agency IGs, 7 reported performing audits that were in accordance with generally accepted government auditing standards, and 1 cited compliance with the Quality Standards for Inspections, issued by the President’s Council on Integrity and Efficiency (PCIE). The remaining IGs did not indicate whether their evaluations were performed in accordance with professional standards.
- One IG indicated that the evaluation focused specifically on nonfinancial systems, while others cited work conducted for financial systems as part of their evaluations. In addition, multiple IGs indicated that their reviews were focused on selected components, whereas others did not make any reference to the scope or breadth of their work.
- According to their FISMA reports, certain IGs reported interviewing officials and reviewing agency documentation, such as security plans. In addition, certain IGs also conducted technical vulnerability assessments. In contrast, other IGs did not indicate their methods for evaluating controls.
- The content of the information reported by IGs varied. For example, several IGs provided only a completed OMB template, while others completed the OMB template and provided reports summarizing their evaluations. Content in these reports also differed in that several included comments on whether their agency was in compliance with laws and regulations. Several reports consisted of a summary of relevant information security audits conducted during the fiscal year, while others included additional evaluations that addressed specific FISMA-required elements, such as risk assessments and remedial actions.
Furthermore, some IGs issued recommendations to their agencies to improve the effectiveness of those agencies’ information security programs, while others did not indicate whether or not recommendations were issued. These inconsistencies could hamper the efforts of the collective IG community to perform their evaluations with optimal effectiveness and efficiency. Conducting the evaluations in accordance with generally accepted government auditing standards and/or a robust commonly used framework or methodology could provide improved effectiveness, increased efficiency, quality control, and consistency in assessing whether the agency has an effective information security program. IGs may be able to use the framework and methodology to be more efficient by focusing evaluative procedures on areas of higher risk and by following an integrated approach designed to gather sufficient, competent evidence efficiently. Having a documented methodology may also offer quality control by providing a standardized methodology, which can help the IG community obtain consistency of application. Last year we reported on efforts to develop such a framework. In September 2006, the PCIE developed a tool to assist the IG community with conducting its FISMA evaluations. The framework consists of program and system control areas that map directly to the control areas identified in NIST Special Publication 800-100 and NIST Special Publication 800-53, respectively. According to PCIE members, the framework includes broad recommendations rather than a specific methodology due to the varying levels of resources available to each agency IG. According to PCIE members, this framework is an effort to provide a common approach to completing the required evaluations, and PCIE has encouraged IGs to use it. 
In summary, agencies have reported progress in implementing control activities, but persistent weaknesses in agency information security controls threaten the confidentiality, integrity, and availability of federal information and information systems, as illustrated by the increasing number of reported security incidents. Opportunities exist to improve information security at federal agencies. OMB and certain federal agencies have initiated efforts that are intended to strengthen the protection of federal information and information systems. Opportunities also exist to enhance policies and practices related to security control testing and evaluation of information security performance metrics and independent evaluations. Until such opportunities are seized and fully exploited and the hundreds of GAO and IG recommendations to mitigate information security control deficiencies and implement agencywide information security programs are fully and effectively implemented, federal information and systems will remain at undue and unnecessary risk. Mr. Chairman, this concludes my statement. I would be happy to answer questions at this time. If you have any questions regarding this report, please contact Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or [email protected]. Other key contributors to this report include Nancy DeFranceso (Assistant Director), Larry Crosland, Neil Doherty, Rebecca LaPaze, Stephanie Lee, and Jayne Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
Information security is especially important for federal agencies, where the public's trust is essential and poor information security can have devastating consequences. Since 1997, GAO has identified information security as a governmentwide high-risk issue in each of our biennial reports to Congress. Concerned by reports of significant weaknesses in federal computer systems, Congress passed the Federal Information Security Management Act (FISMA) of 2002, which permanently authorized and strengthened information security program, evaluation, and annual reporting requirements for federal agencies. GAO was asked to testify on the current state of federal information security and compliance with FISMA. This testimony summarizes (1) the status of agency performance of information security control activities as reported by major agencies and their inspectors general (IG), (2) the effectiveness of information security at federal agencies, and (3) opportunities to improve federal information security. In preparing for this testimony, GAO analyzed agency, IG, Office of Management and Budget (OMB), and GAO reports on information security and reviewed OMB FISMA reporting instructions, information technology security guidance, and information on reported security incidents. Over the past several years, 24 major federal agencies have consistently reported progress in performing information security control activities in their annual FISMA reports. For fiscal year 2007, the federal government continued to report improved information security performance relative to key performance metrics established by OMB. For example, an increasing percentage of systems governmentwide had been tested and evaluated, had tested contingency plans, and had been certified and accredited. However, IGs at several agencies sometimes disagreed with the agency-reported information and identified weaknesses in the processes used to implement these and other security program activities.
Despite agency-reported progress, major federal agencies continue to experience significant information security control deficiencies that limit the effectiveness of their efforts to protect the confidentiality, integrity, and availability of their information and information systems. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. In addition, agencies did not always effectively manage the configuration of network devices to prevent unauthorized access and ensure system integrity, patch key servers and workstations in a timely manner, assign duties to different individuals or groups so that one individual did not control all aspects of a process or transaction, and maintain complete continuity of operations plans for key information systems. An underlying cause for these weaknesses is that agencies have not fully or effectively implemented agencywide information security programs. As a result, federal systems and information are at increased risk of unauthorized access to and disclosure, modification, or destruction of sensitive information, as well as inadvertent or deliberate disruption of system operations and services. Such risks are illustrated, in part, by an increasing number of security incidents experienced by federal agencies. Nevertheless, opportunities exist to bolster federal information security. Federal agencies could implement the hundreds of recommendations made by GAO and IGs to resolve prior significant control deficiencies and information security program shortfalls. In addition, OMB and other federal agencies have initiated several governmentwide initiatives that are intended to improve security over federal systems and information.
For example, OMB has established an information systems security line of business to share common processes and functions for managing information systems security and directed agencies to adopt the security configurations developed by the National Institute of Standards and Technology and Departments of Defense and Homeland Security for certain Windows operating systems. Opportunities also exist to enhance policies and practices related to security control testing and evaluation, FISMA reporting, and the independent annual evaluations of agency information security programs required by FISMA.
Widespread looting—including looting of radiological sources—became a major problem in Iraq after the March 2003 coalition forces invasion, complicating U.S. efforts to secure and collect radiological sources. Media reports of the looting at Iraq’s Tuwaitha Nuclear Research Center, for example, brought public attention to the scattering of radioactive materials throughout populated areas, posing health and safety risks to Iraqis. In May 2003, the IAEA, which had inventoried nuclear and radiological materials at Tuwaitha, raised concerns about Iraqi citizens’ exposure to radiation and publicly asked the United States to secure these materials. Given the extensive looting, DOD could not assume that facilities and items within them, including radiological sources, would remain intact or in place for later collection without being secured. Many facilities that were no longer under the control of Iraqis, such as abandoned government research facilities and industrial complexes, were looted. For example, a 2004 government report on the search for WMD stated that looters often destroyed sites after a coalition military unit moved through an area, since the coalition did not have the forces available to secure the various sites thought to be associated with WMD. According to one DTRA official, the looting was more extensive than he had ever seen before. The looting was reported to have included removing wiring and pipes from walls and from the ground; stealing desks, windows, sinks, and floors; and even dismantling and removing whole buildings. While some looting may have been done to thwart the U.S. mission, according to DTRA officials, most of it seemed to be related to selling or reusing common materials such as scrap metal rather than seeking radiological or nuclear materials. At the Tuwaitha facility, for example, looters dumped partially processed uranium ore from large containers onto the floor and took the containers. 
DOD found that fully securing sources from looters was challenging because of the looters' persistence. According to a DTRA official’s personal assessment, no amount of forces could have controlled the rampant looting. At the Tuwaitha Nuclear Research Center, DOD concentrated security in those areas where radiological and nuclear materials were stored, but looters continued to penetrate the less secure areas of Tuwaitha, a large complex of over 90 buildings. The scattering of radiological sources by looters complicated the later collection of those sources. In one dramatic instance, looters stole large cobalt sources from an Iraqi radiological test site in early September 2003, when U.S. troops were guarding the site. The large, open site, which was apparently designed for carrying out radiation exposure experiments in the surrounding areas, contained eight metal pillars, each with a pulley system to raise a cobalt source from a concrete storage pit to the pillar’s top. Looters tore down and removed three of these pillars and also took the cobalt sources from two of them. (See fig. 1.) After several days of extensive searches in the area, DTRA recovered both stolen sources. According to a DTRA official, the metal pillars were probably the looters’ intended target, and the sources may have been taken unintentionally when they became caught in the pulley mechanisms. For about the first 6 months after the war began in March 2003, military commanders had insufficient guidance and equipment appropriate for collecting and securing radiological sources that they discovered. As a result, they were forced to make ad hoc decisions about recovering and securing these sources. During this time, DTRA—the agency DOD had assigned to the WMD elimination mission 12 days before the war began—was working to fill gaps in preparations for the mission to collect and secure radiological sources.
It was not until September 2003 that DTRA finalized the terms of the contract for collecting the radiological sources and collections began throughout Iraq. Military commanders in Iraq initially had no policy guidance on which radiological sources to collect, and what to do with them once they were collected. DOD did have some specialized teams with radiological expertise, such as the 11-person Nuclear Disablement Team, which had been set up to disable WMD and associated production facilities in Iraq. This team had the expertise to move radiological sources, including packaging radioactive material and designing safety procedures to minimize radiation exposure. However, military commanders lacked sufficient equipment appropriate for safely collecting and moving radiological sources. Without adequate official guidance and equipment to handle the radiological sources they encountered in Iraq, military commanders were left to make ad hoc decisions about recovering and securing the sources. They acted because they were concerned about the inherent health and safety risks of radiological sources to coalition soldiers and the Iraqi populace, as well as the potential for enemy or terrorist forces to use the sources to construct dirty bombs. For example, lacking the proper radiation shielding equipment, the Nuclear Disablement Team moved a radiological source to Tuwaitha with improvised shielding because an officer judged that the unshielded source posed the risk of radiation exposure to Iraqis working in the vicinity. The team created what was described as “field expedient” packaging by lining an ice chest with lead bricks that were brought from the Tuwaitha Nuclear Research Center. However, the container did not sufficiently shield the driver of the military vehicle carrying the source from radiation exposure. Therefore, the team further improvised shielding by placing metal sheets salvaged at the site between the driver and the container in the back of the vehicle. 
This additional shielding reduced the radiation at the driver’s seat to a level that just met the team’s safety standard for exposure. However, the radiation in the back of the vehicle still exceeded that standard. Consequently, a second military vehicle followed the loaded vehicle at a safe distance to prevent occupants of any other vehicles from following so closely that they would be exposed to unsafe levels of radiation. On the basis of his assessment of the team’s experience with moving the source described above, the commander of the Nuclear Disablement Team decided it was too risky to allow his troops to move any more sources without proper handling equipment and containers. Because some military officers were reluctant to move radiological sources to a single consolidation site without adequate handling and packaging equipment or official guidance, coalition forces had their troops guarding sources around Iraq. In some cases this posed health risks—for example, some sources were secured in bases where U.S. troops were already stationed, creating the need to protect the troops from accidental exposure to radiation. When sources were secured outside controlled areas, however, security risks resulted. For example, according to a DTRA official, field commanders complained to him after he arrived in July 2003 that protecting radiological sources in some field locations exposed their troops to increased risks of attacks. Estimates of how many soldiers were removed from their military duties to guard sources were not available, but we were told of instances in which troops were left guarding sources for several months. According to a DOE expert involved in DTRA’s later collections, for example, a small group of troops had guarded sources at an oil drilling operation from May until early September 2003. 
Between March and September 2003, as individual military commanders acted independently to collect or secure radiological sources when they discovered them, DTRA was working to fill gaps in preparations for the mission to collect and secure radiological sources. According to DTRA officials, their focus shifted only gradually to radiological sources as the initial emphasis on eliminating WMD diminished because stockpiles of chemical, biological, and nuclear weapons were not found. First, DTRA tried to establish much-needed guidance on which radiological sources to collect and where to consolidate them. According to a DTRA official, these and other issues had been discussed in prewar planning in late 2002, but guidance had not been issued. In July 2003, the DOD Office of Policy issued guidance on collecting and securing radiological sources for field commanders, which a DTRA official told us was all the policy guidance that DTRA needed. However, DTRA still needed to specify standards for health and safety as well as for transportation for its collection missions. According to the DTRA commander who set up collection operations in Iraq, DTRA used U.S. standards to ensure safety, but these standards were modified for the Iraq situation. For example, instead of using radioactive cargo placards on vehicles, which would be required by U.S. standards but might attract an insurgent attack, DTRA notified local military commanders along the route of its cargo when moving sources. In addition, DTRA engaged in extensive, and ultimately unsuccessful, coordination within DOD to provide protection for its contractor at the Tuwaitha storage site through a contracted security force, but eventually obtained protection for its collection mission through coalition forces headquarters.
This security force stood by for deployment to Iraq while the Department of Defense General Counsel, DOD’s Central Command, and coalition military headquarters considered DTRA’s request to arm this force. When this request was denied, DTRA decided in late 2003 that sufficient protection could be provided by military forces. For each collection mission, DTRA coordinated protection through the coalition forces headquarters, and could draw upon a military police platoon for a security escort. Also, starting in March 2003, DTRA worked to coordinate arrangements with DOE for its assistance with collecting radiological sources. DOE was to send both technical experts from one of its national laboratories and shipping containers to Iraq for the collection effort. However, the arrangements were complicated by DOE’s concerns about potential disposal of collected sources at its U.S. facilities and about the safety of DOE experts working in Iraq, as well as by communication difficulties. DOE had concerns about potential lawsuits arising from disposing of sources at its U.S. facilities. A DOE official told us that mislabeled or improperly packaged containers could lead to lawsuits if, for example, a source in a container was mislabeled and turned out to be a source that DOE’s U.S. site was not licensed to possess, or if poor packaging led to radiation leakage in the United States. Consequently, DOE insisted that its technical experts be present when the sources were collected to identify and package them in Iraq, before they were transported to DOE’s U.S. facilities, and DTRA agreed. When collections began, however, the danger of packaging sources in a hostile environment led DTRA to instead use temporary packaging in the field, followed by interim packaging at the Tuwaitha facility. The final packaging of the sources did not occur until May 2004 when DOE experts packaged them for shipment to the United States. 
DOE also had concerns about the safety of its experts while overseeing the packaging of the sources in Iraq. Consequently, DOE proposed a contract provision that required DTRA to make every reasonable effort to evacuate DOE experts to a safe area if hostilities broke out. DTRA initially said it could not accept this contract provision because it did not control the troops who could provide such protection. Eventually the contract said that the DOE experts would not be exposed to unreasonable risks, but, according to a DOE official, the discussion about a military protection clause held up the contract for a couple of weeks. Unclear communications also affected the negotiations between DTRA and DOE. For example, according to a DOE official, at one meeting DTRA told DOE that DTRA either had shipping containers or could get them. But a few weeks later, DTRA asked DOE to provide the containers. Then communication about the number of containers needed became an issue because DTRA could not know the number or type of radiological sources that would need to be transported. Finally, the DOE expert preparing a contract proposal had difficulty defining the scope of services to be provided to DTRA because DTRA’s plan was not clear to him. For example, he was not initially aware that the DOE experts would have only an oversight role and that DTRA was planning to use a contractor to do the collection work. In addition, between March and September 2003, DTRA was also negotiating with its contractor to collect sources. This process was delayed in large part by the contractor’s refusal to begin work until it obtained protection from legal claims for damages that could result from their work—that is, until they were given indemnification. 
Resolving this legal indemnification issue was delayed, in part, because DTRA contracting officials, who were uncertain about the infrequently used procedures for granting indemnification for work done under potentially hostile conditions, asked the contractor to provide what turned out to be unnecessary detail on the various damage scenarios that indemnification would cover. For example, one concern was that a convoy truck loaded with radiological sources would be fired upon, resulting in the radiological contamination of the area. In the end, DTRA decided that the indemnification language would be general and provided the contractor with indemnification in September 2003. Getting DOE experts working in Iraq was also delayed by indemnification issues, but their indemnification was settled earlier. The contractor’s acquisition of equipment, such as helmets and body armor, was also delayed, although not as long as the indemnification. The State Department approves the export of such U.S.-origin defense products to other countries under the International Traffic in Arms Regulations; approval took over 50 days in the case of one request by the DTRA contractor. According to a State official, this delay occurred despite procedures to expedite approval of export applications for Operation Iraqi Freedom because this particular approval required congressional notification, a requirement State could not meet until Congress returned to session. As a result of these delays, according to a DTRA official, DTRA’s contractor wore helmets obtained from other countries because they could be acquired sooner. In addition, the contractor, which was responsible for obtaining all needed equipment for the collection mission, initially lacked some equipment.
According to a DTRA official, in one instance, the contractor did not allow its workers to perform a mission because of concerns that heat at the work site exceeded safety standards even though the contractor lacked the monitoring equipment to make that determination. According to the contractor’s project manager, some necessary items were forgotten because the contractor team, which was being created for the first time, did not have an established standard equipment list for this mission. Finally, DTRA’s efforts to subcontract with Iraqis to help with collections also took time. In July 2003, because of security concerns, DOD’s Office of Policy stopped Iraqis from the former Iraqi Atomic Energy Commission from independently collecting sources and rescinded their access to the secured bunker at Tuwaitha. By October 2003, DOD had decided to authorize, and encourage the use of, experienced Iraqis to locate sources, leave them secured in place when possible, and move unsecured sources to Tuwaitha, but this was an unsuccessful strategy for quickly increasing collection efforts. According to a DTRA official, DTRA tried unsuccessfully to get Iraq’s Coalition Provisional Authority to fund Iraqis from the Ministry of Science and Technology to collect sources, but restrictions on the Coalition Provisional Authority’s funds did not allow this. Eventually, DTRA arranged for its contractor that was collecting sources to subcontract some tasks to these Iraqis, but it took time to work out hiring, training, and procedures. For example, DTRA told us that subcontracting with the Iraqis was challenging because of difficulties with establishing banking procedures to ensure they got paid. By the time procedures were developed, training was finished, and the Iraqis began collection missions, it was February 2004, and DTRA’s collection mission was in its final months. 
Between September 2003 and May 2004, DTRA collected and secured about 1,400 radiological sources from sites throughout Iraq and left in place another 700 that it deemed secure. To further secure the most dangerous sources it had collected, in June 2004, DTRA and DOE together removed about 1,000 of the 1,400 previously collected sources from Iraq. Despite DTRA’s efforts, however, the total number of radiological sources in Iraq remains unknown. During approximately 140 collection missions conducted between September 2003 and May 2004, DTRA and its contractor collected about 1,400 unsecured radiological sources and inventoried and left in place about 700 sources that DTRA deemed secure. To collect the 1,400 sources, DTRA identified their locations, traveled to those locations and found the sources, determined which sources to remove, transported those selected for removal to Tuwaitha, and secured them in a bunker there. According to DTRA officials, the collection missions were conducted safely, despite increasing insurgent hostilities and exposure risks associated with handling radioactive material. About 450 of the 1,400 sources ultimately collected were removed from radioactive lightning arrestors. Unlike conventional lightning arrestors, radioactive ones use radiological sources to enhance the attraction of lightning. One or more sources sat in a metal cylinder at the top of each of the metal arrestor poles. Iraq had located these arrestors around its munitions dumps, military bases, and industrial complexes to protect them from lightning strikes. If these facilities were abandoned, the lightning arrestors—including the radiological sources—would have been easily accessible to looters. Coalition forces also found sources used in commercial activities, such as oil exploration, agriculture, and scientific research. The uses of many other unsecured sources DTRA collected were unknown. 
As figure 2 shows, DTRA collected unsecured radiological sources from locations across Iraq, from the north at the Turkish border to the south near Al Basrah. However, many of the sources were collected at the Tuwaitha Nuclear Research Center, located about 25 miles from DTRA’s base camp near Baghdad International Airport. Upon arriving at a location, DTRA and its contractor sometimes found that the radiological sources were not where they had expected. For example, on one mission, a radiological source from a lightning arrestor was found outside its metal cylinder under about 2 inches of debris. A DTRA official told us that looters apparently valued the metal lightning arrestor poles and copper wire inside them more than the radiological sources. At other times, DTRA and its contractor did not find the expected sources at all, which the contractor’s mission reports sometimes attributed to faulty intelligence or looting. If the radiological sources DTRA found were at an abandoned site or otherwise not under legitimate control of the Iraqis, DTRA collected them. For example, DTRA collected two large cesium sources from a factory that was largely abandoned. Similarly, if a lightning arrestor was damaged and the radiological source potentially subject to looting, DTRA would collect the source, according to a DTRA commander. After collecting and packaging the radiological sources, DTRA secured them by transporting them to a protected bunker at Tuwaitha. According to DTRA officials, DTRA had found a bunker at Tuwaitha that had blast-proof doors. DTRA further improved the bunker’s security, investing over $1 million in improvements such as a chain link fence, gate, and security system. In addition, DTRA placed an armored unit outside the bunker to guard it. Figure 3 shows the protected bunker, under a mound of earth at the Tuwaitha Nuclear Research Center.
In addition to the approximately 1,400 radiological sources DTRA collected during its mission, DTRA left about 700 sources or source devices in place after it determined that they were properly secured and in the custody of responsible personnel. According to DOD’s guidance, coalition forces and DTRA could leave sources in place if they had medical, agricultural, industrial, or other peaceful uses; were properly contained and adequately secured; and were in the custody of trained personnel acting in a professional capacity, such as hospital staff or agricultural ministry personnel. DTRA relied on this guidance to determine whether radiological sources it found could be left in place. In line with the guidance, when DTRA left sources in place, it recorded information such as location, use, and responsible institution or individual. Although the guidance did not elaborate on the standard for adequate security, a DTRA commander told us that the guidance was sufficient for DTRA to decide which sources were secure enough to be left in place. DTRA’s initial planning had assumed that the war would be over when its contractor went to work and, therefore, it would be collecting sources in a peaceful environment. Instead, with insurgent attacks continuing after major combat operations were declared over, the contractor’s staff was consistently exposed to danger. In fact, insurgent attacks throughout Iraq significantly increased during the collection period and generally became more sophisticated, widespread, and effective (see fig. 4). Although some areas were known as particularly dangerous for travel, attacks were unpredictable and occurred in many places. For example, according to a DTRA commander, during the first day of a mission in the Sunni triangle, the DTRA team came under mortar and sniper attack; during the second day, a helicopter involved in the mission experienced a rocket-propelled grenade attack.
On another occasion, a DTRA convoy traveling through Baghdad was delayed by an explosion that left a burning vehicle in the road. Even within the relative security of the Tuwaitha Nuclear Research Center, DTRA’s contractor reported hearing shots fired and found an improvised bomb on the road. To help decrease the danger, DTRA planned armed security for each of its missions. DTRA officers told us they assessed the potential danger associated with a particular mission and, if the anticipated security risk was higher than usual, they increased the size of the security force. For example, the number of vehicles with mounted weapons might be increased from two to four. When the risks seemed particularly high, missions were at times postponed. DTRA’s security plan also specified the route of the convoy, so its location could be tracked with a communication system and a quick-response military team could be sent if needed. In addition, military troops sometimes secured the area around the source before the arrival of DTRA’s contractor staff. Despite the attacks and the risk of exposure to radiation when collecting radiological sources, DTRA officials reported that the agency’s missions to collect and secure radiological sources from September 2003 to May 2004 were conducted safely. According to DTRA officials, although the risks from hostilities were often greater than the risks from handling the radiological sources, DTRA’s team did not sustain casualties during its collection missions. However, two contractor staff were injured—one seriously—in a mortar attack at DTRA’s home base near Baghdad International Airport, but not during a collection mission. With regard to radiation exposure, the contractor’s plan called for keeping the effect of individual exposures on a person as low as reasonably achievable and cumulative exposures over the mission below specified limits. 
Although six team members’ hands or feet were contaminated with radioactive powder in one instance, according to DTRA and contractor officials, DTRA personnel and contractor staff remained under the cumulative standard throughout the overall mission. In March 2004, a National Security Council interagency policy committee that included DOD and DOE made the final decision to remove the most dangerous radiological sources from Iraq before the Coalition Provisional Authority handed power over to the interim Iraqi government at the end of June 2004. In the case of Iraq, DOE selected radiological sources for removal based on its criteria for determining which radioactive material posed a significant risk as dirty bomb material. Normally, DOE applies its criteria to individual sources in determining the risk. In this case, DOE applied the criteria to sources that individually would not have met them but did meet them once consolidated into waste shipment containers. According to a DOE official, using the criteria this way was warranted because the consolidation of the sources in the storage bunker created a potential public health risk or a target for theft, and Iraq had ongoing hostilities. As a result of applying its criteria in this way, DOE removed from Iraq about 1,000 of the 1,400 collected sources, accounting for a total of almost 2,000 curies, or over 99 percent of the radioactivity of the collected sources. The remaining radiological sources were generally small, accounting for a few curies of radioactivity in total. After the National Security Council approved the removal mission in March 2004, final preparations for the mission were completed in about 2-1/2 months and the mission was finished in about 1 month. In late May 2004, DOE sent a team of 20 experts to Iraq to identify the type and radioactive strength of each collected source and package the sources for shipment to the United States.
Given the escalating hostilities, DTRA hired a contractor to create a protected living area for the DOE team at the Tuwaitha site to reduce the exposure to attacks that would have resulted from traveling daily from a base camp to work at Tuwaitha. Figure 5 shows this living area and the concrete barriers placed at the perimeter. DOE had difficulties coordinating with DTRA to get all the information needed to determine the number and types of shipping containers for the source recovery mission. DTRA constructed its inventory information on radiological sources collected at the Tuwaitha bunker to try to meet DOE’s needs. However, DOE experts told us DTRA’s information never fully met DOE’s expectations. Specifically, DOE wanted comprehensive information on the type of isotope and radioactivity of the sources to determine the number and types of containers needed to safely ship the sources to the United States, as well as to do other planning tasks, such as an environmental impact assessment. According to DOE experts, DTRA could never provide, for example, complete and accurate information on radioactivity. Deciding that full information would not be forthcoming, the DOE experts overestimated radioactivity to ensure that DOE would bring enough containers from the United States to ship the radiological sources back safely. Ultimately, DTRA and DOE were able to complete the task of analyzing, packaging, and loading the containers into trucks in about 25 days. DTRA and DOE successfully removed about 1,000 radiological sources and about 1.7 metric tons of low-enriched uranium from Iraq on June 23, 2004, 5 days before the transfer of power from the Coalition Provisional Authority to the interim Iraqi government. DTRA and DOE transported the sources in a heavily guarded convoy to a military airfield, and then departed from Iraq by military air transport. 
These materials were taken to a DOE site within the United States and are being evaluated for either reuse or permanent disposal. The disposal activities, funded by both DTRA and DOE at an estimated $4.2 million, are expected to continue through late fiscal year 2006. According to DOE officials, the final disposition of the radiological materials removed from Iraq may take longer and cost more than estimated because a legal determination is needed regarding whether the United States government owns the material or is merely serving as its custodian. Currently, DOE is storing the sources temporarily at one of its sites, but it is waiting for an interagency determination before deciding on how to dispose of the material. According to DOE officials, they raised this issue of ownership when the removal mission was being planned, but it was never resolved. As of mid-April 2005, DOE was prepared to start shipping sources to disposal facilities, but DOE disposal facilities are unwilling to take possession of the sources until ownership has been determined. Thus, DOE will hold the sources in temporary storage longer than anticipated, leading to increased storage costs. Although DTRA’s effort to collect unsecured sources and leave secured sources in place identified about 2,100 radiological sources in Iraq, it is likely that other sources remain unsecured in Iraq for three reasons. First, the number and location of all sources in Iraq before the war were not known. Second, DOD did not search in all places in Iraq where sources might be found. Third, since the end of DTRA’s mission in June 2004, other unsecured sources have been found, including at Iraq’s borders. The number of sources in Iraq prior to Operation Iraqi Freedom was not precisely known because the former government of Iraq did not maintain an inventory of radiological sources around the country. 
Around the time that major combat operations were declared over in May 2003, DOD received information on radiological sources in Iraq, but DOD and State officials told us that this information was not reliable for the purpose of locating and securing sources. For instance, DTRA officials told us that the information on sources and their locations was not precise because the names of locations were not clear, some sources were reported twice at the same location, and the information was sometimes outdated. However, DTRA used this information as a general guide to where sources might be found. Lacking more reliable information about the number and location of sources in Iraq at the beginning of the war, DTRA first collected sources discovered by coalition forces and then searched for other sources. Because DOD and DTRA did not search all locations where radiological sources might be found, it is likely that unknown sources remain unsecured in Iraq. One DTRA official told us that DTRA was not tasked to search all locations where sources might be found. In addition, DTRA found evidence that sources had been taken from some locations before DTRA arrived. According to State officials, neighboring countries detected elevated radiation readings from cargo on trucks leaving Iraq beginning at least as early as September 2003, and some of these trucks were turned back at the border. Although many of these incidents involved radioactively contaminated scrap metal, some cargo included sources. State officials said they did not know where the trucks and their cargo went after returning to Iraq, but the State Department sought to improve coordination with neighboring countries to manage these border incidents. Because of the lack of a complete search for sources in Iraq, officials of the interim Iraqi government told us that the government intended to perform a more comprehensive search.
Finally, sources continued to be found in Iraq and at its border after DTRA completed its collection and removal mission in June 2004. In addition, according to State officials, radioactive materials, primarily contaminated scrap metal but also some sources, continued to be detected on trucks leaving Iraq after that time. In August and September 2004, for example, a country bordering Iraq found radioactive sources on trucks leaving Iraq. Also, a U.S. Army officer responsible for nuclear, biological, chemical, and radiological issues in Iraq told us that, in at least one case, an unsecured source or sources from lightning arrestors had been discovered by U.S. troops since the end of DTRA’s mission in Iraq. The Department of State supported the Coalition Provisional Authority in creating an independent Iraqi agency, the Iraqi Radiological Source Regulatory Authority (IRSRA), to regulate sources, and State and DOE are assisting the new agency by providing equipment, technical assistance, and funding. However, the evolving Iraqi government (including the transitional government formed after the January 2005 election and the permanent government to be formed through an upcoming election) and the ongoing insurgency are creating uncertainties for both IRSRA and U.S. assistance. Before the transition to the interim Iraqi government in June 2004, State’s Bureau of Nonproliferation encouraged the creation of IRSRA. It saw this effort as an extension of U.S. support for international standards for the safe and secure management of radiological sources, such as those coordinated and administered by IAEA. Specifically, IRSRA will further several U.S. foreign policy goals. First, an Iraqi agency that controls radiological materials will promote the health and safety of Iraqis, as well as provide the capability for Iraq to meet international commitments for the safe and secure management of radiological sources.
Second, an effective Iraqi agency for regulating sources will promote U.S. national security goals by decreasing the likelihood of terrorists trafficking in or deliberately releasing radioactive material. Third, the new agency will employ former Iraqi scientists who might otherwise seek employment with terrorists or countries seeking WMD expertise. State officials enlisted Iraqi officials within the Coalition Provisional Authority to support the formation of IRSRA. In particular, State negotiated with the Minister of Science and Technology, who played a leading part in supporting the creation of IRSRA. The Minister agreed to allow IRSRA to regulate Iraq’s radiological sources, while the Ministry of Science and Technology (MOST) would retain ownership and control of secured nuclear and radiological materials at research facilities. The Minister also agreed that MOST would continue DTRA’s efforts to find and collect unsecured radioactive sources, but under contract with IRSRA. The Minister further agreed that IRSRA would be legally and financially independent, a key element in State’s plan for IRSRA. According to State officials, IRSRA was designed as an independent agency to avoid conflicts of interest. While Iraqi ministries, such as the Ministry of Health, the Ministry of Oil, and MOST, own or track many of the radiological sources in Iraq, their activities will be subject to the regulation of IRSRA, which will inspect, inventory, and regulate all sources in Iraq. In addition, through discussions with Iraqi and Coalition Provisional Authority officials, State helped draft the 2004 budget plan and the organizational structure of IRSRA. The plan included providing $7.5 million to the new agency within the Iraqi Government Budget developed by the Coalition Provisional Authority for fiscal year 2004. These funds are to be spent on salaries, the search for sources, assistance from U.S. experts, office space, and facility security.
State’s organizational plans for IRSRA identified the departments and staffing needed to accomplish agency tasks, such as regulating radiological sources in use, managing unwanted radiological sources, and creating regulations in cooperation with IAEA and other experts. In addition, to further State’s efforts, DTRA trained Iraqis to collect, store, and secure radiological sources during its own collection operations and subsequently provided Iraqis with an upgraded secure storage facility and its inventories of sources removed from the country, left at the facility, or identified around Iraq. In June 2004, the Coalition Provisional Authority issued an order establishing IRSRA. According to the order, IRSRA will promulgate and enforce regulations to allow for beneficial uses of radioactive sources, provide for adequate protection of humans against the harmful effects of radiation, and ensure the safety and security of radiological sources. For example, it will require hospitals, universities, oil production facilities, and others to obtain licenses to possess radiological sources, which will enable the agency to maintain records on radiological sources in the country. Licensees will be obliged to follow procedures and regulations that define how they will secure, inventory, and work with their licensed radiological sources. In addition, IRSRA is responsible for collecting unsecured sources when they are found, creating radiation health and safety criteria, and researching the possibility of constructing a low-level radioactive waste disposal facility in Iraq. The Coalition Provisional Authority disbanded shortly after it created IRSRA, but its order will continue to have legal authority in Iraq until it is amended or changed by the Iraqi government, according to State officials. By the summer of 2005, State officials told us, they perceived signs that IRSRA was beginning to function and was becoming more established as part of the Iraqi government. 
For example, IRSRA had started drafting regulations and was requiring ministries to notify it about their radiological sources. Moreover, it had an appointed chairman and a staff of about 50, had developed a budget, and had obtained its own building and office space. In addition, State and DOE are assisting IRSRA by providing equipment, facilitating technical assistance, and providing funding. First, to help the Iraqis collect unsecured sources under the direction of IRSRA, State has initiated an effort to transfer to Iraqi agencies equipment that had been purchased by DTRA to collect sources. This equipment includes radiological handling, measurement, and protective equipment, such as radiation meters, respirators, and protective clothing. According to State officials, preparations for the transfer of this equipment began in mid-2004; as of early 2005, State and DOD were discussing how this equipment would be transferred to the Iraqis. In the meantime, this equipment has been made available to MOST for collecting radiological materials. State is also facilitating technical assistance. With funding and logistical support from DOE, State coordinated several meetings in Amman, Jordan, in December 2004 to provide IRSRA personnel with training by IAEA staff and to help them draft an action plan for regulating radiological sources. IRSRA’s action plan is based on the IAEA Model Project program, through which IAEA is helping about 100 developing countries establish effective regulatory controls for radioactive sources. Under the Model Project program, developing countries adopt action plans to help them establish or strengthen radiation protection infrastructures in order to meet international standards and to follow the guidance in the IAEA Code of Conduct on the Safety and Security of Radioactive Sources.
Under the action plan, which was finalized at meetings in Washington, D.C., in March 2005, IRSRA will establish a regulatory framework; work to control radiation exposure in occupational, medical, and public settings; and set up emergency preparedness and response capabilities. IAEA plans to provide expert assistance to help IRSRA meet these goals. In addition, to help IRSRA find unsecured sources, IAEA will offer radiation detection equipment and training in border control. To complement the action plan, IAEA is sharing with IRSRA a computer program designed to track information about radiological sources’ locations, radioactive strengths, licensing, and responsible parties. IRSRA intends to use this program to manage information it gathers on Iraqi radiological sources. In addition, in coordination with IRSRA’s action plan, DOE is offering IRSRA technical assistance to help ensure the security of radiological sources. For example, DOE plans to provide experts to review draft Iraqi laws and regulations for their relevance to security. DOE also plans to assist IRSRA with facility upgrades to address security vulnerabilities of sources used for medical, industrial, or other peaceful purposes. Moreover, in conjunction with IAEA, DOE may also offer field equipment and training workshops for inspecting the security of sources. Finally, to financially support IRSRA’s action plan, State intends to use $1.25 million from its Nonproliferation and Disarmament Fund, which provides funding for projects to prevent the spread of WMD. State plans to provide part of these funds to IAEA for training and other assistance to IRSRA, including an IAEA review of Iraq’s draft laws and regulations. State also plans to use the funds to purchase a specially equipped vehicle that can be driven through neighborhoods to detect unsecured radiological sources.
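The tracking program that IAEA is sharing with IRSRA is not described in detail in this report. Purely as a hypothetical sketch of the kind of record such a program manages, covering the four attributes named above (location, radioactive strength, licensing, and responsible party), a minimal registry might look like the following; all field names and sample values are invented for illustration and do not reflect the actual IAEA software.

```python
# Hypothetical sketch of a registry record for a radiological source.
# The field names and sample data are invented; they illustrate only
# the attributes a source-tracking program would manage.
from dataclasses import dataclass

@dataclass
class SourceRecord:
    source_id: str          # registry identifier
    isotope: str            # e.g., "Cs-137"
    activity_gbq: float     # radioactive strength, in gigabecquerels
    location: str           # facility where the source is held
    license_no: str         # license under which it is possessed
    responsible_party: str  # organization accountable for the source

# A registry is then simply a lookup from identifier to record.
registry = {}

def register(record):
    """Add a record, rejecting duplicate identifiers."""
    if record.source_id in registry:
        raise ValueError(f"duplicate source id: {record.source_id}")
    registry[record.source_id] = record

register(SourceRecord("IQ-0001", "Cs-137", 3.7, "Baghdad hospital",
                      "LIC-042", "Ministry of Health"))
print(len(registry))  # 1
```

A real tracking system would add validation, custody history, and persistent storage; the point here is only that each source is held under a unique identifier together with its licensing and custody data.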
In addition, State plans to hire a contractor to coordinate security matters with coalition forces to minimize the risk of attacks while the Iraqis are working to control sources. According to State officials, because of uncertainties associated with the continuing formation of the Iraqi government, State will have to monitor Iraqi efforts to ensure the continued growth and success of an independent, competent, and sustainable regulatory authority for the control of radioactive sources and materials. According to these officials, the ongoing formation of the Iraqi government could affect the future of IRSRA in several ways. First, potential changes to the government’s organization or personnel could affect IRSRA’s funding and enforcement powers. For example, the transitional government formed from the January 2005 election chose new government ministers, including a replacement for the Minister of Science and Technology, who had aided the formation of IRSRA. In addition, according to State and Iraqi officials, in early 2005, the Iraqi government froze all new expenditures pending action by the transitional government on the budget. As a result, the funds for the IRSRA contract with the ministry to search for and recover sources were not available. However, State officials told us the collection missions are important for public safety and would go forward in anticipation of later payment. Finally, the Iraqi government will have to enact the laws and regulations that IRSRA will be drafting under its action plan. In addition, State officials told us that the evolving relationship of the northern Kurdish-controlled territories with the rest of Iraq could affect IRSRA’s operation. Before Operation Iraqi Freedom, the Kurds enjoyed some independence from the former Iraqi regime, and State officials told us that this partial independence has continued. IRSRA and Kurdish officials will be discussing whether and how IRSRA will operate in Kurdish-controlled territory.
According to the Chairman of IRSRA, Kurdish officials are likely to accept a proposal to create a branch office of IRSRA in Kurdish territory. This proposed office would be staffed by Kurds, but IRSRA would provide equipment, training, and protocols. Finally, the continuing insurgency is hindering IRSRA’s ability to find and collect unsecured radiological sources, as well as the ability of the United States to provide assistance. Iraqi and State officials are concerned that insurgents will target Iraqis who are seen associating with coalition forces on their official duties. For example, a MOST official told us that Iraqi workers entering a U.S. military base to collect sources would likely be ambushed by insurgents upon leaving the base. The hostile environment also impairs the ability of the United States to provide certain kinds of assistance. For example, DOE has decided not to send its experts into Iraq because of the ongoing hostilities, according to a DOE official. However, State and DOE are devising ways to assist without going to Iraq, such as organizing training for Iraqis at sites outside of the country. Although DOD has assessed its overall WMD mission in Iraq, it has not assessed its narrower mission to collect and secure radiological sources. In contrast, DOE has considered actions to address specific lessons learned from its experience in removing radiological sources from Iraq. DOD asked its National Defense University (NDU) to study DOD’s overall mission to find and eliminate WMD in Iraq, determine what lessons could be learned from it, and recommend improvements. The resulting report stated that DOD had not sufficiently planned and prepared for the WMD mission; had shortfalls in the needed transportation, military security, and logistics resources; and had operational difficulties arising from the extensive looting, public disorder, and hostile security environment.
The report recommended that DOD develop the capability to quickly eliminate WMD in hostile environments and establish a permanent organization for eliminating WMD. (See app. II for more information on the report.) DOD is responding to the report, in part, by seeking stronger planning and capacity for eliminating WMD, which a DOD Joint Staff officer told us would include the elimination of radiological materials. Specifically, DOD’s Strategic Command, which was assigned responsibility for this planning by the Secretary of Defense in January 2005, will first determine the needed capacities. The NDU report did not, however, offer any observations or recommendations regarding the narrower mission to collect and secure radiological sources in Iraq, in part because this was not the main focus of the original WMD mission in Iraq. Nevertheless, the author of the NDU report and a DOD Joint Staff officer told us that DOD’s efforts to solve overarching issues with its preparation for eliminating WMD will also address problems experienced with the mission to collect and dispose of radiological sources. DOE asked its contractor at one of its national laboratories to analyze the removal mission to identify lessons learned and recommend improvements. The resulting analysis highlights two lessons: the timing of funds and the availability of equipment both hindered rapid preparation for the mission. First, the contractor noted that the short amount of time between when the project was funded and when the team left for Iraq meant that almost every preparation task had to be conducted in emergency mode. DTRA funding became available in March 2004 after the National Security Council approved the mission, leaving less than 2-1/2 months for the team of DOE experts to complete all preparations in the United States.
Needed preparations included
- establishing a liaison with DTRA in Iraq;
- determining the list of sources to be removed based on DTRA’s inventory;
- developing safety and handling procedures for those specific sources;
- completing safety assessments for those procedures;
- determining the need for, and obtaining, a National Security Exemption to bring some of the radioactive sources to the United States;
- recruiting the remainder of the team members;
- cross-training team members to be able to complete another member’s work if necessary;
- getting the DOD training and authority necessary for the team to enter Iraq;
- obtaining contractor indemnification for the mission;
- preparing a U.S. staging facility for equipment; and
- procuring, testing, and packaging such equipment as protective clothing, tents, and communication equipment.

In addition, according to the contractor, preparation for the mission was almost critically delayed by difficulties in acquiring containers for transporting the radiological sources. DOE and its laboratories did not have a sufficient number and variety of containers to meet the projected needs of the removal mission, a shortfall that proved challenging to overcome in time to successfully conduct the mission. Specifically, certain special containers could not be procured in time from U.S. domestic suppliers as a result of shortages. Consequently, DOE arranged to lease four of these special containers from a foreign company by agreeing to provide the company blanket indemnity with up to approximately $1 billion in liability coverage in case of an accident involving the containers. The containers arrived a few days before the team and its equipment were to leave for Iraq. According to the contractor, if DOE’s negotiations to get the special containers had failed, the removal mission would have been delayed, and it is likely that many radiological sources with high radiation levels could not have been removed.
To support timely action in future removal operations, the contractor recommended that DOE seek ways to ensure the availability of advance funding and maintain a small fleet of versatile containers. DOE officials told us they saw merit in having a way to quickly fund future missions, although their agency’s funding (used solely for the disposal rather than the removal of the sources) was available early enough in the case of Iraq. With regard to maintaining a reserve of containers and other equipment, the officials solicited proposals and cost estimates from their national laboratories and have determined they cannot pursue this option given current budget constraints. Because DOD has not comprehensively reviewed its experiences in collecting and securing radiological sources in Iraq, its current efforts to improve its preparations to secure or destroy WMD in future missions will not benefit from important lessons learned from its radiological source mission. Reviewing such experiences and identifying lessons learned would help prepare for any future missions involving similar circumstances. In addition, DOD’s lack of readiness to quickly collect and secure sources after the war began indicates that additional planning and preparation could have been completed in advance of the mission.
Specifically, DOD had not
- planned to collect sources in a hostile environment, and thus had to act during the operation to integrate the objective of collecting and securing sources with military combat objectives;
- established criteria to determine which radiological sources needed to be collected, which were being properly used and thus could be left in place, and which posed minimal threat and thus did not need to be collected;
- specified health and safety standards for handling, securing, transporting, and disposing of sources;
- specified the organization responsible for collecting and securing sources in Iraq until shortly before the invasion of Iraq, or established agreements within DOD regarding issues such as using armed private security forces to protect contractors involved in collecting and securing sources;
- established agreements or points of contact with DOE to determine the support that DOE could provide, including the type of expertise, equipment, and disposal facilities;
- identified and addressed the legal and contractual issues associated with using private contractors to assist in collecting and securing sources, including using such contractors in hostile environments; or
- established guidelines to utilize the skills and address security concerns associated with the use of Iraqi radiological experts.

To ensure that the types of problems experienced in planning and preparing for securing Iraqi radiological sources do not recur, we recommend that the Secretary of Defense comprehensively review DOD’s experience for lessons learned for potential future missions.
In addition, to ensure that planning and preparation for potential future missions are carried out in advance, we recommend that the Secretary of Defense provide specific guidance for collecting and securing radiological sources, including
- integrating the objective of collecting and securing radiological sources with military combat objectives, including specifying how security protection, if needed, would be provided to the organization with responsibility for managing radiological sources and whether combat troops would be required to secure sources and provide protection for operations to collect and secure radiological sources;
- determining criteria to define which radiological sources (1) are of greatest risk and should be collected, (2) are being properly used and secured and thus can be left in place, and (3) pose minimal threat and thus do not need to be collected;
- specifying health and safety standards, after considering how U.S. standards for handling, securing, transporting, and disposing of radiological sources were modified for use in Iraq;
- officially designating the organization responsible within DOD for collecting, securing, and disposing of sources and establishing agreements between that organization and other DOD organizations that may be involved with these efforts;
- establishing agreements and points of contact with DOE and other federal agencies, as needed, to specify the coordination, technical expertise, equipment, and facilities that may be needed to collect and secure sources in, or remove them from, a foreign country;
- identifying under which circumstances and for what purposes DOD will contract with private firms to conduct activities to collect and secure radiological sources, and addressing legal and contracting issues to ensure the timely use of contractors; and
- establishing guidelines concerning the role of radiological experts from the country where sources need to be collected and secured.
We provided the Departments of Defense, State, and Energy with draft copies of this report for their review and comment. DOD agreed with four of our recommendations, partially concurred with two, and did not concur with two. DOD stated that it had previously addressed a number of issues identified in the recommendations and is currently addressing the others. DOD also stated that the draft report did not adequately address the efforts of the Nuclear Disablement Team (NDT) to recover radiological sources during the earlier phases of Operation Iraqi Freedom. DOD stated that the focus of the draft report appeared to be largely on the elimination phase of the operation and that it accepted our recommendations in that area. Our report assessed all phases of DOD’s planning and preparing for this mission, including the experiences of the NDT and its decision to forgo collecting sources because it lacked the proper equipment. We believe our report was appropriately focused on the elimination phase because that was when most sources were collected from around Iraq. DOD partially concurred with our recommendation to develop lessons learned, indicating that lessons learned have been developed from the NDT’s experiences for the phase of the operation before DTRA began to collect sources. That effort is in line with our recommendation, but unless DOD completes a more comprehensive review, we are concerned that it will miss the experience of all relevant DOD organizations and the full range of lessons learned. DOD also partially concurred with our recommendation about integrating the objective of securing radiological sources with military combat objectives, saying that this recommendation applies only to the later phase involving DTRA’s work. However, we disagree that our recommendation applies only to DTRA’s work.
As our report points out, there were problems with integrating the mission of collecting and securing sources with military combat objectives during the NDT phase of operations as well. Specifically, our report notes that during the NDT phase of operations, military commanders were left to make ad hoc decisions about recovering and securing sources, including using combat troops to guard sources. DOD’s response to this recommendation also noted problems DOD encountered in obtaining support from DOE. We believe our report adequately discusses problems DOD encountered in obtaining DOE assistance in collecting radiological sources; these problems stemmed from the lack of advance coordination that our report recommends DOD resolve prior to any future missions. DOD also commented that our recommendation demonstrated a lack of understanding by suggesting that combat troops should be involved in handling radioactive materials. We revised our recommendation to more clearly indicate that DOD should decide whether combat troops would again be required to secure sources and protect missions to collect sources, as they did in Iraq. DOD did not concur with our recommendation concerning health and safety criteria and suggested that our recommendation was too broad and ill defined. DOD’s rationale for this response is not clear. First, DOD said that guidance is and always has been available. Then, DOD said that because Operation Iraqi Freedom was the first time in recent history that a capability was developed and deployed to counter a WMD threat, no unit-level standard operating procedures existed. DOD then said that the NDT did develop procedures to “address all these issues” and that the NDT continues to work to develop changes to existing regulations to “address all these particulars.” We have clarified our recommendation to indicate that DOD, in specifying health and safety standards, should consider how U.S.
health and safety standards were modified in Iraq during the mission to collect and secure sources. We continue to believe that DOD should fully implement our recommendation. Finally, DOD did not concur with our recommendation to establish the organization responsible within DOD for collecting, securing, and disposing of sources. DOD said that it had already identified this organization as the NDT and that the Commander of Strategic Command has overall responsibility for issues related to WMD, a subset of which is collecting, securing, and disposing of sources. However, a DOD Joint Staff officer told us in August 2005 that Strategic Command had not yet issued its plan for combating WMD, in which the specific organization responsible for collecting, securing, and disposing of sources will be officially designated. DOD’s complete comments are reprinted in appendix III. State suggested clarifications of its current outlook for U.S. assistance to Iraq for radioactive source regulation and of the reason for the delay in State’s approval of export licensing, which we have incorporated into this report. Separately, State provided technical comments, which we incorporated as appropriate. State’s written comments are reproduced in appendix IV. DOE had no written comments on the report but did state that it will work with DOD to determine criteria to define which radiological sources are of greatest risk. We are sending copies of this report to the Secretary of Defense, the Secretary of Energy, the Secretary of State, and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V. This report (1) assesses Department of Defense (DOD) readiness to collect and secure radiological sources in Iraq from the start of the 2003 war, (2) presents information on the number of radiological sources the Defense Threat Reduction Agency (DTRA) secured by the time of the June 2004 transition to the interim Iraqi government, (3) describes the assistance the United States has provided, and plans to provide in the future, to the Iraqi government to help regulate radiological sources in Iraq, and (4) examines DOD and Department of Energy (DOE) actions to assess their experiences in Iraq and apply any lessons learned to possible future radiological source collection missions. For our first objective, to assess DOD’s readiness to collect and secure radiological sources, we reviewed planning efforts before the war began in March 2003; concerns and efforts regarding radiological sources before DTRA began its collection mission in late 2003; relevant policy guidance; and DTRA’s preparations to collect unsecured sources in Iraq. To understand DTRA’s prewar plans, we interviewed a division chief of DTRA’s Combat Support Directorate, who prepared these plans, and other DOD officials involved in planning before the war. For concerns and efforts before DTRA began to collect sources, we interviewed the Nuclear Disablement Team commander and other team members and reviewed an unclassified report on their activities in Iraq. We also interviewed the senior chemical officer for the commander of coalition land forces who secured radiological sources in Iraq. For policy guidance, we examined two DOD policy memorandums on radiological sources in Iraq and interviewed DTRA and DOD officials involved with the development of the guidance. 
For specific preparations to collect sources, we interviewed DTRA officials who prepared for the mission, including the two commanders who sequentially prepared for the mission in Iraq and the DTRA director responsible for the mission. We also reviewed the contract between DTRA and its contractor and the agreement between DTRA and DOE. We interviewed DTRA officials who developed and managed the contract, the DOE official who facilitated the development and execution of the contracts, and the contractor’s project managers and staff.

For our second objective, to present information on the number of radiological sources secured, we assessed the data reliability of five inventories of radiological sources in Iraq and summary data about the sources’ radioactivity. We asked those responsible for creating or maintaining the inventories a series of questions focused on data reliability, covering issues such as internal control procedures and the accuracy and completeness of the data. Our assessment follows:

1. We assessed the reliability of an inventory of the location, number, and type of sources in Iraq at the beginning of the war that DTRA received during its mission, and based on our work, we determined that these data were not sufficiently reliable for the purposes of this report to specify the number of sources at the beginning of the war. Because the source of this information is sensitive, we did not report its origin. DTRA officials told us they found these data to be unreliable, although they did match well with sources found at some sites. For our assessment of the data, we reviewed the inventory and interviewed key DTRA and contractor staff who worked with this information. We found major discrepancies, including duplications resulting in multiple counts of the same sources and evidence of incomplete data. Therefore, we did not use these data in our report.

2.
We assessed the reliability of a May 2004 inventory of sources collected in Iraq that DTRA had created before the removal mission, and we determined that, for the purposes of this report, the inventory was not sufficiently reliable to ascertain the number and types of sources, but it was reliable enough to identify the general locations of places where sources were found. To assess these data, we obtained responses to questions regarding data reliability by interviewing key DTRA and contractor staff who worked with this information. We also corroborated the data whenever possible with DOE experts and DOE’s inventories of collected sources taken to the United States and those left in Iraq. DTRA’s contractor staff told us they were unable to open some containers and counted each of them as one source. However, when DOE experts opened these containers, they found that some held multiple sources, increasing the count from about 700 sources to about 1,400. Also in the DTRA inventory, the type of radiological material was misidentified for some sources, according to DOE experts and documents. Therefore, we reported the number of sources based on DOE’s work.

3. We assessed the reliability of DOE’s inventory of the approximately 1,000 sources collected in Iraq and taken to the United States and determined that these data were sufficiently reliable for the purposes of this report. To assess these data, we obtained responses to questions regarding data reliability by interviewing key DOE experts who worked with this information. We were told that the number of sources taken to the United States may be a close approximation, because DOE experts in some instances relied on counts by DTRA, and we therefore reported it approximately.

4.
We assessed the reliability of a DOE inventory of the approximately 400 sources collected in Iraq and remaining in Iraqi custody, and determined that these data were sufficiently reliable for the purposes of this report. To assess this data, we obtained responses to questions regarding data reliability by interviewing key DOE experts who worked with this information. They told us that the number is a close approximation, and therefore we reported it approximately. 5. We assessed the reliability of a DTRA inventory of the approximately 700 sources determined to be secured and in use in Iraq, and determined that these data were sufficiently reliable for the purposes of this report. To assess this data, we obtained responses to questions regarding data reliability by interviewing key DTRA and contractor staff who worked with this information. DTRA’s contractor staff told us they did not open the devices that contained sources and, therefore, depended on the labeling and documentation of the devices, if available, to record information about their number, type, and radioactive strength. The inventory assumed that there was one source per device, but contractor staff told us that some of these devices may have had multiple sources, and therefore we reported them approximately. To report the radioactivity of sources collected in Iraq and taken to the United States or remaining in Iraq, we depended on information provided to us in a DOE summary of the sources removed from Iraq, and determined that these data were sufficiently reliable for the purposes of this report. We discussed this data with DOE experts who worked with this information. They told us that the radioactivity of the sources taken from Iraq was accurate to within 10 percent to 20 percent of the total reported, and we therefore reported the total approximately. 
They also told us that the reported radioactivity of the collected sources remaining in Iraq was somewhat more accurate because these less-radioactive sources could be handled and measured individually, but that the total was still an approximation. Therefore, we reported that total approximately as well. To present information on the missions performed to collect and remove radiological sources, we examined the available contractor reports on the approximately 140 missions to find and collect sources in Iraq, as well as contractor reports on the mission to remove sources from Iraq. We interviewed DTRA officers and staff and DOE experts who accompanied these missions. We also interviewed contractor staff who performed this mission and the contractor's project manager for the mission in Iraq. For our third objective, to describe U.S. efforts to help the new Iraqi government regulate sources, we examined Department of State planning documents and a Coalition Provisional Authority order to establish an Iraqi agency to regulate radiological sources. We discussed assistance, as well as uncertainties and challenges in assisting Iraq, with officials from State and DOE. In addition, we discussed with DTRA officials DTRA's actions to support State's effort to assist Iraq. We also discussed efforts to secure radiological sources with the Chairman of the Iraqi Radiological Source Regulatory Authority during his visit to Washington, D.C., in March 2005; at the same meeting, we discussed efforts to search for unsecured sources with an Iraqi program director from the Ministry of Science and Technology. We interviewed State and DOE officials about their current and intended contributions to the action plan drafted in December 2004 and further discussed in March 2005 meetings.
For our fourth objective, to describe what DOD and DOE have done to learn from their experience in Iraq and how such lessons might be applied in the future, we interviewed DOD and DOE officials about their efforts to document lessons learned. We also reviewed a February 2004 National Defense University study of lessons learned from the mission to eliminate weapons of mass destruction (WMD) and discussed the study with its author. We discussed DOD's work to assess its capability to interdict and eliminate WMD materials, including radiological sources, with DOD planning officials and reviewed the DOD memorandum initiating this effort. We also examined DOE's preliminary analysis of lessons learned with DOE officials and interviewed the DOE expert who prepared it. Because of the continuing hostilities, we did not travel to Iraq. We performed our work from May 2004 through August 2005 in accordance with generally accepted government auditing standards.

The Center for the Study of Weapons of Mass Destruction (WMD Center) at the National Defense University (NDU) developed lessons and recommendations for WMD elimination operations at the request of the Department of Defense (DOD), which commissioned the study in late 2002. The WMD Center met with DOD and interagency personnel to discuss elimination operations and also examined prewar planning and its execution in Iraq. In February 2004, the WMD Center hosted a conference with those who had been engaged in the elimination mission in Iraq to identify lessons learned and ways to institutionalize WMD elimination capacity for the future. Major findings and key recommendations from the study were subsequently published in an NDU report. The NDU report identifies three wrong lessons from the Iraq experience that should be avoided in order to arrive at the correct lessons. A first wrong lesson is that Iraq is a rare situation.
According to the report, since most of the United States' potential adversaries have actual or suspected WMD capabilities and terrorists appear committed to acquiring WMD from weak, poor, or failed states, the U.S. military will likely confront WMD elimination missions as often as it engages in war. A second wrong lesson is that the failure of intelligence on WMD explains all of the failures of the WMD elimination mission. While faulty intelligence contributed, the Iraq experience revealed substantial shortcomings in DOD's ability to eliminate WMD, including weaknesses in planning, training and exercises, capabilities, and resources. A third wrong lesson is that elimination should not be a DOD mission but rather should mostly be done by civilian or international organizations with the proper expertise after the military minimally secures WMD sites. Instead, the Iraq experience suggests that the U.S. military must quickly attend to finding, securing, and disposing of WMD to prevent the loss of information about WMD programs and the potential dispersal of WMD in the chaos following an invasion. Even though WMD was not found, the report concludes that the Iraq experience shows that major improvements must be made if the United States is to succeed in a future WMD elimination mission. For example, according to the study, DOD had not sufficiently planned and prepared for the mission to locate, secure, and dispose of WMD, in part because DOD only began to rapidly plan for operations and develop capacities for the elimination mission in late 2002. The study observed that, before the end of major combat operations, the teams searching for WMD experienced significant operational problems. One key problem was that operations had to be adjusted because existing intelligence was directing teams to suspected sites that proved to have little evidence of WMD activity.
Operations thus shifted from the expected focus on WMD to a more geographically dispersed investigation of potential WMD sites. Operations also shifted toward gathering information about WMD programs, but most teams lacked sufficient training and expertise for retrieving important information contained in documents and computers, as well as for interviewing Iraqis who might be knowledgeable about WMD programs. Further, the organization responsible for searching for WMD depended on other military commands for capabilities such as transportation, logistics, communications, linguists, and security. When these commands faced competing priorities, shortfalls in these capabilities occurred and the search for WMD was delayed. Additionally, the extensive looting, public disorder, and uncertain security environment made the search for WMD complex, resource intensive, and dangerous. Based on the Iraq experience, the NDU report recommended that DOD develop and maintain the capability to quickly eliminate WMD in hostile environments. More specifically, the report included eight key recommendations: (1) DOD should institutionalize the WMD elimination mission, embedding it into the planning and budget process along with other tasks undertaken in combat operations. (2) To establish clear organizational responsibility, DOD should create a standing military organization that is ready to perform the WMD elimination mission, including in a combat situation. Although this organization should be military, it should develop strong links with interagency and international partners, civilian experts, and the private sector. (3) DOD should be prepared to conduct this mission in an inhospitable environment and as quickly as possible, concurrently with major combat operations if necessary.
(4) Elimination planning must assume imperfect intelligence on WMD, operations should be prepared to respond to emerging intelligence, and intelligence sharing must be improved. (5) To test plans as well as identify and address problems with procedures, the organization with WMD elimination responsibility should conduct training and exercises. (6) Rather than focusing on WMD sites, as initially occurred in Iraq, future elimination missions should target WMD programs, using a balanced examination of WMD sites, people, and documentation. (7) DOD should seek technical innovations to improve the efficiency, speed, and overall effectiveness of elimination operations. The objective is to reduce the needed manpower because it is in extreme demand before, during, and after a war, as shown in Iraq, and to address technical issues in Iraq operations, such as false readings on chemical detectors and electronic communication limitations. (8) Finally, senior-level government advocates are necessary to ensure adequate and sustained funding and prioritization to develop a significant WMD elimination capacity.

In addition to the contact named above, Lee Carroll, Nancy Crothers, Davi M. D'Agostino, Dan Feehan, Peter Grana, Terry Hanford, Dave Maurer, Judy Pagano, and Keith Rhodes (GAO's Chief Technologist) made key contributions to this report.

Following the invasion of Iraq in March 2003, concerns were raised about the security of Iraq's radiological sources. Such sources are used in medicine, industry, and research, but unsecured sources could pose risks of radiation exposure, and terrorists could use them to make "dirty bombs." This report provides information on (1) the readiness of the Department of Defense (DOD) to collect and secure sources, (2) the number of sources DOD collected and secured, (3) U.S. assistance to help regulate sources in Iraq, and (4) the lessons DOD and the Department of Energy learned.
DOD was not ready to collect and secure radiological sources when the war began in March 2003 and for about 6 months thereafter. Before DOD could collect radiological sources, it had to specify criteria for which sources should be collected and how to safely collect them, coordinate within DOD, coordinate assistance from the Department of Energy (DOE), and resolve contract issues. DOD did not issue guidance for collecting and securing sources until July 2003 and did not finalize the terms of the contract to collect sources until September 2003. Until radiological sources could be collected, some sources were looted and scattered, and some troops were diverted from their regular combat duties to guard sources in diverse places. In June 2004, DOD removed about 1,000 of the 1,400 radiological sources collected in Iraq and sent them to the United States for disposal. DOD left in place approximately 700 additional sources that it had judged were adequately secured and being used properly by Iraqis. According to DOD and Department of State officials, however, the total number of radiological sources in Iraq remains unknown. The United States assisted in establishing an Iraqi agency to regulate radiological sources. Since June 2004, State and DOE have helped this new agency develop an action plan with assistance from the International Atomic Energy Agency. However, according to State officials, because of uncertainties associated with the continuing formation of the Iraqi government, State will have to monitor Iraqi efforts to ensure the continued growth and success of an independent, competent, and sustainable regulatory authority for the control of radioactive sources and materials. Both DOD and DOE are considering improvements based on their Iraq experiences. 
A 2004 study of lessons learned, requested by DOD, recommended that DOD develop the capability to quickly eliminate weapons of mass destruction in hostile environments, but it did not focus on the narrower radiological source mission. In contrast, DOE has contracted for a study to examine lessons from its role in removing radiological sources from Iraq.
Each state, as well as the District of Columbia and Puerto Rico, is required to carry out a continuing, cooperative, and comprehensive statewide transportation planning process. The statewide transportation planning process addresses both urbanized and nonmetropolitan areas of the state and includes both highway and transit needs. For urbanized areas with a population of 50,000 or more, state DOTs must coordinate planning activities with MPOs—federally recognized and funded organizations representing local governments that lead transportation planning activities in metropolitan areas. To receive federal transportation funding, any project in an urbanized area must emerge from the relevant MPO and state DOT planning process. For nonmetropolitan areas not covered by an MPO, states must consult with and provide opportunities for local officials to participate in statewide planning. Some states choose to fulfill this requirement by consulting with RPOs, which are typically voluntary planning organizations that serve as a forum for local officials to develop consensus on regional transportation priorities. In some cases, RPOs may serve a wide geographic area comprising multiple rural counties whose combined population may greatly exceed 50,000. States without RPOs may consult directly with nonmetropolitan local officials with responsibility for transportation planning to fulfill their consultation requirements. To meet federal planning requirements, states must develop a long-range statewide transportation plan and a state transportation improvement program (STIP) (see fig. 1). The long-range statewide transportation plan establishes a state’s strategic vision and direction for its transportation investments for at least a 20-year period. This plan may vary in content from state to state, from a broad, policy-oriented document to a document containing specific project information. 
However, the plan must provide for the development and implementation of a multimodal transportation system for all areas of the state, and for public comment before it is published. Currently, there are no requirements for the long-range statewide transportation plan to include specific project information, a financial plan demonstrating how the plan is to be funded and implemented, performance measures for achieving goals, or a regularly updated schedule, and the state is not required to obtain federal approval for the plan. The STIP is the state program of transportation projects covering at least a 4-year period that are to be supported with federal surface transportation funds, as well as regionally significant projects requiring an action by FHWA or FTA, whether or not federally funded. Each project must be consistent with the long-range statewide transportation plan and approved long-range metropolitan transportation plans. The STIP must be fiscally constrained, meaning it shall include a project, or an identified phase of a project, only if full funding can reasonably be anticipated within the time period contemplated for completion of the project. Although federal planning statutes and regulations do not define specific national goals or outcomes that states should address in their planning documents, the statewide planning process must provide for the consideration and implementation of specific statutorily defined planning factors in developing both the long-range statewide transportation plan and the STIP, which include economic vitality, safety and security, accessibility and mobility, protecting and enhancing the environment, and promoting energy conservation, among others. MPOs are also required to produce a long-range transportation plan, referred to as a metropolitan transportation plan, and a transportation improvement program (TIP). 
The metropolitan transportation plan spans at least 20 years and includes long- and short-range strategies and actions to ensure an effective, integrated multimodal transportation system. The TIP spans at least 4 years and includes all projects in the MPO’s jurisdiction that are to receive federal surface transportation funding or that are of regional significance. The TIP must, at a minimum, be updated every 4 years, and the metropolitan transportation plan must be updated every 4 or 5 years. Both the TIP and the metropolitan transportation plan must be fiscally constrained. In addition, MPOs serving urbanized areas with a population of more than 200,000 are required to develop a congestion management process that identifies actions and strategies to reduce congestion. States participate in the metropolitan planning process by, for example, reviewing and approving the MPO’s TIP. If the state approves the TIP, the state must incorporate it, without change, into the STIP. At least every 4 years, state DOTs are required to submit an updated STIP to FHWA and FTA for review and approval, in which the state certifies that the transportation planning process has been carried out in accordance with federal planning requirements. FHWA and FTA must review each state DOT’s STIP and make a joint finding on the extent to which the STIP is based on a planning process that meets or substantially meets the federal planning requirements, including but not limited to whether the state has demonstrated fiscal constraint in the STIP, used a documented process for involving the public and consulting with nonmetropolitan local officials, and included MPO TIP projects in the STIP. USDOT is not required to review or approve long-range statewide transportation plans, but states must provide copies of any new or amended plans to USDOT for informational purposes. The Federal-Aid Highway Program is administered through a federal-state partnership. 
State and local governments execute the programs by matching and distributing federal funds; planning, selecting, and supervising projects; and complying with federal requirements. FHWA, through its division office in each state, delivers technical expertise and fulfills oversight functions. Federal transit programs are generally administered through a federal-local partnership, although rural programs are administered at the state level. FTA, through its headquarters and 10 regional offices, provides financial assistance, establishes requirements, performs oversight, and conducts research. Grant recipients such as local transit agencies are responsible for matching federal funds and for planning, selecting, and executing projects while complying with federal requirements. In supporting the statewide transportation planning process, FHWA provides states with the bulk of the federal funding for planning and research (see fig. 2). Through its State Planning and Research (SPR) program, FHWA provides sums equal to 2 percent of each state’s formula apportionment for several Federal-Aid Highway programs. In fiscal year 2009, FHWA provided states with a total of more than $680 million in SPR funds. FHWA regulations give states significant flexibility in applying SPR funds for planning—as long as FHWA has determined that the state has collected data that FHWA requires on the performance, condition, and use of the nation’s transportation systems, including the condition of road and pavement surfaces. These data are collected through FHWA’s Highway Performance Monitoring System (HPMS) and they constitute some of the performance data that states collect on the condition of their public roads. A state may apply up to 75 percent of its annual SPR allocation to activities of its choosing to support long- and short-range planning requirements, but generally must expend no less than 25 percent of its annual SPR funds on research, development, and technology transfer activities. 
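The SPR funding rules described above amount to simple arithmetic. The sketch below illustrates them with a hypothetical apportionment figure; the function name and dollar amount are illustrative, not taken from the report.

```python
def spr_split(formula_apportionment):
    """Illustrate the SPR rules described above: SPR funds equal
    2 percent of a state's Federal-Aid Highway formula apportionment;
    up to 75 percent may support planning activities of the state's
    choosing, and generally at least 25 percent must be spent on
    research, development, and technology transfer."""
    spr_total = 0.02 * formula_apportionment
    return {
        "spr_total": spr_total,
        "max_planning": 0.75 * spr_total,  # ceiling on flexible planning use
        "min_research": 0.25 * spr_total,  # floor for research activities
    }

# Hypothetical $1 billion formula apportionment
split = spr_split(1_000_000_000)
```

For a hypothetical $1 billion apportionment, this yields $20 million in SPR funds, of which up to $15 million could support planning and at least $5 million would go to research.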
State DOTs may apply their SPR funds to in-house planning activities or allocate amounts to support the planning activities of MPOs, RPOs, or other planning partners. States must document activities proposed to be accomplished with SPR funds, and FHWA must approve these activities. FTA apportions planning funds to states through its State Planning and Research Program (SPRP). As with their SPR funds, states may authorize some of their SPRP assistance to support the planning activities of MPOs, local governments, or other planning organizations. State DOTs are encouraged to provide FTA with an SPRP work program in their SPRP grant applications. In recent years, we have recommended that federal transportation programs be based on well-defined goals and that planning be more performance based and better linked to outcomes. We have previously reported that, for many surface transportation programs, goals are numerous and conflicting and federal oversight of these programs has no relationship to the performance of either the transportation system or of the grantees receiving federal funds. Performance measurement, a central component of performance-based planning, is the ongoing monitoring and reporting of program accomplishments. As our prior work has shown, measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base their organizational and management decisions. Recently, we asked Congress to consider making federal and metropolitan transportation programs more performance based by requiring MPOs to identify specific transportation planning outcomes and requiring DOT to assess MPOs’ progress in achieving these outcomes through a certification review process. Draft legislation authorizing surface transportation programs would require USDOT to set transportation planning performance measures for MPOs and require MPOs to develop performance targets to meet those measures. 
In addition, we have recommended that FHWA link its activities and staff expectations to its oversight goals and measures and develop an overall plan for its oversight activities tied to goals and measures. Through our survey and interviews, we found that state DOTs commonly conduct several research activities in developing their long-range statewide transportation plans, including developing inventories and reviewing existing transportation assets, conducting corridor studies, and using transportation demand models. In addition, most state DOTs reported that their long-range statewide transportation plans include some performance-based planning elements, such as broad goals and objectives for the state's transportation system, but that their plans do not include other key elements, such as quantitative performance targets and project and cost information. Developing inventories and reviewing existing transportation assets. Forty-six state DOTs reported that they inventoried major elements of their existing transportation system, such as interstate highways and bridges, and 34 state DOTs reported that they reviewed the condition of existing assets to determine those with the greatest need. USDOT officials and transportation stakeholders told us that many state DOTs have focused their statewide long-range planning efforts on maintaining the condition and operation of their existing assets. States must collect pavement condition and other data and annually report these data to FHWA's HPMS program, generally using SPR funds to pay for the data collection. States must also inspect and report on the condition of their bridges, generally every 2 years, through FHWA's National Bridge Inspection Program. As we previously reported, many states use bridge management systems for gathering and analyzing data on bridge conditions, such as structural adequacy and safety.
These systems help states manage their bridge assets and more efficiently allocate limited resources among competing priorities. For example, Pennsylvania DOT (PennDOT) and Montana DOT (MDOT) maintain road and bridge management systems to track the condition of pavement surfaces and the structural sufficiency of bridges. MDOT reported that information generated by these systems is used to track the actual performance of the highway system after investments are implemented, to show progress in meeting long-range goals. Conducting corridor studies. In our survey, 34 state DOTs reported conducting regional and statewide corridor studies for the statewide planning process. Through corridor studies, state DOTs can focus their research on roadways of critical importance by monitoring variables such as traffic flow and congestion, trip time, and crash and safety data. Federal planning regulations encourage states to consider strategies to address corridors where congestion threatens the efficient functioning of the state's transportation system. For example, officials with Colorado DOT reported that the state's long-range plan is corridor-based, in that the state worked with MPOs and RPOs across the state to define a vision for each of its 350 corridor segments and to establish need categories for each corridor that consider financial abilities and limitations. Using transportation demand models. In our survey, 29 state DOTs reported using a statewide transportation demand model, also known as a travel demand model, and about half of all state DOTs reported using such models to develop scenarios to inform their long-range statewide transportation plan. Used to forecast future travel demand, the models provide planners with important information on how population growth and proposed investments could affect the operation of the transportation system.
Such models, however, are complex and require extensive technical capacity and current information on roadway and transit system characteristics and operations, as well as current and projected demographic information. According to stakeholders that we interviewed, some states do not have sufficient data to produce travel demand models that can be used to forecast future transportation needs across the state. Some of the highway performance data that states collect through FHWA’s HPMS program could be useful for travel demand modeling—including data on population and land area, the number of vehicle miles traveled on some public roads, and the percentage of vehicle miles traveled by various vehicle types. Officials from one MPO we interviewed reported that statewide travel demand modeling is less valuable than such modeling in MPO areas, where congestion is a greater concern. To address modeling and other technical aspects of planning, the vast majority of state DOTs (45) reported that they procured contractor services in developing their statewide long-range plans. Nearly all state DOTs reported including broad goals and objectives in their long-range statewide transportation plans, but, according to our survey, many plans do not include quantitative performance targets and project-specific information, such as fiscally constrained financial plans, project lists, and cost estimates (see fig. 3). Although federal statutes or planning regulations do not require states to include quantitative performance targets in their long-range statewide transportation plans, some states reported including them, and we have previously reported that similar targets should be included in similar strategic plans developed by federal agencies. In addition, project-specific information is not required to be included in long-range statewide transportation plans, although some states provide these elements in their plans. Broad goals and objectives. 
All 52 state DOTs reported including broad state transportation goals, and nearly all (50) reported including objectives in their long-range statewide transportation plans. According to USDOT, goals represent the desired outcomes for the transportation system as a whole, and objectives are specific, measurable statements that identify what is to be accomplished in order to attain the goals. Such goals and objectives in long-range statewide transportation plans should lead to strategies and investments that support the attainment of objectives. Federal planning regulations do not establish specific national goals or desired outcomes for states to address in their long-range statewide transportation plans, although states must consider specific statutorily defined planning factors in their planning process. Quantitative performance targets. Fewer state DOTs (18) reported including quantitative performance targets to measure progress in achieving state transportation goals. Although quantitative performance targets are not federally required for long-range statewide transportation plans, the Government Performance and Results Act of 1993 (GPRA) requires federal agencies in their strategic plans to develop performance goals that are objective, quantifiable, and measurable, and to establish performance measures that adequately indicate progress toward achieving those goals. Our guidance to federal agencies developing GPRA-required annual performance plans states that an agency’s performance goals and measures usually should include a quantifiable, numerical target level or other measurable value. Although not required, performance targets within long-range statewide transportation plans could provide a performance standard by which the state DOT can demonstrate to the public what effect its investment decisions are having on achieving the goals established in the plan. 
Similarly, 13 state DOTs reported that their long-range statewide transportation plan provides a method for the public to track progress in implementing the plan. For example, PennDOT publishes an annual implementation report that details actions for achieving plan strategies and specific responsibilities and time lines for implementing the plan. Project-specific elements. The majority of state DOTs reported that their long-range plans did not include project-specific information, such as a financial plan describing how the plan would be funded, project lists, or cost estimates. Specifically, fewer than half of all state DOTs (20) reported that their most recent long-range statewide transportation plan included a financial plan demonstrating fiscal constraint. According to federal planning regulations, a financial plan demonstrates consistency between reasonably available and projected sources of federal, state, local, and private revenues and the costs of implementing proposed transportation system improvements. Although state DOTs are not required to provide a financial plan in the long-range statewide transportation plan, federal law requires MPOs to provide this information in their long-range, metropolitan transportation plan. Fewer state DOTs reported that their long-range plans include a list of specific projects to be programmed (13) or cost estimates for those projects (12). These survey results are consistent with the information provided by USDOT officials and stakeholders that we interviewed, who told us that many long-range statewide transportation plans are policy-based documents that provide broad, general goals for the state, but do not provide project-level information on how the state will achieve these goals. Similarly, federal planning regulations permit long-range statewide transportation plans to consist of policies, strategies, or both, but not necessarily specific projects, over the minimum 20-year forecast period.
State DOT officials that we interviewed provided reasons for not including project-specific information in their long-range statewide transportation plan. For example, PennDOT officials reported that they do not include such information because they do not want to duplicate or override the projects included in metropolitan transportation plans, where such elements are required. USDOT officials reported that the decision whether to provide project-specific information in long-range statewide transportation plans involves trade-offs for states. For example, including projects in long-range plans can provide a greater level of transparency into the state’s project selection process; however, the public may see these plans as final decisions, giving state DOTs less flexibility to alter the plan in the future.

In developing a STIP—the list of projects prioritized by the state to receive federal funding over a 4-year period—state DOTs reported performing several activities to assess transportation needs and determine funding allocation amounts. After completing these activities, state DOTs reported they base their selection of projects on a range of factors, including funding availability and priorities established by the governor, as well as political and public support for specific projects.

Research to assess needs. State DOTs commonly reported assessing their transportation needs by using available transportation data and by meeting with local officials in state regions—activities that they also reported performing in developing their long-range statewide transportation plans. Forty-three state DOTs reported reviewing the condition of existing transportation assets to identify those with the greatest need, and the same number of state DOTs also reported meeting with local officials in state regions to determine needs.
For example, MDOT officials reported that Montana uses a “Performance Programming Process” to assess areas of need based on pavement, bridge, highway-safety, and congestion data collected by the state. The planners use the data to develop an optimal funding allocation program based on needs, and district engineers, in consultation with local elected officials across the state, nominate projects for inclusion in the STIP.

Allocating funding. Through our survey and interviews with state DOT officials, we found that state DOTs used a combination of approaches to determine how to allocate available funding across competing transportation needs and state regions. For example, 47 state DOTs reported allocating funding across different project types, such as bridge or road maintenance, or transit projects. Forty state DOTs reported allocating transportation funding across geographic regions based on need, and 35 reported using predetermined formulas to allocate funding to different regions in the state. Although formula allocations may help states decide how to distribute funding across competing regions, we have previously reported that the use of formulas to distribute federal highway funds to states results in federal allocations that have only an indirect relationship to needs and no relationship to performance or outcomes. In some cases, state DOTs use formula allocations that consider needs to distribute STIP funding. For example, PennDOT officials said that, as a general rule, they attempt to allocate at least 80 percent of state and federal transportation funding toward preservation and maintenance activities, while applying much of that funding toward reducing the number of structurally deficient bridges in the state.
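A predetermined, needs-based formula allocation of the kind described above can be sketched in a few lines. The need metric (structurally deficient bridge deck area), the region names, and the dollar figures here are hypothetical illustrations, not any state’s actual formula.

```python
# Hypothetical needs-based formula allocation: each region's share of a
# funding category is proportional to its share of a measured need metric
# (here, structurally deficient bridge deck area, in square feet).
# Region names and figures are illustrative only.

def allocate_by_need(total_funding, need_by_region):
    """Distribute funding to regions in proportion to each region's need."""
    total_need = sum(need_by_region.values())
    return {region: total_funding * need / total_need
            for region, need in need_by_region.items()}

deficient_deck_area = {"Region A": 120_000, "Region B": 60_000, "Region C": 20_000}
allocations = allocate_by_need(10_000_000, deficient_deck_area)

for region, amount in sorted(allocations.items()):
    print(f"{region}: ${amount:,.0f}")
# Region A holds 60 percent of the deficient deck area and so receives $6,000,000.
```

A formula like this is transparent and easy to apply, but, as noted above, a purely formulaic distribution bears only an indirect relationship to performance or outcomes.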
Within its bridge program, PennDOT uses formulas to distribute federal and state funding to planning regions based on the percentage of bridge deck area in the region considered to be structurally deficient, with a goal of allocating 85 percent of bridge money to improve structurally deficient bridges.

State DOTs reported that they select projects for inclusion in their STIP based on a range of factors, but funding availability and political and public support were of greater importance than the results of economic analysis of a transportation project’s benefits. Economic analysis was one of the factors that state DOTs cited less often as important in selecting STIP projects (see fig. 4). In addition, state DOTs must incorporate the approved TIPs of MPOs within the state, without change, into the STIP.

Funding availability. Nearly all state DOTs reported that the availability of federal and state funds was of great or very great importance in determining which projects to include in the STIP, as the amount of funding that is available determines the number and scale of projects that the state can undertake. As noted in our prior work, although transportation revenues have, until recently, increased in nominal terms, the federal and state motor fuel tax rates have not kept up with inflation; and the purchasing power in real terms of revenues generated by federal and state motor fuel taxes has been declining since 1990. Consequently, state and regional transportation decision makers in recent years have devoted more funding to highway investments that preserve, enhance, and maintain existing infrastructure than to investments that add capacity. Most state DOTs (46) also cited the availability of state or local funds to match federal funds as being of great or very great importance in selecting STIP projects.
For example, West Virginia DOT (WVDOT) officials told us that WVDOT is responsible for maintaining 92 percent of the road miles in the state, and because many of the counties in the state are economically distressed, they are unable to provide a local match for local road improvements.

Governor’s priorities and political and public support. About two-thirds (35) of state DOTs also identified the governor’s funding priorities as a factor of great or very great importance in selecting transportation projects. For example, Pennsylvania’s governor set a goal to reconstruct or replace 1,145 bridges in the state by 2010, and PennDOT’s most recent STIP indicates that in fiscal year 2009, PennDOT allocated almost half of its STIP funding toward bridge projects. Other STIP project selection factors that more than half of state DOTs cited as being of great or very great importance were public (32) and political (30) support for specific transportation projects. For example, in interviews with officials at Washington state DOT, we learned that the state legislature increased state gas tax revenues by 5 cents per gallon in 2003 and by 9.5 cents per gallon over a 4-year period in 2005, raising about $11 billion for highway, bridge, ferry, and other improvements. To help raise support for the tax increases, state legislators needed to identify for voters the specific projects to be funded with the tax revenues, and the legislature, with assistance from the Washington state DOT, wrote the projects into the state legislation.

Federal earmarks. The majority of state DOTs (27) also reported that federal earmarks, also known as congressional directives, were of great or very great importance in selecting STIP projects. In 2007, the USDOT Inspector General reported that SAFETEA-LU included a total of 7,808 earmarks for fiscal year 2006 for FHWA and FTA programs, accounting for just more than $8 billion.
FHWA and FTA provide such funds through grants to state and local agencies, which then must include the earmarked projects in the STIP to be implemented. In prior work on the administration of federal earmarks within USDOT and other federal agencies, FHWA and FTA officials reported that complying with earmarks can sometimes displace higher priority projects with lower priority projects. In our review, FHWA officials in one division office told us that some projects funded through federal earmarks may circumvent the statewide planning process by funding projects that are not state priorities. In addition, federal earmarks may provide only partial or initial funding for a project, leaving the state and local governments to obtain future funding to complete a project and cover future maintenance costs.

Economic analysis. In our survey, we found that economic analysis was one of the factors that state DOTs cited less often as important in selecting STIP projects (see fig. 4). Eleven state DOTs reported that the results of economic analyses of STIP projects—such as benefit-cost, cost-effectiveness, or economic-impact analysis—were of great or very great importance in selecting projects. According to FHWA guidance, economic analysis takes a long-term view of infrastructure performance and costs and enables state DOTs to target scarce resources to the best uses (those that maximize benefits to the public) and to account for their decisions. In the planning process, economic analysis can be applied with collected performance data to make project selection more performance based by screening project alternatives, comparing expected performance benefits—such as reductions in travel time—with the expected costs of implementing an alternative.
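The screening just described, weighing an alternative’s expected benefits such as travel-time savings against its expected costs, can be illustrated with a simple discounted benefit-cost ratio. This is a sketch under assumed values: the 7 percent discount rate, 20-year horizon, alternative names, and dollar figures are illustrative, not drawn from FHWA guidance.

```python
# Illustrative benefit-cost screening: discount a project's expected annual
# benefits (e.g., the dollar value of travel-time savings) to present value
# and compare them with the implementation cost. The discount rate, horizon,
# and all figures are assumptions for this sketch.

def present_value(annual_benefit, discount_rate, years):
    """Present value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

def benefit_cost_ratio(annual_benefit, cost, discount_rate=0.07, years=20):
    """A ratio above 1.0 means discounted benefits exceed costs."""
    return present_value(annual_benefit, discount_rate, years) / cost

# Two hypothetical alternatives for the same corridor.
widening = benefit_cost_ratio(annual_benefit=2_500_000, cost=30_000_000)
retiming = benefit_cost_ratio(annual_benefit=400_000, cost=2_000_000)

print(f"Corridor widening BCR: {widening:.2f}")
print(f"Signal retiming BCR:  {retiming:.2f}")
```

In this sketch the cheaper signal-retiming alternative screens in while the widening screens out; a full analysis would also monetize effects such as safety, emissions, and maintenance before computing the ratio.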
In prior work, we found that state DOT decisions about transportation investments are based on many factors besides the results of economic analysis of a project’s benefits and costs, such as the availability of funding or public perception of a project. Although federal planning regulations do not specify analytical tools to be applied for evaluating project merits—nor do they require that the most cost-beneficial projects be chosen—such analyses, when combined with other selection factors, including needs expressed by the community and local officials, can result in better-informed transportation investment decision-making. USDOT has taken steps in recent years to encourage states to conduct economic analyses, including benefit-cost analysis, to plan for new transportation investments. For example, the American Recovery and Reinvestment Act of 2009 appropriated approximately $1.5 billion for competitively awarded surface transportation projects intended to have a significant impact on the nation, a metropolitan area, or a region. USDOT distributed this funding through its Transportation Investment Generating Economic Recovery (TIGER) grant program. In administering the TIGER program, USDOT generally required state and other grant applicants to conduct benefit-cost analyses that compared a project’s expected benefits to its costs, by measuring factors such as the project’s impact on fuel savings, travel time, greenhouse gas emissions, water quality, and public health. Although we have not reviewed the economic analyses performed by states as part of this work, according to USDOT, grant requests were not approved if USDOT concluded that project costs would likely exceed public benefits.

State DOTs reported using several methods to consult with nonmetropolitan local officials during the statewide planning process.
Many state DOTs reported consulting directly with local elected officials, while others reported relying on RPOs to consult with nonmetropolitan local officials. In some cases, states reported that they both consult directly with local elected officials and use RPOs.

Direct consultation. The majority of state DOTs reported that they consult directly with nonmetropolitan local officials. For example, 39 state DOTs reported that they hold annual planning meetings with nonmetropolitan local officials in their state. State DOT and local planning officials told us that these meetings may occur either in a series of formal state DOT presentations at various locations throughout the state (often referred to as road shows) or less formally through regular interactions between state DOT district engineers and local elected officials on an as-needed basis. According to local officials in three of the states we visited, the quality of this direct consultation can vary. For example, an official for an organization representing councils of government in one state said that each state DOT transportation district has a separate consultation process, which is effective in some districts but not in others. In another state, local officials said that their state DOT’s road show, which the state uses as a way to consult with local officials, was not an effective form of consultation because many of the decisions on transportation projects had already been made by state DOT headquarters officials.

Consultation through RPOs. Fewer states reported using RPOs to fulfill consultation requirements or to perform specific planning consultation activities at the local level. In some cases, states have formalized their relationships with these organizations through written contractual agreements, while in other cases, they have no formal agreements in place.
Almost half of all state DOTs (25) reported having written contractual agreements with RPOs to consult with local officials in nonmetropolitan areas (see fig. 5). Fifteen state DOTs reported that they gave their RPOs a role in the planning process by requiring the RPOs to develop their own long-range plans or TIPs. In addition to the 25 state DOTs that reported having written contractual agreements with RPOs, 11 state DOTs reported that other organizations conduct rural transportation planning activities in their state without a contract. (For more information on RPO characteristics and activities, see app. II.)

Stakeholders and officials we interviewed offered some potential reasons for the greater prevalence of RPOs in some states and described some of the benefits of RPOs. For example, one stakeholder said that RPOs are more prevalent in nonmetropolitan regions with growing populations that require a coordinated planning effort to manage growth. A state DOT official in one state with a slowly growing population added that the state DOT does not see much need for formal consultation organizations because the state’s slow population growth creates relatively little demand for consultation on new transportation projects. Stakeholders that we interviewed reported that RPOs help state DOTs carry out their responsibility for consulting with local nonmetropolitan officials by, for example, (1) helping competing jurisdictions develop consensus on and prioritize regional transportation projects to be included in the STIP, (2) facilitating state DOT consultation with elected officials from multiple local governments, and (3) helping state DOTs better anticipate project challenges such as issues with environmental reviews for implementing projects.

In our separate survey of RPOs, 63 percent reported that they were either satisfied or very satisfied that the state DOT’s consultation process gave their transportation needs sufficient consideration.
In general, RPOs reported more satisfaction than their counterparts if they had helped prioritize rural projects for their area or had received planning funds or a written contractual agreement from their state DOT. Through our survey, we also asked RPOs about their participation in certain state DOT planning activities. The majority of respondents with relevant experience reported being satisfied or very satisfied with their ability to participate in several state DOT research and outreach planning activities; however, the RPOs that responded expressed lower levels of satisfaction with their participation in other activities, including those that involve prioritizing or allocating funds for rural areas (see fig. 6). RPO officials we interviewed in some states expressed varying degrees of satisfaction with their ability to participate in statewide planning activities. For example, RPO officials in one state that has written contractual agreements with its RPOs said that the state DOT was generally receptive to the projects that the RPOs included in their TIPs and made efforts to ensure that RPO projects were considered for funding. RPO officials in this state said that the state DOT and the RPOs work together early in the planning process to agree on the statewide funding priorities. RPOs then use this information to develop projects that address the statewide priorities. However, in another state, where the RPOs also had written contractual agreements with the state DOT, an RPO official said that, although RPOs are required to develop both long- and short-range transportation plans for their regions, the state DOT does not necessarily use their project recommendations to select STIP projects. Other RPO officials said that they did not know how the state DOT ultimately selected its STIP projects and that they were unable to influence decision-making to ensure their RPO’s needs were considered.
In our survey, we asked each of the 52 state DOTs, including Washington, D.C., and Puerto Rico, to identify the top three challenges that they encountered in developing both their long-range statewide transportation plans and their STIPs. When we combined the state DOTs’ responses for both plans, two funding challenges emerged as the state DOTs’ top challenges: (1) insufficient funds from federal or state and local sources to meet their transportation project needs and (2) funding and cost uncertainty—including uncertainty forecasting future revenues and costs for implementing transportation projects. However, these funding challenges are the result, at least in part, of revenue decisions made at the state and local levels. For example, one strategy that Congress has used to meet the goals of the Federal-Aid Highway Program has been to increase federal investment. However, as we have previously reported, states and localities are permitted to use increased federal funds to substitute for or replace what they otherwise would have spent from state resources. As a result, not all of the increased federal investment has increased the total investment in highways.

Transportation needs outweigh available funds. Seventeen state DOTs cited insufficient funds to meet state-defined transportation project needs as being among their most significant challenges in developing the long-range statewide transportation plan, and 22 state DOTs cited insufficient funds to meet project needs as being among their most significant STIP development challenges. In both cases, the state DOTs were referring to funding available to implement projects, not to conduct statewide planning activities. DOT officials from several states said that their transportation needs outweighed their existing revenue, in part because of reduced or stagnant revenues from state gas taxes coupled with demand for maintaining aging transportation infrastructure.
Several state DOTs reported that insufficient funding requires planners to make difficult trade-offs between preserving existing assets and modernizing transportation networks to address future concerns such as increased congestion or livability and mobility. FTA officials reported that because of insufficient funds for transit, there are few large transit expansion projects in development across the country. Consequently, most planning for transit occurs within the transit agencies as they look for ways to reconfigure their existing routes to adapt to population patterns and maximize service levels for existing routes.

Funding and cost uncertainty. Seventeen state DOTs cited funding and cost uncertainty as a significant long-range planning challenge, and 15 state DOTs cited it as a significant STIP development challenge. In survey responses and interviews, officials from several state DOTs reported that uncertain funding levels from both federal and state sources hindered their ability to address long- and short-range planning needs. For example, officials from one state DOT reported that funding uncertainty is a particular challenge as many transportation projects span multiple years and thus require careful long-range planning to prevent exhaustion of funding prior to their completion. Officials from several state DOTs reported that the lack of a federal surface transportation authorization also contributed to funding uncertainty. Furthermore, USDOT officials reported that some state legislatures place restrictions on how state gas tax funds may be spent, which limits states’ flexibility in allocating their limited budgets from year to year. Several other state DOTs reported that they experienced challenges developing accurate cost estimates for projects, especially when developing the STIP. For example, officials in one state reported that, until recent years, planners did not have access to useful cost-estimating tools to project future project costs.
Without such cost-estimating tools, officials reported that project selection and funding decisions were made outside the planning process and subject to political interests. Officials reported that the state has recently made investments to upgrade its cost-estimating capabilities to prioritize the most cost-effective and greatest-need pavement and bridge projects, thereby improving the role of planning in informing project selection decisions.

In addition to funding challenges, state DOTs identified several significant long-range planning challenges. Twenty state DOTs reported that involving the public in the long-range planning process was a significant challenge. In addition, 18 state DOTs cited data limitations—including insufficient data and challenges analyzing and modeling data—as a significant long-range planning challenge. Fewer state DOTs identified prioritizing competing needs, complying with federal requirements, and other issues as significant long-range planning challenges (see fig. 7).

Involving the public. Through our survey and interviews, state DOTs identified several challenges encountered in involving the public in long-range planning, as well as several activities commonly used by states to improve public involvement. First, several state DOTs reported that they experienced challenges in getting the public to attend long-range planning outreach sessions, in part because of the long-range plan’s 20-year horizon and, in some cases, a lack of project-specific information. For example, in developing its current long-range statewide transportation plan, one state DOT reported that it held about 20 public meetings and workshops across the state; however, fewer than a dozen members of the public attended meetings in some rural areas of the state.
Another state DOT reported that the methods it used to solicit public feedback—public notices or display ads in newspapers—were ineffective because of reduced newspaper readership and constraints on spending to purchase such ads. State DOTs reported conducting a variety of activities to address the challenge of involving the public. In particular, 46 state DOTs reported maintaining a Web site to provide public information and receive public feedback on the long-range statewide transportation plan, and slightly fewer (42) reported presenting their long-range statewide transportation plan in a statewide road show. States also reported that they took steps to involve hard-to-reach populations and special interests. For example, 39 state DOTs reported that they reached out to special needs populations—including low-income, disabled, and elderly residents—and 37 state DOTs reported holding meetings with freight industry representatives on their long-range plan. To identify transportation needs for nonmetropolitan areas of the state when developing the long-range plan, 37 state DOTs reported that they tasked DOT personnel or contractors to perform this activity, and fewer (24) relied on RPOs to identify such needs.

Data limitations. State DOTs identified several types of data limitations as a significant challenge in developing the long-range statewide transportation plan. Specifically, 13 state DOTs identified analyzing and modeling existing data as a significant challenge, and 5 state DOTs identified insufficient data as such a challenge. For example, 3 state DOTs reported challenges gathering or making use of truck freight data in developing the long-range statewide transportation plan, such as in segregating freight trips from passenger traffic in analyzing corridor studies.
Other long-range planning data challenges identified by state DOTs include the costliness of collecting data, retaining adequate staff, a lack of analytical tools to model and analyze data, and developing and using performance measures in the long-range statewide transportation plan.

Other long-range planning challenges. Among the other long-range planning challenges identified, 12 state DOTs reported that prioritizing competing needs—such as the needs of urban and rural areas—was a significant challenge. For example, in interviews with state DOT officials and other stakeholders, we learned that rural areas are likely to advocate for corridor projects or improvements to support economic development in their region, while urban areas often focus on reducing congestion or adding capacity. Eight state DOTs reported facing staffing challenges, including 2 state DOTs that reported they have insufficient staff to address the long-range statewide transportation plan among their other planning activities.

In addition to funding challenges, almost half of state DOTs (22) cited complying with federal requirements, including demonstrating fiscal constraint and others, as a significant STIP development challenge. Fewer state DOTs (16) identified administrative challenges with maintaining the STIP, including updating the STIP to reflect amendments or other modifications, as a significant challenge. Other frequently mentioned STIP challenges were prioritizing competing needs—a commonly cited long-range planning challenge—and coordinating with planning partners, such as MPOs or RPOs (see fig. 8).

Complying with federal requirements. A total of 22 state DOTs cited challenges related to complying with federal requirements in developing the STIP.
In particular, 13 state DOTs cited challenges demonstrating fiscal constraint—a federal requirement that states demonstrate that all projects on the STIP can be implemented using committed, available, or reasonably available revenue sources. Two stakeholders that we interviewed reported that some FHWA division offices interpret the fiscal constraint rule rigidly and require states to provide very detailed cost and revenue estimates, while others allow for greater flexibility in their review to account for limitations in developing accurate estimates of future revenues and project costs. Despite the challenge that demonstrating fiscal constraint presents to state DOTs, FHWA officials reported that it serves an important accountability and transparency function in that it requires states to set reasonable expectations among MPOs and the public about which projects can be implemented given available revenues. In addition to challenges with demonstrating fiscal constraint, 9 state DOTs cited complying with other planning requirements—such as ensuring that a state’s MPOs complete required air-quality conformity analyses—as a significant challenge.

Maintaining the STIP. About a third of state DOTs (16) reported that maintaining the STIP (e.g., amending the STIP as changes occur) was a significant administrative challenge. Federal planning regulations allow states to add or delete projects on the STIP or to revise project cost estimates at any time. In general, major changes to STIP project costs, initiation dates, or scope are known as amendments, and minor changes are considered administrative modifications. STIP amendments require the state DOT to provide a public comment period and demonstrate that the STIP remains fiscally constrained for FHWA and FTA approval. According to data collected by FHWA division offices, in fiscal year 2009 some states made a substantial number of amendments to their STIPs for that year.
For example, FHWA’s New York Division reported that it approved more than 2,000 amendments to the New York DOT’s STIP in fiscal year 2009, and FHWA’s Pennsylvania Division office approved 500 amendments to PennDOT’s STIP for that same year. According to FHWA officials we interviewed, states often have good reasons for making such amendments—particularly in fiscal years 2009 and 2010, when states needed to plan projects for significant amounts of federal funding made available by the American Recovery and Reinvestment Act of 2009. Furthermore, some states, such as New York and Pennsylvania, have more assets and older infrastructure than other states, which could necessitate more frequent maintenance and repairs and STIP amendments, according to FHWA officials.

Other STIP challenges. Almost a third of state DOTs (15) reported that prioritizing competing needs was a significant STIP development challenge—a challenge also identified by 12 states in developing their long-range statewide transportation plans, as previously reported. Fewer states cited coordinating with planning partners (11) as a significant challenge. For example, in our survey, 1 state DOT reported that it has 27 planning partners, including MPOs and RPOs that develop their own TIPs and are responsible for programming some federal-aid highway funds in their own regions. The state reported that it is challenging to coordinate the development of 27 TIPs and consolidate those projects into one STIP. STIP development challenges cited less frequently by state DOTs include involving the public (7), delivering transportation projects on time and on budget (4), and linking planning and programming (4).

USDOT has limited oversight authority over long-range statewide transportation plans.
Federal planning regulations require states to continually evaluate, revise, and periodically update the long-range statewide transportation plan; however, regulations do not prescribe a schedule or time frame for those updates. In addition, although USDOT is not required to review or approve long-range statewide transportation plans, states must provide copies of any new or amended long-range statewide transportation plans to USDOT for informational purposes. This requirement differs from the requirement for MPOs in developing the long-range metropolitan transportation plan, which must be updated on a predetermined schedule every 4 or 5 years.

Through our survey, we found that state DOTs vary in how often they update their long-range statewide transportation plan, and some states reported infrequent updates. Twenty-one state DOTs reported issuing an updated long-range statewide transportation plan within 5 years of their previously issued plan. However, 18 state DOTs reported taking between 6 and 10 years to update their plan, and 7 state DOTs reported taking 11 years or more to do so (see fig. 9). Five other state DOTs reported that they had issued one plan and had thus never updated that plan. Of those state DOTs that reported updating their plan at least once, the average amount of time between updates was about 7 years. However, the amount of time reported between updates varied considerably, from 2 years to as many as 18 years.
State DOT and USDOT officials offered several reasons for infrequently updating the long-range statewide transportation plan: (1) two state DOTs reported they have insufficient staff to address the long-range statewide transportation plan among their other planning priorities; (2) one state DOT reported that it had updated its plan, but the plan was not adopted by the state’s transportation commission and legislature; and (3) USDOT officials said some states issue what they referred to as policy-based plans that are not updated regularly because they do not include projects and therefore do not change much over time. USDOT officials suggested that if state DOTs were required to include project-specific information in their plans, plans would likely be updated more regularly.

State DOT and FHWA officials reported that periodic updates to the long-range statewide transportation plan offer important benefits to state DOTs and the public, including setting realistic public expectations for what the state DOT can expect to accomplish. For example, officials with one state DOT that we interviewed told us that it recently completed an update of the state’s plan issued in 2002. The update was prompted by a new governor and a review of the existing plan, which found that the plan included approximately $20 to $30 billion worth of projects that had not been funded or implemented because of insufficient revenues. In updating the plan, the state DOT focused its public outreach and consultations with local officials on setting more realistic expectations for future investments. The recently issued, updated plan includes a funding scenario based on current, flat revenue expectations, and identifies four key corridors in the state where improvements could be made, subject to additional revenues.
Similarly, FHWA officials that we interviewed told us that, although a state’s long-range plan is vital for setting and communicating the state’s future transportation goals and strategies, the process of updating the long-range plan is as important to the state DOT as the final document itself. Officials noted that states that take a committed approach to planning—such as by continually monitoring system performance, conducting ongoing research, and reaching out to the public and stakeholders—increase the likelihood of developing a plan that stakeholders will accept. While regularly updating the long-range statewide transportation plan has inherent benefits, infrequently updating it presents several risks: Infrequent updates limit USDOT’s ability to determine whether states are using federal planning funds effectively to address long-range planning needs. State DOTs receive substantial amounts of funding from FHWA and FTA for statewide planning, including funds for developing long-range statewide transportation plans. For example, in fiscal year 2009, FHWA provided $682 million in SPR funds to state DOTs to support their planning activities, including developing long-range statewide transportation plans and STIPs and annually collecting and reporting pavement condition and other data to FHWA’s HPMS program. States must document and annually report on activities that they propose to be accomplished with SPR funds, and FHWA must approve these activities. In our survey, four states reported not completing an update of their long-range statewide transportation plan between 2000 and 2009. In that 10-year period, those states received approximately $640 million in state planning funds, an average of $16 million per state per year. Because those states did not update their long-range statewide transportation plan over that period, it is unclear how they applied SPR funding to address their long-range planning needs.
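The per-state figure follows from simple division: roughly $640 million spread across four states over a 10-year period works out to $16 million per state per year. A minimal sketch of that arithmetic:

```python
def per_state_annual_average(total_funds, num_states, num_years):
    """Average planning funds received per state, per year."""
    return total_funds / (num_states * num_years)

# Figures from the survey: ~$640 million to four states over 2000-2009.
average = per_state_annual_average(640_000_000, 4, 10)
```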
It is also unclear whether the investment decisions made over that period were based on the states’ current transportation goals and strategies. Some plans may not reflect the current federal surface transportation authorization. Federal surface transportation authorization legislation creates new planning requirements and funding opportunities that states should address in their long-range statewide transportation plans. For example, through SAFETEA-LU, which was enacted in August 2005, Congress revised several federal planning provisions and established several new funding programs for state DOTs to consider in their planning process. Among these were three federal transportation programs designed to target funds to infrastructure projects that have high costs, involve national or regional impacts, and cannot easily or specifically be addressed within existing federal surface transportation programs. In responding to our survey, 10 state DOTs reported that they have not updated their long-range statewide transportation plans since 2004, prior to SAFETEA-LU’s passage. Consequently, those states’ long-range statewide transportation plans likely do not reflect amended statewide planning requirements or consider some of the new transportation programs and funding opportunities established by SAFETEA-LU. Some states’ STIPs may not be consistent with state priorities in outdated plans. According to federal planning requirements, each project included in the STIP must be consistent with the long-range statewide transportation plan. States with a long-range plan that is not periodically updated may lack a plan that has been through the public participation and consultation processes and addresses the state’s current transportation conditions or provides new strategies to address changing conditions. USDOT’s review and approval of state DOT STIPs is the primary means through which FHWA and FTA oversee the statewide planning process. 
As part of the STIP review process, state DOTs must submit to FHWA and FTA for joint review, at least every 4 years, an updated STIP, and in doing so, the state DOT must certify that its planning process was carried out in accordance with federal statutes and planning regulations, including the requirement for demonstrating fiscal constraint (see fig. 1). Although there is no single, established process for conducting these joint reviews, FHWA division office personnel generally lead the STIP review process with assistance from the FTA regional office, on behalf of both agencies. They do so, in part, because FHWA division offices focus on the activities of a single state DOT, whereas FTA regional offices have multiple states in their portfolio. The majority of state DOTs submit a new STIP for FHWA’s and FTA’s approval on either an annual or biennial basis, and many state DOTs amend their STIP over the course of a year, requiring FHWA and FTA to review the amended document to ensure that it remains fiscally constrained. When that review is complete, FHWA and FTA send the state DOT a letter indicating that they have approved the STIP in its entirety, approved the STIP subject to certain corrective actions, or partially approved the STIP for a portion of the state. Pursuant to federal law, USDOT’s oversight of the STIP is focused on a state DOT’s compliance with planning process requirements. In addition, USDOT’s STIP oversight does not consider transportation planning outcomes. Specifically, through the STIP review and approval process, FHWA and FTA make a joint finding on the extent to which the STIP is based on a statewide planning process that meets or substantially meets federal planning requirements—for example, by ascertaining whether the state DOT has demonstrated fiscal constraint over the 4 years covered by the STIP.
However, neither federal statutes nor planning regulations require states to establish or attain specific performance thresholds or outcomes in the statewide planning process, such as improving highway safety, reducing congestion, or maintaining the state of repair of a state’s transportation assets. We have previously recommended to USDOT, as well as to Congress, that adopting performance measures and goals for programs can aid in measuring and evaluating the success of the programs, thereby potentially leading to better decisions about transportation investments. As discussed in the next section of this report, some states do not have the performance measures and targets they would need to determine whether they have attained such thresholds or outcomes. According to USDOT officials and other stakeholders that we interviewed, FHWA and FTA’s joint review of a state DOT’s STIP does not evaluate the effectiveness of the state’s planning process in achieving such transportation outcomes—instead, it focuses solely on whether the state has a process in place to meet federal planning requirements and whether the state-certified STIP meets those requirements. We have previously reported that, in addition to ensuring compliance with regulations, oversight provides a means by which the federal government can ensure that federal funds are being used to achieve planned outcomes. If FHWA and FTA jointly determine and document that the submitted or amended STIP does not meet federal planning requirements, FHWA and FTA can withhold future apportioned surface transportation program funds until substantial compliance is demonstrated. However, USDOT’s internal planning guidance indicates that, in general, FHWA and FTA do not disapprove STIPs. Instead, the planning guidance indicates that the STIP is reviewed to determine if any portion of the document meets the federal requirements and can be partially approved.
In our review, we examined FHWA and FTA planning findings for the most recent STIP submitted by each of the 52 state DOTs, not including amendments. We found that FHWA and FTA approved all 52 STIPs, including 35 in their entirety, 13 subject to corrective actions, and 4 partially. USDOT officials reported that in many cases, FHWA and FTA collaborate closely with the state DOT throughout the planning process and are able to address any issues that could result in a corrective action following the STIP review. As a result, FHWA and FTA officials are often familiar with the content of a STIP before they review it, and the review can occur without findings. In our survey and interviews, state DOTs reported using performance measurement—specifically performance measures and targets—in the statewide transportation planning process. Overall, the majority of state DOTs reported making use of performance data in developing their long-range statewide transportation plan (32) and their STIP (36). The most commonly used performance measures and quantifiable performance targets were reported in the areas of safety and asset condition, with lower levels of usage of project delivery and mobility measures (see table 1). Not surprisingly, state DOTs also reported that safety and asset condition measures were considered to be most useful to the statewide planning process. Although many states reported using some performance measures, stakeholders and USDOT officials told us that only a select few states have made significant attempts to integrate performance measurement into their statewide planning process to inform investment decisions. Safety measures. Almost all state DOTs (50) reported using safety measures in the past 12 months, and 49 state DOTs reported having quantifiable performance targets for these measures. Fewer state DOTs (40) considered safety measures of great or very great use in the planning process.
The extensive use of safety measures is due, in part, to the federal requirement that state DOTs develop a strategic highway safety plan that establishes statewide goals and objectives to reduce highway fatalities and serious injuries on all public roads. Of those state DOTs reporting that safety measures were of great or very great use, several cited crash data as being particularly useful for identifying high-crash locations or intersections and prioritizing improvements in those areas. Others reported that safety measures were used to evaluate the effectiveness of specific safety programs, such as seat belt use or motorcycle safety, or to develop the strategic highway safety plan. Asset condition measures. The vast majority of state DOTs (49) also reported using measures for the conditions of their roads, pavement, and bridges, and most of these state DOTs also reported having performance targets for these measures. The widespread availability and usage of these measures is likely related to the requirement that state DOTs collect and report data to FHWA on the condition of their roads and bridges. Forty-four state DOTs considered bridge condition measures to be of great or very great use and 42 state DOTs considered road condition measures to be of great or very great use in their planning process. DOTs reported referring to these measures to make funding allocation decisions, identify assets most in need of improvement, and prioritize competing projects. Project measures. Forty-three state DOTs reported using project cost performance measures, 42 state DOTs used project timeliness performance measures, and 39 state DOTs used performance measures on progress made in implementing the STIP. Fewer state DOTs reported having performance targets for these measures, and fewer still reported that these measures were of great or very great use in statewide planning.
While states are not required to use project measures, state DOTs find that monitoring project costs and timeliness can help mitigate cost overruns and project delays. For example, officials with one state DOT said that tracking how well project costs compare to project estimates enables them to schedule a high percentage of available funding based on the state DOT’s history of delivering projects on time and on budget. Mobility measures. Overall, fewer state DOTs reported using performance measures or having performance targets for mobility measures, including vehicle congestion, truck freight, intermodal connectivity, and transit congestion (see table 1). Vehicle congestion measures were the most widely used mobility measures, with 42 state DOTs reporting using these measures and 35 state DOTs reporting having quantitative performance targets for these measures. State DOTs reporting that mobility measures were of great or very great use identified several uses for the measures, including factoring congestion data into their funding allocation models and using vehicle congestion data as a preliminary screen for determining whether to widen a road in the future. Despite efforts to use performance measures in planning, state DOTs identified several significant challenges that limit their ability to make broader use of performance measures. The challenges that state DOTs cited most frequently in our survey as being a great or very great challenge were identifying indicators for qualitative measures such as livability, collecting data to track multimodal performance, and securing sufficient resources to develop and maintain a performance management system. Only six state DOTs reported that institutional resistance to using performance measures was a great or very great challenge to using performance measures for transportation planning (see fig. 10). Identifying indicators for qualitative measures.
Forty-one state DOTs reported that identifying indicators for qualitative measures such as livability was a great or very great challenge. USDOT officials, stakeholders, and state DOTs that we interviewed reported that there is little consensus among states on how qualitative variables—such as livability, mobility, or congestion—should be defined or what indicators should be used to measure such concepts. Congestion, for example, has several widely recognized indicators—including number of cars, delay times, and person throughput—yet no single standard exists for reliably collecting or using these data to compare performance across locales. Several stakeholders and USDOT officials noted that even for commonly collected quantitative measures, such as road and pavement surface conditions, there is a lack of consensus among state DOTs on whether pavement roughness measures or other indicators, such as remaining surface life, are most useful. Collecting data to track system performance across multiple modes. Twenty-nine state DOTs reported that collecting data to track multimodal performance—such as delay times for highway and transit travel—was a great or very great challenge. Stakeholders reported that some states do not have tools and performance measures that would allow them to consider and compare investments in strategies for managing traffic and transit operations alongside investments in more conventional highway infrastructure improvements. Moreover, although state DOTs generally collect performance data to manage state-owned transportation assets, the percentage of roads that are owned and maintained by the state DOT varies across states. Furthermore, stakeholders and state DOT officials reported that states often have insufficient data on truck or freight volumes across their transportation networks that could support long-range systemwide planning. Securing sufficient resources to develop and maintain a performance management system.
In our survey, 28 state DOTs reported that securing sufficient resources for a performance management system was a great or very great challenge. As noted previously, FHWA annually apportions substantial amounts of SPR funds to states for statewide planning and research activities, including collecting data on the performance and condition of public roads for FHWA’s HPMS program. In fiscal year 2009, for example, states received a total of more than $680 million to fund planning and research activities. Nonetheless, in our interviews, planning officials noted that collecting and maintaining such data is time-consuming and expensive. For example, states must continually collect road and bridge condition data, which may be housed in separate databases and in different data formats. Planning officials also told us that developing the internal processes to properly collect and use data to make decisions can take many years. For example, officials at the Washington state DOT said that although they began measuring transportation asset performance in the 1990s, it took them a number of years to identify the most meaningful indicators and refine the data collection and analysis procedures to enable performance-based investment decision making. The officials reported that over time, performance management processes were implemented agencywide to address all of Washington state DOT’s program and modal responsibilities. Through interviews with transportation planning stakeholders and through our expert panel, we identified several elements of a performance-based framework that offer opportunities to facilitate states’ use of performance measurement and improve the statewide planning process. Those elements include (1) national transportation goals, (2) collaboratively developed performance measures, (3) appropriate performance targets, and (4) revised federal oversight of statewide planning.
Elements of this framework are also consistent with the performance measurement requirements that apply to federal agencies, as documented in prior GAO work, and with a recent FHWA report on the experience of other countries in applying performance management to transportation programs. National transportation goals. Transportation planning stakeholders we interviewed and participants in our expert panel commonly cited clear national transportation goals as a critical ingredient in performance-based planning. According to several stakeholders, national goals are necessary to provide clear policy direction for federal transportation investments. In previous work, we have noted that for many surface transportation programs, goals are numerous and conflicting, and we recommended that Congress consider refocusing surface transportation programs so that they have well-defined goals with direct links to an identified federal interest and role. FHWA’s international scan report also recommends, as a first step in developing a performance measurement program, that a limited number of high-level national transportation policy goals be articulated and linked to a clear set of measures and targets, set at the state and local levels. National goals could provide states with an articulated federal interest and help states establish specific transportation outcomes in the statewide planning process, such as improving highway safety or maintaining the state of repair of a state’s transportation assets. FHWA planning officials we interviewed said that national goals and associated performance measures would need to be incorporated into statewide and MPO long-range transportation plans to align state and local long-range priorities with national objectives.
Such alignment would then be reflected in the STIPs and TIPs, which must be consistent with the long-range statewide transportation plans, making it easier for FHWA and FTA reviewers to ensure that federal surface transportation funds were being allocated to address national transportation goals. Collaboratively developed performance measures. Stakeholders and panelists also commonly said that specific performance measures, linked to national goals, should be developed in close collaboration with the state and local stakeholders responsible for implementing performance-based planning. We previously reported that seeking the involvement of stakeholders and limiting the number of performance measures to a vital few are important practices in developing and implementing successful performance management systems within federal agencies. Stakeholders told us that states and MPOs should be closely involved in developing appropriate performance measures because of the wide range of transportation contexts across states. Without a collaborative process to identify a vital set of performance measures that states and local planners can use, the federal government and states will lack assurance that the resources and effort directed to monitor performance will provide useful information to the federal government on the overall condition of the nation’s transportation system. While our previous work indicates that obtaining agreement among competing stakeholders in developing performance management systems is not easy, officials in one state DOT that we interviewed cited USDOT’s efforts to collaborate with states on the development of appropriate performance measures. Specifically, USDOT’s National Highway Traffic Safety Administration (NHTSA) partnered with the Governors Highway Safety Association (GHSA)—which represents states’ highway safety offices—to jointly develop traffic safety performance measures for states to use in their strategic highway safety plans. 
NHTSA and GHSA brought state and local stakeholders together to develop and agree on a minimum set of 14 performance measures for states to use in developing and implementing behavioral highway safety plans and programs. Participants in our expert panel suggested that FHWA and FTA could bring a national perspective and technical expertise to help states develop appropriate measures, particularly for emerging measures such as livability—a challenge that, as noted, state DOTs identified as limiting greater use of performance measurement in planning. Appropriate performance targets. Stakeholders we interviewed and our expert panelists expressed various opinions on the value and implementation of performance targets in statewide planning. The Office of Management and Budget’s guidance to federal agencies on implementing GPRA requires federal agencies to set performance goals that include performance targets and time frames, as part of the annual performance plans that federal agencies develop to show progress in achieving goals. Our prior work has shown that performance targets help promote accountability and allow organizations to track their progress toward goals and give managers important information on which to base their organizational and management decisions. However, several panelists said that if performance targets were set at the federal level and if federal funding allocations were contingent on achieving those targets, states could be penalized for not achieving outcomes that could be beyond their direct control. Other panelists indicated that targets could be useful if linked to performance incentives rather than penalties, and established at the state or local level in consultation with the federal government. 
According to FHWA’s international scan report, among the countries examined it was common for different levels of government to set performance targets jointly and collaborate on ways to achieve targets, rather than for one level of government to set a target and then penalize another for missing it. Revised federal oversight of statewide planning. As previously noted in this report, FHWA and FTA’s joint oversight of statewide planning focuses on state DOTs’ compliance with planning process requirements and does not consider transportation planning outcomes. Several stakeholders and panelists told us that this process-oriented oversight is of limited value to state DOTs in improving the effectiveness of statewide planning. USDOT officials reported that a performance-based planning framework would require legislative changes to transition USDOT’s statewide planning oversight role to focus on transportation outcomes, such as whether states are making progress in improving highway safety or maintaining the nation’s transportation assets in a state of good repair. USDOT’s recent international scan report found that linking national goals to state or regional performance measures appeared to create a strong focus on outcomes instead of process among the nations reviewed. Additionally, panelists reported that regular reporting by state DOTs to USDOT on progress made in achieving outcomes could improve communication between the states and the federal government, and enable USDOT to provide technical assistance as states’ need for it becomes apparent. Although implementing a performance-based framework will not be easy, our state DOT survey results suggest that many state DOTs could be receptive to increasing their use of performance measurement. 
In prior work, we have noted that the ultimate benefit of collecting performance information—improved decision making and results—is only fully realized when this information is used to support management planning and decision-making functions. Our work evaluating the extent to which federal agencies use performance information to make decisions demonstrates that such organizational change does not occur quickly. However, as previously noted, only six state DOTs in our survey reported that institutional resistance to using performance measures was a great or very great challenge to using performance measures for transportation planning. Given the progress some state DOTs have already made in using performance measurement, other state DOTs may be well-positioned to move toward a performance-based planning framework. Statewide transportation planning is an important process for deciding how to spend substantial amounts of federal surface transportation funds—almost $46 billion in fiscal year 2009. However, the current statewide transportation planning framework does not provide the federal government with sufficient information to ensure that states’ planning activities are contributing to improved transportation outcomes—such as improving the state of repair of transportation assets—and that states are fully considering the long-range needs of the nation’s transportation infrastructure. For example, because federal oversight of statewide planning focuses on process, rather than specific transportation outcomes, it is unclear whether states’ investment decisions are improving the condition and performance of the nation’s transportation system. A performance-based planning framework offers opportunities to focus statewide planning on achieving transportation outcomes.
Encouragingly, our state DOT survey results suggest that states have already taken some important steps in this direction by setting broad goals in their long-range statewide transportation plans and using performance measures and targets to monitor the safety and condition of many roads and bridges. However, some long-range statewide transportation plans are infrequently updated, and individual state efforts toward performance-based planning are not part of a coordinated federal approach. As a result, the federal government has limited ability to measure the results of its investment in statewide planning. Nonetheless, our results suggest that many states could be ready to transition to a performance-based planning framework, with the appropriate assistance and collaboration of the federal government. USDOT, through NHTSA, has experience working with states to make states’ strategic highway safety plans more performance based. This experience could be useful to FHWA, FTA, and states as they endeavor to address both national and state transportation concerns in a performance-based planning framework. As Congress moves forward with reauthorizing federal surface transportation programs, it has an opportunity to take the legislative action needed to shift to a performance-based approach for statewide planning and oversight, through which the federal government, states, and local planners can collaboratively address their transportation concerns. Congress should consider transitioning statewide transportation planning and oversight toward a more performance-based approach.
Actions to accomplish this transition could include identifying specific transportation outcomes for states to address in statewide transportation planning and charging USDOT with assessing states’ progress in achieving these outcomes through its STIP review and approval process, requiring states to update their long-range statewide transportation plans on a prescribed schedule to ensure the effective use of federal planning funds and to address statewide planning outcomes, and requiring USDOT and states to collaboratively develop appropriate performance measures to track progress in achieving planned transportation outcomes. We provided a draft of this report to USDOT for review and comment on November 5, 2010. USDOT officials provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To identify the planning activities conducted by state departments of transportation (state DOT), we reviewed federal statutes and regulations governing the statewide planning process and conducted a Web-based survey of 52 state DOTs, including those in Puerto Rico and the District of Columbia. To identify survey participants, we used contacts provided by the American Association of State Highway and Transportation Officials’ (AASHTO) Standing Committee on Planning, current as of March 4, 2010.
In designing the survey questions, we interviewed a range of transportation policy stakeholders, including state DOT planning officials and officials with the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA), and we consulted GAO staff with appropriate subject-matter expertise. In addition, we conducted three pretests of the survey of state DOTs and obtained feedback on the survey from two external planning experts and from FHWA and FTA officials to ensure that the questions were clear and did not place an undue burden on officials, that the terminology was used correctly, and that the questionnaire was comprehensive and unbiased. We made changes to the content and format of the questionnaire based on their feedback. We conducted the survey from April 20, 2010, to June 18, 2010, and received responses from all 52 state DOTs, for a 100 percent response rate. The complete results of the state DOT survey can be found at GAO-11-78SP. Because we administered the state DOT survey to the complete universe of potential respondents, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data were entered into a database or analyzed can introduce unwanted variability into the survey results. We encountered nonsampling survey errors in analyzing the state DOT survey responses. Specifically, in some instances, respondents provided conflicting, contradictory, or unnecessary information in portions of the survey. We addressed these errors by contacting the state DOT officials involved and clarifying their responses.
To obtain more in-depth information on state DOT statewide planning activities, we reviewed planning documents from and interviewed officials in six states: Louisiana, Montana, Pennsylvania, Texas, Washington state, and West Virginia. In each state we interviewed officials from state DOTs, FHWA division offices, FTA regional offices, rural planning organizations (RPO), metropolitan planning organizations (MPO), and, when present, tribal planning organizations. To ensure that we identified a range of states for our case studies, we considered recommendations from transportation planning stakeholders; the percentage of road miles owned by the state; the presence of MPOs in the state and the percentage of the population covered by MPOs; the presence of federally recognized tribes; and the representation of FTA regions. These criteria allowed us, in our view, to obtain information from a diverse mix of state DOTs and other state planning organizations, but the findings from our case studies cannot be generalized to all states because the states selected were part of a nonprobability sample. We used information obtained during the case studies throughout this report. To gather information on the extent to which RPOs are satisfied that rural needs are considered in statewide planning, we conducted a second Web-based survey of regional planning and development organizations from all 50 states. We sent surveys to the 564 organizations in a database collected by the National Association of Development Organizations that included a range of different types of organizations that conduct regional planning activities, including RPOs, councils of government (COG), regional planning commissions, economic development agencies, county and city planning offices, and other similar organizations.
Because the database did not contain organizations in Delaware, Hawaii, and Rhode Island, we identified a total of five organizations from those states that conduct regional planning activities and sent surveys to those organizations. Because the National Association of Development Organizations database includes organizations that conduct a variety of regional planning activities, including transportation planning, we asked each surveyed organization to identify the specific planning activities that it performs. In this report, we provided information only from those organizations that reported that they coordinate or conduct surface transportation or transit planning in the nonmetropolitan areas of their region. For the purposes of this report, organizations that indicated that they perform this activity are considered RPOs. To ensure the reliability of the database, we spoke with National Association of Development Organizations officials about the characteristics of the database and determined that it was sufficiently reliable for our needs. In developing the survey questions, we interviewed transportation planning stakeholders and pretested the survey with a total of five RPOs in four states to determine that the questions were clear and did not place an undue burden on officials, that the terminology was used correctly, and that the questionnaire was comprehensive and unbiased. We made changes to the content and format of the questionnaire based on their feedback. We conducted the survey from May 17, 2010, to June 25, 2010, and received completed surveys from 72 percent of the organizations surveyed. The complete results of this survey can be found at GAO-11-78SP. 
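The screening step described above (keeping only surveyed organizations that reported coordinating or conducting surface transportation or transit planning in nonmetropolitan areas) can be sketched as a simple filter. The organization names and activity labels below are hypothetical and do not come from the actual survey data.

```python
# Hypothetical survey records: each organization self-reports the planning
# activities it performs. These names and labels are illustrative only.
surveyed_orgs = [
    {"name": "Region 1 COG", "activities": {"transit planning", "economic development"}},
    {"name": "County Planning Office", "activities": {"land-use planning"}},
    {"name": "Area Development District", "activities": {"surface transportation planning"}},
]

# Activities that qualify an organization as an RPO for reporting purposes,
# per the definition used in this report.
RPO_QUALIFYING = {"surface transportation planning", "transit planning"}

# Keep only organizations whose reported activities intersect the
# qualifying set; set intersection (&) returns a truthy set when nonempty.
rpos = [org for org in surveyed_orgs if org["activities"] & RPO_QUALIFYING]

print([org["name"] for org in rpos])
```

In this sketch, two of the three hypothetical organizations would be counted as RPOs; the county office reporting only land-use planning would be excluded from the reported results.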
To gather information on the challenges that state DOTs face in the statewide transportation planning process, we relied primarily on data collected in the state DOT survey, in which we asked state DOT respondents to identify through open-ended responses the three most significant challenges encountered in developing both the long-range statewide transportation plan and the state transportation improvement program (STIP). We then performed a content analysis on the open-ended question responses through the following process. We identified a total of 13 categories of challenges identified by state DOTs in their responses, including funding, stakeholder involvement, and staffing, among others. We developed a codebook that defined each category, and two GAO analysts independently assigned codes to each response. Afterwards, the analysts met to resolve any differences in their coding until they reached consensus. We then removed duplicate responses—instances in which a state DOT reported the same challenge for the same plan more than once—to ensure that only unique challenges reported by state DOTs were reported in our analysis. Finally, we analyzed the coded responses to determine how many state DOTs encountered each challenge in developing both the long-range statewide transportation plan and the STIP. To obtain information on FHWA’s and FTA’s approach to overseeing statewide transportation planning, we interviewed FHWA and FTA officials in headquarters and in the six states where we interviewed state DOT officials (Louisiana, Montana, Pennsylvania, Texas, Washington State, and West Virginia). Specifically, we interviewed officials in the six FHWA division offices in the six states and in the four FTA regional offices with responsibility for those states (FTA regions 3, 6, 8, and 10). 
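The deduplication and tallying steps of the content analysis described above can be sketched as follows. The state names and category assignments are hypothetical, and the sketch assumes the two analysts' codes have already been reconciled into a single record per response.

```python
from collections import defaultdict

# Hypothetical reconciled codes: (state, plan, challenge category).
# Category names follow the report (funding, stakeholder involvement, staffing).
coded_responses = [
    ("State A", "long-range plan", "funding"),
    ("State A", "long-range plan", "funding"),  # duplicate: same state, plan, category
    ("State A", "STIP", "funding"),
    ("State B", "long-range plan", "stakeholder involvement"),
    ("State B", "STIP", "staffing"),
]

# Step 1: remove duplicates so a state DOT reporting the same challenge
# for the same plan more than once is counted only once.
unique = set(coded_responses)

# Step 2: for each plan and category, collect the states that reported it,
# then count how many state DOTs encountered each challenge.
counts = defaultdict(set)
for state, plan, category in unique:
    counts[(plan, category)].add(state)

for (plan, category), states in sorted(counts.items()):
    print(f"{plan} / {category}: {len(states)} state DOT(s)")
```

Counting distinct states per (plan, category) pair mirrors the goal of the analysis: reporting how many state DOTs encountered each challenge, not how many times a challenge was mentioned.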
We also reviewed FHWA and FTA planning guidance and the planning findings from FHWA and FTA’s joint review of each state DOT’s most recent STIP, to determine what joint action FHWA and FTA took following their review. To identify the extent to which state DOTs are using performance measurement for planning and opportunities to make statewide planning more performance based, we analyzed data collected through our state DOT survey and interviews with state DOT officials. We also contracted with the National Academy of Sciences to convene a balanced, diverse panel of 14 experts to discuss performance measurement in statewide transportation planning. We worked closely with the National Academy’s Transportation Research Board to identify and select panelists with experience in the implementation of performance measurement in, and knowledge of, the statewide transportation planning processes. The panelists convened in Minneapolis, Minnesota, on July 14, 2010, and their discussion was divided into three moderated subsessions. The subsessions addressed the appropriate roles of the federal government and the states in making statewide planning more performance based, how performance measures could be used to better link statewide planning to programming decisions, and the advantages and disadvantages of linking federal funding to achieving transportation performance goals. The moderator facilitated a discussion among the panelists to gather their perspectives on each topic. In keeping with the National Academy’s policy, the panelists were invited to provide their individual views, and the panel was not designed to reach a consensus on any of the issues that we asked the panelists to discuss. Results of the discussions were used to inform key elements of a framework to make statewide transportation planning more performance based. We did not verify the panelists’ statements. 
The views expressed by the panelists do not necessarily represent the views of GAO or the National Academy. Participants in the expert panel are listed in table 2. The expert discussion cited in this report should be interpreted in the context of two key limitations and qualifications. First, although we were able to secure the participation of a balanced, highly qualified group of experts, other experts in this field could not be included because we needed to limit the size of the panel. Although many points of view were represented, the panel was not representative of all potential views. Second, even though GAO, in cooperation with the National Academy, conducted preliminary research and heard from national experts in their fields, a day's conversation cannot represent the current practice in this vast area. More thought, discussion, and research must be done to develop greater agreement on what we really know, what needs to be done, and how to do it. These two key limitations and qualifications provide contextual boundaries. Nevertheless, the panel provided a rich dialogue on making statewide transportation planning more performance based, and the panelists provided insightful comments in responding to the questions they were asked. To gather additional information related to all of our research objectives, we interviewed a range of transportation planning stakeholders representing state, local, and private-sector groups, including AASHTO, the Association of Metropolitan Planning Organizations, the Bipartisan Policy Center, Cambridge Systematics Inc., the I-95 Corridor Coalition, the National Association of Counties, the National Association of Development Organizations, the National Association of Regional Councils, and the National Academy's Transportation Research Board. We conducted this performance audit from October 2009 through December 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. RPOs that we surveyed reported information on the following topics discussed in this appendix: (1) RPO service areas, (2) planning funds received by RPOs, (3) planning activities performed by RPOs, (4) needs of nonmetropolitan areas served by RPOs, (5) activities performed by state DOTs to consult with RPOs, and (6) RPOs' satisfaction with their state DOT's consultation activities. The complete results of this survey are available in GAO-11-78SP. In general, RPOs are voluntary organizations of local elected officials and representatives of local transportation systems that serve nonmetropolitan areas not represented by a metropolitan planning organization (MPO). MPOs represent urbanized areas with more than 50,000 people. However, because RPOs may serve multiple nonmetropolitan municipalities, the population of the combined RPO service areas may be greater than 50,000. In our survey, we found that the size of the population served by RPOs varied greatly, with 57 percent reporting a population in their service area smaller than 150,000 and 43 percent reporting a population greater than 150,000. RPOs reported, on average, that they serve 5 counties and 36 municipalities, such as cities, towns, or villages. RPOs reported that the number of full-time staff performing rural transportation planning averaged 2 and ranged from 0 to 18. RPOs may receive state or federal funding to conduct nonmetropolitan planning (see table 3). Of RPOs responding to our survey, 80 percent reported receiving funding in fiscal year 2009 from their state DOT.
In addition, 41 percent reported that their state DOT provided them with state planning and research (SPR) funds from FHWA and 14 percent reported that their state DOT provided them with State Planning and Research Program (SPRP) funds from FTA. We asked RPOs about the types of planning activities that they conduct. Of the 15 activities RPOs were asked about, 12 were performed by more than 50 percent of RPOs (see fig. 11). More than 80 percent of RPOs reported gathering or coordinating input from public and local officials; conducting community planning activities, such as improving accessibility for seniors and disabled persons; and providing technical assistance to local governments, such as Geographic Information System mapping or transportation modeling. Most RPOs reported conducting other types of planning, such as bike and pedestrian, land-use, and transit service planning, among others. About a third to a half of RPOs reported planning for different modes, freight, or air quality and emissions and conducting other planning activities, such as tribal transportation planning, demographic forecasting, and scenic byway planning. According to RPOs, maintaining or improving existing roads and bridges and safety are their highest-priority transportation needs. Specifically, 60 to 90 percent of RPOs reported that maintaining or improving existing roads, maintaining or improving existing bridges, and improving the safety of existing assets were of higher priority (see fig. 12). Twenty-six to 29 percent reported that higher-priority needs for their region include improving public transit, such as by reducing congestion or improving accessibility; other needs, such as bike and pedestrian trails, and economic development; and improving truck freight mobility. 
Less than 20 percent of RPOs reported that improving rail freight mobility, increasing road or highway capacity to address congestion, or improving air quality, such as by reducing surface transportation emissions, were higher-priority needs. State DOTs are required by federal guidelines to have a documented process in place to consult with nonmetropolitan areas of the state on transportation planning issues. We asked RPOs if, over the past year, their state DOT performed selected activities to consult with them in the statewide planning process. RPOs reported that their state DOT performed a wide range of different activities to consult with them. For example, 81 percent of RPOs reported that their state DOT provided funding to their region to conduct surface transportation planning activities (see fig. 13). Thirty-five percent of RPOs reported that the state DOT asked local officials to serve on policy-making or advisory boards, or to participate in other activities, such as state DOT meetings to discuss specific regional projects. Overall, 63 percent of RPOs reported being satisfied or very satisfied that their state DOT sufficiently considers their region’s needs (see page 25 of this report). However, fewer RPOs reported being satisfied or very satisfied with their ability to participate in specific state DOT consultation activities (see fig. 14). More than 50 percent of RPOs reported being satisfied or very satisfied with their ability to participate in state DOT activities that gather public input in the statewide planning process, conduct transportation studies, develop portions of statewide long-range transportation plans, or select rural projects in their area to be included in the STIP. 
Between 30 and 46 percent of RPOs reported being satisfied or very satisfied with state DOT activities that determine the transportation funding priorities for rural areas; allocate federal planning funds to rural areas; set performance goals, measures, or targets for their area; and develop transportation models to inform decisions. Overall, 16 percent of RPOs reported being dissatisfied or very dissatisfied that their state DOT sufficiently considers their region's needs. Dissatisfaction with specific state DOT planning activities ranged from 13 to 24 percent (see fig. 14). RPOs most frequently reported dissatisfaction with state DOT activities related to determining the transportation priorities for rural areas of the state. Specific reasons RPOs cited for dissatisfaction varied but included feeling that their needs are not prioritized, that there is a lack of support for rural planning, and that information gathered through consultation activities is not used to inform the statewide planning process.

In addition to the individual named above, Sara Vermillion (Assistant Director), Matt Barranca, Richard Brown, Elizabeth Curda, Brad Dubbs, Elizabeth Eisenstadt, Kathleen Gilhooly, Georgeann Higgins, Hannah Laufe, Jillian McMichael, Jean McSween, Sara Ann Moessbauer, Jay Smale, and Don Watson made key contributions to this report.

Through the statewide transportation planning process, states decide how to spend federal transportation funds--almost $46 billion in fiscal year 2009. Draft legislation to reauthorize federal surface transportation legislation would, among other things, revise planning requirements to recognize states' use of rural planning organizations (RPO) and require performance measurement. As requested, GAO examined (1) states' planning activities and RPOs' satisfaction that rural needs are considered, (2) states' planning challenges, (3) the U.S.
Department of Transportation's (USDOT) approach to overseeing statewide planning, and (4) states' use of performance measurement and opportunities to make statewide planning more performance based. GAO analyzed planning documents; surveyed departments of transportation in 50 states, Puerto Rico, and Washington, D.C., and 569 RPOs; interviewed officials in 6 states; and held an expert panel on performance-based planning. States conduct a variety of long- and short-range planning activities, and the majority of RPOs surveyed reported being generally satisfied that rural needs are considered. To develop required long-range statewide transportation plans (long-range plans), states conduct research activities, such as inventorying assets and modeling traffic. While the resulting plans generally include some performance elements, such as goals, many plans do not include performance targets. Such targets are not required, but prior GAO work shows that targets are useful tools to indicate progress toward achieving goals. To develop required short-range plans--state transportation improvement programs (STIP)--states assess needs and determine funding allocations. However, in selecting projects, states assigned greater importance to factors such as political and public support than to economic analysis of project benefits and costs. While the majority of surveyed RPOs reported being satisfied that their rural needs were considered, some RPOs reported less satisfaction with their role in allocating funds for rural areas. States commonly cited insufficient or uncertain funding to implement transportation projects among the primary challenges to long- and short-range planning. States also reported that involving the public and addressing transportation data limitations were significant long-range planning challenges. 
Short-range planning challenges included meeting federal requirements to demonstrate the availability of sufficient project funding and to update the STIP to reflect changes. USDOT has a limited role in the oversight of long-range plans, and pursuant to federal law, its STIP oversight focuses on states' compliance with procedures. Furthermore, USDOT is not required to review long-range plans, states are not required to update them on a schedule, and some states reported infrequent updates. For example, 10 states reported not updating plans since the most recent surface transportation authorization in 2005. Limited USDOT oversight and infrequent updates present risks, including the ineffective use of federal planning funds. For the STIP, USDOT's oversight focuses, as required, on states' compliance with federal planning procedures. Information on whether states achieve outcomes such as reducing congestion is limited. While states are not required to set performance outcomes in planning, most states reported using performance measurement in the areas of safety and asset condition. Several challenges limit broader use of performance measures, including identifying indicators for qualitative measures such as livability and collecting data across transportation modes. Through our expert panel and interviews, we identified several elements that could improve states' use of performance measures, including national goals, federal and state collaboration on developing performance measures, appropriate targets, and revised federal oversight focusing on monitoring states' progress in meeting outcomes. To make statewide planning more performance based, Congress should consider requiring states to update their long-range plans on a prescribed schedule, identifying outcomes for statewide planning and directing USDOT to assess states' progress in achieving them, and requiring USDOT and states to collaboratively develop performance measures. 
USDOT provided technical comments which we incorporated into the report as appropriate.
SBA was established by the Small Business Act of 1953 to fulfill the role of several agencies that previously assisted small businesses affected by the Great Depression and, later, by wartime competition. SBA’s stated purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. In addition, SBA serves as a small business advocate, working with other federal agencies to, among other things, reduce regulatory burdens on small businesses. SBA also provides low-interest, long-term loans to individuals and businesses to assist them with disaster recovery through its Disaster Loan Program—the only form of SBA assistance not limited to small businesses. Homeowners, renters, businesses of all sizes, and nonprofit organizations can apply for physical disaster loans for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. Small businesses can also apply for economic injury disaster loans to obtain working capital funds until normal operations resume after a disaster declaration. SBA’s Disaster Loan Program differs from the Federal Emergency Management Agency’s (FEMA) Individuals and Households Program (IHP). For example, a key element of SBA’s Disaster Loan Program is that the disaster victim must have repayment ability before a loan can be approved whereas FEMA makes grants under the IHP that do not have to be repaid. Further, FEMA grants are generally for minimal repairs and, unlike SBA disaster loans, are not designed to help restore the home to its predisaster condition. In January 2005, SBA began using DCMS to process all new disaster loan applications. SBA intended for DCMS to help it move toward a paperless processing environment by automating many of the functions staff members had performed manually under its previous system. 
These functions include both obtaining referral data from FEMA and credit bureau reports, as well as completing and submitting loss verification reports from remote locations. Our July 2006 report identified several significant limitations in DCMS’s capacity and other system and procurement deficiencies that likely contributed to the challenges that SBA faced in providing timely assistance to Gulf Coast hurricane victims as follows: First, due to limited capacity, the number of SBA staff who could access DCMS at any one time to process disaster loans was restricted. Without access to DCMS, the ability of SBA staff to process disaster loan applications in an expeditious manner was diminished. Second, SBA experienced instability with DCMS during the initial months following Hurricane Katrina, as users encountered multiple outages and slow response times in completing loan processing tasks. According to SBA officials, the longest period of time DCMS was unavailable to users due to an unscheduled outage was 1 business day. These unscheduled outages and other system-related issues slowed productivity and affected SBA’s ability to provide timely disaster assistance. Third, ineffective technical support and contractor oversight contributed to the DCMS instability that SBA staff initially encountered in using the system. Specifically, a DCMS contractor did not monitor the system as required or notify the agency of incidents that could increase system instability. Further, the contractor delivered computer hardware for DCMS to SBA that did not meet contract specifications. In the report released in February, we identified other logistical challenges that SBA experienced in providing disaster assistance to Gulf Coast hurricane victims. For example, SBA moved urgently to hire more than 2,000 mostly temporary employees at its Ft. 
Worth, Texas, disaster loan processing center through newspaper and other advertisements (the facility increased from about 325 staff in August 2005 to 2,500 in January 2006). SBA officials said that ensuring the appropriate training and supervision of this large influx of inexperienced staff proved very difficult. Prior to Hurricane Katrina, SBA had not maintained the status of its disaster reserve corps, which was a group of potential voluntary employees trained in the agency's disaster programs. According to SBA, the reserve corps, which had been instrumental in allowing the agency to provide timely disaster assistance to victims of the September 11, 2001, terrorist attacks, shrank from about 600 in 2001 to fewer than 100 in August 2005. Moreover, SBA faced challenges in obtaining suitable office space to house its expanded workforce. For example, SBA's facility in Ft. Worth only had the capacity to house about 500 staff, whereas the agency hired more than 2,000 mostly temporary staff to process disaster loan applications. While SBA was able to identify another facility in Ft. Worth to house the remaining staff, it had not been configured to serve as a loan processing center. SBA had to upgrade the facility to meet its requirements. Fortunately, in 2005, SBA was also able to quickly reestablish a loan processing facility in Sacramento, California, that had been previously slated for closure under an agency reorganization plan. The facility in Sacramento was available because its lease had not yet expired, and its staff was responsible for processing a significant number of Gulf Coast hurricane-related disaster loan applications. As a result of these and other challenges, SBA developed a large backlog of applications during the initial months following Hurricane Katrina. This backlog peaked at more than 204,000 applications 4 months after Hurricane Katrina.
By late May 2006, SBA took about 74 days on average to process disaster loan applications, compared with the agency's goal of within 21 days. As we stated in our July 2006 report, the sheer volume of disaster loan applications that SBA received was clearly a major factor contributing to the agency's challenges in providing timely assistance to Gulf Coast hurricane victims. As of late May 2006, SBA had issued 2.1 million loan applications to hurricane victims, which was four times the number of applications issued to victims of the 1994 Northridge, California, earthquake, the previous single largest disaster that the agency had faced. Within 3 months of Hurricane Katrina making landfall, SBA had received 280,000 disaster loan applications, or about 30,000 more applications than the agency received over a period of about 1 year after the Northridge earthquake. However, our two reports on SBA's response to the Gulf Coast hurricanes also found that the absence of a comprehensive and sophisticated planning process contributed to the challenges that the agency faced. For example, in designing DCMS, SBA used the volume of applications received during the Northridge, California, earthquake and other historical data as the basis for planning the maximum number of concurrent agency users that the system could accommodate. SBA did not consider the likelihood of more severe disaster scenarios or, in contrast to insurance companies and some government agencies, use the information available from catastrophe models or disaster simulations to enhance its planning process. Since the number of disaster loan applications associated with the Gulf Coast hurricanes greatly exceeded that of the Northridge earthquake, DCMS's user capacity was not sufficient to process the surge in disaster loan applications in a timely manner. Additionally, SBA did not adequately monitor the performance of a DCMS contractor or stress test the system prior to its implementation.
In particular, SBA did not verify that the contractor provided the agency with the correct computer hardware specified in its contract. SBA also did not completely stress test DCMS prior to implementation to ensure that the system could operate effectively at maximum capacity. If SBA had verified the equipment as required or conducted complete stress testing of DCMS prior to implementation, its capacity to process Gulf Coast related disaster loan applications may have been enhanced. In the report we issued in February, we found that SBA did not engage in comprehensive disaster planning for other logistical areas—such as workforce or space acquisition planning—prior to the Gulf Coast hurricanes at either the headquarters or field office levels. For example, SBA had not taken steps to help ensure the availability of additional trained and experienced staff such as (1) cross-training agency staff not normally involved in disaster assistance to provide backup support or (2) maintaining the status of the disaster reserve corps as I previously discussed. In addition, SBA had not thoroughly planned for the office space requirements that would be necessary in a disaster the size of the Gulf Coast hurricanes. While SBA had developed some estimates of staffing and other logistical requirements, it largely relied on the expertise of agency staff and previous disaster experiences—none of which reached the magnitude of the Gulf Coast hurricanes—and, as was the case with DCMS planning, did not leverage other planning resources, including information available from disaster simulations or catastrophe models. In our July 2006 report, we recommended that SBA take several steps to enhance DCMS, such as reassessing the system’s capacity in light of the Gulf Coast hurricane experience and reviewing information from disaster simulations and catastrophe models. We also recommended that SBA strengthen its DCMS contractor oversight and further stress test the system. 
SBA agreed with these recommendations. I note that SBA has completed an effort to expand DCMS's capacity. SBA officials said that DCMS can now support a minimum of 8,000 concurrent agency users as compared with only 1,500 concurrent users for the Gulf Coast hurricanes. Additionally, SBA has awarded a new contract for the project management and information technology support for DCMS. The contractor is responsible for a variety of DCMS tasks on SBA's behalf, including technical support, software changes and hardware upgrades, and supporting all information technology operations associated with the system. In the report released in February, we identified other measures that SBA had planned or implemented to better prepare for and respond to future disasters. These steps include appointing a single individual to coordinate the agency's disaster preparedness planning and coordination efforts, enhancing systems to forecast the resource requirements to respond to disasters of varying scenarios, redesigning the process for reviewing applications and disbursing loan proceeds, and enhancing its long-term capacity to acquire adequate facilities in an emergency. Additionally, SBA had planned or initiated steps to help ensure the availability of additional trained and experienced staff in the event of a future disaster. According to SBA officials, these steps include cross-training staff not normally involved in disaster assistance to provide backup support, reaching agreements with private lenders to help process a surge in disaster loan applications, and reestablishing the Disaster Active Reserve Corps, which had reached about 630 individuals as of June 2007. While SBA has taken a variety of steps to enhance its capacity to respond to disasters, I note that these efforts are ongoing and continued commitment and actions by agency managers are necessary. In June 2007, SBA released a plan for responding to disasters.
While we have not evaluated the process SBA followed in developing its plan, according to the SBA plan, the agency is incorporating catastrophe models into its disaster planning processes as we recommended in both reports. For example, the plan states that SBA is using FEMA's catastrophe model, which is referred to as HAZUS, in its disaster planning activities. Further, based on information provided by SBA, the agency is also exploring the use of models developed by private companies to assist in its disaster planning efforts. These efforts to incorporate catastrophe models into the disaster planning process appear to be at an early stage. SBA's plan also anticipates further steps to ensure an adequate workforce is available to respond to a disaster, including training and using 400 non-disaster program office staff to assist in responding to the 2007 hurricane season and beyond. According to SBA officials, about 200 of these staff members will be trained in reviewing loan applications and providing customer service by the end of this month, and the remainder will be trained by this fall. We encourage SBA to actively pursue initiatives that may further enhance its capacity to better respond to future disasters, and we will monitor SBA's efforts to implement our recommendations. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Wesley Phillips, Assistant Director; Triana Bash; Alison Gerry; Marshall Hamlett; Barbara S. Oliver; and Cheri Truett. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Small Business Administration (SBA) helps individuals and businesses recover from disasters such as hurricanes through its Disaster Loan Program. SBA faced an unprecedented demand for disaster loan assistance following the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), which resulted in extensive property damage and loss of life. In the aftermath of these disasters, concerns were expressed regarding the timeliness of SBA's disaster assistance. GAO initiated work and completed two reports under the Comptroller General's authority to conduct evaluations and determine how well SBA provided victims of the Gulf Coast hurricanes with timely assistance. This testimony, which is based on these two reports, discusses (1) challenges SBA experienced in providing victims of the Gulf Coast hurricanes with timely assistance, (2) factors that contributed to these challenges, and (3) steps SBA has taken since the Gulf Coast hurricanes to enhance its disaster preparedness. GAO visited the Gulf Coast region, reviewed SBA planning documents, and interviewed SBA officials. GAO identified several significant system and logistical challenges that SBA experienced in responding to the Gulf Coast hurricanes that undermined the agency's ability to provide timely disaster assistance to victims. For example, the limited capacity of SBA's automated loan processing system--the Disaster Credit Management System (DCMS)--restricted the number of staff who could access the system at any one time to process disaster loan applications. In addition, SBA staff who could access DCMS initially encountered multiple system outages and slow response times in completing loan processing tasks.
SBA also faced challenges training and supervising the thousands of mostly temporary employees the agency hired to process loan applications and obtaining suitable office space for its expanded workforce. As of late May 2006, SBA processed disaster loan applications, on average, in about 74 days compared with its goal of within 21 days. While the large volume of disaster loan applications that SBA received clearly affected its capacity to provide timely disaster assistance to Gulf Coast hurricane victims, GAO's two reports found that the absence of a comprehensive and sophisticated planning process beforehand likely limited the efficiency of the agency's initial response. For example, in designing the capacity of DCMS, SBA primarily relied on historical data such as the number of loan applications that the agency received after the 1994 Northridge, California, earthquake--the most severe disaster that the agency had previously encountered. SBA did not consider disaster scenarios that were more severe or use the information available from disaster simulations (developed by federal agencies) or catastrophe models (used by insurance companies to estimate disaster losses). SBA also did not adequately monitor the performance of a DCMS contractor or completely stress test the system prior to its implementation. Moreover, SBA did not engage in comprehensive disaster planning prior to the Gulf Coast hurricanes for other logistical areas, such as workforce planning or space acquisition, at either the headquarters or field office levels. While SBA has taken steps to enhance its capacity to respond to potential disasters, the process is ongoing and continued commitment and actions by agency managers are necessary. As of July 2006, SBA officials said that the agency had completed an expansion of DCMS's user capacity to support a minimum of 8,000 concurrent users as compared with 1,500 concurrent users supported for the Gulf Coast hurricanes. 
Further, in June 2007, SBA released a disaster plan. While GAO has not evaluated the process SBA followed in developing its plan, consistent with recommendations in GAO reports, the plan states that SBA is incorporating catastrophe models into its planning process, an effort which appears to be at an early stage. GAO encourages SBA to actively pursue the use of catastrophe models and other initiatives that may further enhance its capacity to better respond to future disasters.
In November 2001, the Aviation and Transportation Security Act (ATSA) was enacted, requiring TSA to, among other things, work with airport operators to strengthen access control points to secured areas and to consider using biometric access control systems, or similar technologies, to verify the identity of individuals who seek to enter a secure airport area. In response to ATSA, TSA established the TWIC program in December 2001. In November 2002, MTSA was enacted and required the Secretary of Homeland Security to issue a maritime worker identification card that uses biometrics to control access to secure areas of maritime transportation facilities and vessels. In addition, the Security and Accountability For Every Port Act (SAFE Port Act) of 2006 amended MTSA and directed the Secretary of Homeland Security to, among other things, implement the TWIC pilot project to test TWIC use with biometric card readers and inform a future regulation on the use of TWIC with electronic readers. In requiring the issuance of transportation security cards for entry into secure areas of a facility or vessel as part of MTSA, Congress noted in the “Findings” section of the legislation that ports in the United States are a major location for federal crime such as cargo theft and smuggling, and are susceptible to large-scale acts of terrorism. For example, according to the Coast Guard’s January 2008 National Maritime Terrorism Threat Assessment, al Qaeda leaders and supporters have identified western maritime assets as legitimate targets. Moreover, according to the Coast Guard assessment, al Qaeda-inspired operatives are most likely to use vehicle bombs to strike U.S. cargo vessels, tankers, and fixed coastal facilities such as ports. Studies have demonstrated that attacks on ports could have serious consequences. 
For example, a study by the Center for Risk and Economic Analysis of Terrorist Events on the impact of a dirty bomb attack on the Ports of Los Angeles and Long Beach estimated that a shutdown of the harbors due to the contamination could result in losses in the tens of billions of dollars, including decontamination costs and the indirect economic impacts of the port shutdown. As defined by DHS, the purpose of the TWIC program is to design and field a common credential for all transportation workers across the United States who require unescorted access to secure areas at MTSA-regulated maritime facilities and vessels. As such, the TWIC program, once implemented, aims to meet the following stated mission needs:

- Positively identify authorized individuals who require unescorted access to secure areas of the nation’s transportation system.
- Determine the eligibility of individuals to be authorized unescorted access to secure areas of the transportation system by conducting a security threat assessment.
- Ensure that unauthorized individuals are not able to defeat or otherwise compromise the access system in order to be granted permissions that have been assigned to an authorized individual.
- Identify individuals who fail to maintain their eligibility requirements subsequent to being permitted unescorted access to secure areas of the nation’s transportation system and immediately revoke the individual’s permissions.

TSA is responsible for enrolling TWIC applicants and conducting background checks to ensure that only eligible individuals are granted TWICs. In addition, pursuant to TWIC-related regulations, MTSA-regulated facility and vessel operators are responsible for reviewing each individual’s TWIC as part of their decision to grant unescorted access to secure areas of their facilities. The Coast Guard is responsible for assessing and enforcing operator compliance with TWIC-related laws and regulations.
Described below are key components of each process for ensuring TWIC-holder eligibility.

Enrollment: Transportation workers enroll by providing biographic information (such as name, date of birth, and address) and proof-of-identity documents, and by being photographed and fingerprinted at enrollment centers by trusted agents. A trusted agent is a member of the TWIC team who has been authorized by the federal government to enroll transportation workers in the TWIC program and issue TWIC cards. Appendix I summarizes key steps in the enrollment process.

Background checking: TSA conducts background checks on each worker who applies for a TWIC to ensure that individuals who enroll do not pose a security risk to the United States. A worker’s potential link to terrorism, criminal history, immigration status, and mental capacity are considered as part of the security threat assessment. Workers have the opportunity to appeal negative results of the threat assessment or to request a waiver of certain specified criminal offenses or of the immigration or mental capacity standards. Specifically, the TWIC background checking process includes two levels of review.

First-level review: Initial automated background checking. The initial automated background checking process determines whether any derogatory information is associated with the name and fingerprints submitted by an applicant during the enrollment process. This check is conducted against the FBI’s criminal history records. These records contain information from federal, state, and local sources in the FBI’s National Crime Information Center (NCIC) database and the FBI’s Integrated Automated Fingerprint Identification System (IAFIS)/Interstate Identification Index (III), which maintain criminal records and related fingerprint submissions.
Rather than positively confirming each individual’s identity using the submitted fingerprints, the FBI’s criminal history records check is a negative identification check, whereby the fingerprints are used to confirm that the associated individual is not on the FBI criminal history list. If an individual is identified as being on the FBI’s criminal history list, relevant information is to be forwarded to TSA for adjudication. The check is also conducted against federal terrorism information from the Terrorist Screening Database, including the Selectee and No-Fly Lists. To determine an applicant’s immigration/citizenship status and eligibility, TSA also runs applicant information against the Systematic Alien Verification for Entitlements (SAVE) system. If the applicant is identified as a U.S.-born citizen with no related derogatory information, the system can approve the issuance of a TWIC with no further review of the applicant or human intervention.

Second-level review: TSA’s Adjudication Center Review. A second-level review is conducted as part of an individual’s background check if (1) the applicant has self-identified as a non-U.S. citizen or as a non-U.S.-born citizen or national, or (2) the first-level review uncovers any derogatory information. As such, not all TWIC applicants are subjected to a second-level review. The second-level review consists of staff at TSA’s adjudication center reviewing the applicant’s enrollment file.

Card use and compliance: Once a TWIC has been activated and issued, the worker may present his or her TWIC to security officials when seeking unescorted access to a secure area. Currently, visual inspections of TWICs are required for controlling access to secure areas of MTSA-regulated facilities and vessels. Approaches for inspecting TWICs using biometric readers at individual facilities and vessels across the nation are being considered as part of a pilot but are not yet required.
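The first- and second-level review routing described above (automatic approval only for self-identified U.S.-born citizens with no derogatory information, escalation to TSA's adjudication center otherwise) can be sketched roughly as follows. This is an illustrative sketch, not TSA's actual system; the class, function, and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    us_born_citizen: bool          # self-reported at enrollment
    fbi_criminal_hit: bool         # FBI criminal history records check
    terrorist_watchlist_hit: bool  # Terrorist Screening Database check
    immigration_flag: bool         # SAVE system result

def route_application(a: Applicant) -> str:
    """First-level automated check: approve outright only when the
    applicant self-identifies as a U.S.-born citizen and no derogatory
    information surfaces; otherwise escalate to the adjudication center."""
    derogatory = (a.fbi_criminal_hit
                  or a.terrorist_watchlist_hit
                  or a.immigration_flag)
    if a.us_born_citizen and not derogatory:
        return "approve"          # no human intervention required
    return "second-level review"  # adjudicators examine the enrollment file
```

Note that, as the report observes, this routing means roughly half of applicants are approved with no human review at all.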
Pursuant to Coast Guard policy, Coast Guard inspectors are required to verify TWIC cards during annual compliance exams, security spot checks, and in the course of other Coast Guard duties as determined by the Captain of the Port based on risk and resource availability. The Coast Guard’s primary means of verification is shifting toward the use of biometric handheld readers with the continued deployment of readers to each of its Sectors and Marine Safety Units. As of December 21, 2010, the Coast Guard reported that it had deployed biometric handheld readers to all of its 35 Sectors and 16 Marine Safety Units. In August 2006, DHS officials decided, based on industry comment, to implement TWIC through two separate regulations, or rules. The first rule, issued in January 2007, directs the use of the TWIC as an identification credential, or flashpass. The second rule, the card reader rule, is currently under development and is expected to address how access control technologies, such as biometric card readers, are to be used for confirming the identity of the TWIC holder against the biometric information on the TWIC. On March 27, 2009, the Coast Guard issued an Advance Notice of Proposed Rulemaking for the card reader rule. To inform the rulemaking process, TSA initiated a pilot in August 2008, known as the TWIC reader pilot, to test TWIC-related access control technologies. This pilot is intended to test the technology, business processes, and operational impacts of deploying TWIC readers at secure areas of the marine transportation system. As such, the pilot is expected to test the feasibility and functionality of using TWICs with biometric card readers within the maritime environment. After the pilot has concluded, a report on the findings of the pilot is expected to inform the development of the card reader rule.
DHS currently estimates that a notice of proposed rulemaking will be issued late in calendar year 2011 and that the final rule will be promulgated no earlier than the end of calendar year 2012. According to agency officials, from fiscal years 2002 through 2010, the TWIC program had funding authority totaling $420 million. In issuing the credential rule, DHS estimated that implementing the TWIC program could cost the federal government and the private sector a combined total of between $694.3 million and $3.2 billion over a 10-year period. However, these figures did not include costs associated with implementing and operating readers. Appendix II contains additional program funding details. Standards for Internal Control in the Federal Government underscores the need for developing effective controls for meeting program objectives and complying with applicable regulations. Effective internal controls provide for an assessment of the risks the agency faces from both internal and external sources. Once risks have been identified, they should be analyzed for their possible effect. Management then has to decide upon the internal control activities required to mitigate those risks and achieve the objectives of efficient and effective operations. As part of this effort, management should design and implement internal controls based on the related cost and benefits. 
In addition, internal control standards highlight the need for the following: capturing information needed to meet program objectives; designing controls to assure that ongoing monitoring occurs in the course of normal operations; determining that relevant, reliable, and timely information is available for management decision-making purposes; conducting reviews and testing of development and modification activities before placing systems into operation; recording and communicating information to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities; and designing internal controls to provide reasonable assurance that compliance with applicable laws and regulations is being achieved, and provide appropriate supervisory review of activities to help provide oversight of operations. This includes designing and implementing appropriate supervisory review activities to help provide oversight and analyzing data to compare trends in actual performance to expected results to identify any areas that may require further inquiries or corrective action. Internal control also serves as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. An internal control weakness is a condition within an internal control system worthy of attention. A weakness, therefore, may represent a perceived, potential, or real shortcoming, or an opportunity to strengthen internal controls to provide a greater likelihood that the entity’s objectives will be achieved. DHS has established a system of TWIC-related processes and controls. However, internal control weaknesses governing the enrollment, background checking, and use of TWIC potentially limit the program’s ability to provide reasonable assurance that access to secure areas of MTSA-regulated facilities is restricted to qualified individuals. 
Specifically, internal controls in the enrollment and background checking processes are not designed to provide reasonable assurance that (1) only qualified individuals can acquire TWICs; (2) adjudicators follow a process with clear criteria for applying discretionary authority when applicants are found to have extensive criminal convictions; or (3) once issued a TWIC, TWIC holders have maintained their eligibility. To meet the stated program mission needs, TSA designed TWIC program processes to facilitate the issuance of TWICs to maritime workers. However, TSA did not assess the internal controls designed and in place to determine whether they provided reasonable assurance that the program could meet defined mission needs for limiting access to only qualified individuals. Further, internal control weaknesses in TWIC enrollment, background checking, and use could have contributed to the breach of selected MTSA-regulated facilities during covert tests conducted by our investigators. DHS has established a system of TWIC-related processes and controls that as of April 2011 has resulted in TWICs being denied to 1,158 applicants based on a criminal offense, criminal immigration offense, or invalid immigration status. However, the TWIC program’s internal controls for positively identifying an applicant, arriving at a security threat determination for that individual, and approving the issuance of a TWIC, are not designed to provide reasonable assurance that only qualified applicants can acquire TWICs. Assuring the identity and qualifications of TWIC-holders are two of the primary benefits that the TWIC program is to provide MTSA-regulated facility and vessel operators making access control decisions. 
If an individual presents an authentic TWIC acquired through fraudulent means when requesting access to the secure areas of a MTSA-regulated facility or vessel, the cardholder is deemed not to be a security threat to the maritime environment because the cardholder is presumed to have met TWIC-related qualifications during a background check. In such cases, these individuals could better position themselves to inappropriately gain unescorted access to secure areas of a MTSA-regulated facility or vessel. As confirmed by TWIC program officials, there are ways for an unqualified individual to acquire an authentic TWIC. According to TWIC program officials, to meet the stated program purpose, TSA’s focus in designing the TWIC program was on facilitating the issuance of TWICs to maritime workers. However, TSA did not assess internal controls prior to implementing the program. Further, prior to fielding the program, TSA did not conduct a risk assessment of the TWIC program to identify program risks and the need for controls to mitigate existing risks and weaknesses, as called for by internal control standards. Such an assessment could help provide reasonable assurance that control weaknesses in one area of the program do not undermine the reliability of other program areas or impede the program from meeting mission needs. TWIC program officials told us that control weaknesses were not addressed prior to initiating the TWIC program because they had not previously identified them, or because they would be too costly to address. However, officials did not provide documentation to support their cost concerns and told us that they did not complete an assessment that accounted for whether the program could achieve defined mission needs without implementing additional or compensating controls to mitigate existing risks, or the risks associated with not correcting for existing internal control weaknesses.
Our investigators conducted covert tests at enrollment center(s) to help test the rigor of the TWIC enrollment and background checking processes. The investigators fully complied with the enrollment application process. They were photographed and fingerprinted, and asserted themselves to be U.S.-born citizens. The investigators were successful in obtaining authentic TWIC cards despite going through the background-checking process. Not having internal controls designed to provide reasonable assurance that the applicant has (1) been positively identified, and (2) met all TWIC eligibility requirements, including not posing a security threat to MTSA-regulated facilities and vessels, could have contributed to the investigators’ successes. Specifically, we identified internal control weaknesses in the following three areas related to ensuring that only qualified applicants are able to obtain a TWIC. Controls to identify the use of potentially counterfeit identity documents are not used to inform background checking processes. As part of TWIC program enrollment, a trusted agent is to review identity documents for authenticity and use an electronic authentication device to assess the likelihood of the document being counterfeit. According to TWIC program officials, the trusted agent’s review of TWIC applicant identity documents and the assessment provided by the electronic authentication device are the two steps intended to serve as the primary controls for detecting whether an applicant is presenting counterfeit identity documents. Additionally, the electronic device used to assess the authenticity of identification credentials renders a score on the likelihood of the document being authentic and produces an assessment report in support of the score. Assessing whether the applicant’s credential is authentic is one source of information for positively identifying an applicant. 
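As described above, the trusted agent's review and the electronic authentication device's score are the two primary enrollment-time checks for counterfeit documents. A minimal sketch of how those two signals could be combined into a single flag follows; the 0-to-1 score scale and the threshold are assumptions for illustration, not the device's actual scoring scale.

```python
AUTHENTICITY_THRESHOLD = 0.8  # assumed cut-off; the real device's scale is not public

def flag_for_review(device_score: float, agent_flagged: bool) -> bool:
    """Return True when enrollment-time document checks (the electronic
    authentication device's likelihood score or the trusted agent's own
    judgment) suggest a possibly counterfeit identity document."""
    return agent_flagged or device_score < AUTHENTICITY_THRESHOLD
```

The weakness GAO identifies is that a flag like this is produced at enrollment but is not routinely fed into the first-level background check that can auto-approve an application.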
Our investigators provided counterfeit or fraudulently acquired documents, but the documents were not detected. Moreover, the TWIC program’s background checking processes are not designed to routinely consider the results of controls in place for assessing whether an applicant’s identity documents are authentic. For example, assessments of document authenticity made by a trusted agent or the electronic document authentication device as part of the enrollment process are not considered as part of the first-level background check. TWIC program officials agree that this is a program weakness. As of December 1, 2010, approximately 50 percent of TWICs were approved after the first-level background check without undergoing further review. As an initial step toward addressing this weakness, and in response to our review, TWIC program officials told us that since April 17, 2010, the comments provided at enrollment by trusted agents have been sent to the Screening Gateway, a TSA system for aggregating threat assessment data. However, this change in procedure does not correct the internal control weaknesses we identified. Attempts to authenticate copies of documents are limited because it is not possible to capture all of the security features, such as holograms or color-shifting ink, when copies of the identity documents are recorded. Using information on the authenticity of identity documents captured during enrollment to inform the background check could help TSA better assess the reliability and authenticity of such documents provided at enrollment. Controls related to the legal status of self-reported U.S.-born citizens or nationals. The TWIC program does not require that applicants claiming to be U.S.-born citizens or nationals provide identity documents that demonstrate proof of citizenship or lawful status in the United States. See appendix III for the list of documents U.S.-born citizens or nationals must select from and present when applying for a TWIC.
For example, an applicant could elect to provide one document, such as a U.S. passport, which, according to TSA officials, serves as proof of U.S. citizenship or nationality. However, an applicant could instead elect to submit documents that do not provide proof of citizenship. As of December 1, 2010, nearly 86 percent of approved TWIC enrollments were by self-identified United States citizens or nationals asserting that they were born in the United States or a United States territory. Verifying a U.S.-born citizen’s identity and related lawful status can be costly and is a challenge faced by other U.S. government programs, such as the passport program. However, reaching an accurate determination of a TWIC applicant’s potential security threat in meeting TWIC mission needs is dependent on positively identifying the applicant. Given such potential cost constraints, and consistent with internal control standards, identifying alternative mechanisms to positively identify individuals, to the extent that the benefits exceed the costs and TWIC program mission needs are met, could reduce the likelihood that criminals or terrorists acquire a TWIC fraudulently. Controls are not in place to determine whether an applicant has a need for a TWIC. Regulations governing the TWIC program security threat assessments require applicants to disclose their job description and location(s) where they will most likely require unescorted access, if known, and the name, telephone number, and address of the applicant’s current employer(s) if the applicant works for an employer that requires a TWIC. However, TSA enrollment processes do not require that this information be provided by applicants. For example, when applying for a TWIC, applicants are to certify that they may need a TWIC as part of their employment duties.
However, the enrollment process does not request information on the location where the applicant will most likely require unescorted access, and while applicants are asked whether they would like to provide employment information, they are informed that employer information is not required. According to TSA officials, a primary reason for not requiring that employer information be captured during enrollment is that many applicants do not have employers, because many employers will not accept employment applications from workers who do not already have a TWIC (a circumstance that did not exist prior to implementing the TWIC program). However, TSA could not provide (1) statistics on how many individuals applying for TWICs were unemployed at the time of their application or (2) a reason why the TWIC-related regulation does not prohibit employers from denying employment to non-TWIC holders who did not previously have a need for a TWIC. Further, according to TSA and Coast Guard officials, industry was opposed to having employment information verified as part of the application process, as industry representatives believed such checks would be too invasive and time-consuming. TSA officials further told us that confirming this information would be too costly. We recognize that implementing mechanisms to capture this information could be time-consuming and involve additional costs. However, collecting information on present employers, or on the operators of the MTSA-regulated facilities and vessels to be accessed by the applicant, to the extent that the benefits exceed the costs, could help ensure that TWIC program mission needs are being met and serve as a barrier to individuals attempting to acquire an authentic TWIC through fraudulent means.
Therefore, if TSA determines that implementing such mechanisms is, in fact, cost-prohibitive, identifying and implementing appropriate compensating controls could better position TSA to positively identify the TWIC applicant. Not taking any action increases the risk that individuals could gain unescorted access to secure areas of MTSA-regulated facilities and vessels. As of September 2010, TSA’s background checking process had identified no instances of nonimmigration-related document or identity fraud. This is in part because of previously discussed weaknesses in TWIC program controls for positively identifying applicants, and in part because the systems and procedures the TWIC program relies on are not designed to effectively monitor for such occurrences, in accordance with internal control standards. Though not an exhaustive list, through a review of Coast Guard reports and publicly available court records, we identified five court cases where the court documents indicate that illegal immigrants acquired, or in one of the cases sought to acquire, an authentic TWIC through fraudulent activity such as providing fraudulent identity information and, in at least one of the cases and potentially up to four, used the TWIC to access secure areas of MTSA-regulated facilities. Four of these cases were a result of, or involved, United States Immigration and Customs Enforcement efforts after individuals had acquired, or sought to acquire, a TWIC. As of September 2010, the program’s background checking process had identified 18 instances of potential fraud out of the approximately 1,676,000 TWIC enrollments. These instances all involved some type of fraud related to immigration.
The 18 instances of potential fraud were identified because the 18 individuals asserted themselves to be non-U.S.-born applicants and, unlike the processes in place for individuals asserting themselves to be U.S.-born citizens, TSA’s background checking process includes additional controls to validate such individuals’ identities. For example, TSA requires that at least one of the documents provided by such individuals at enrollment show proof of their legal status, and it seeks to validate each non-U.S.-born applicant’s identity with the U.S. Citizenship and Immigration Services. Internal control standards highlight the need for capturing information needed to meet program objectives; ensuring that relevant, reliable, and timely information is available for management decision-making purposes; and providing reasonable assurance that compliance with applicable laws and regulations is being achieved. Conducting a control assessment of the TWIC program’s processes to address existing weaknesses could enhance the TWIC program’s ability to prevent and detect fraud and positively identify TWIC applicants. Such an assessment could better position DHS to strengthen the program and ensure that it achieves its objectives in controlling access to MTSA-regulated facilities and vessels. Being convicted of a felony does not automatically disqualify a person from being eligible to receive a TWIC; however, prior convictions for certain crimes are automatically disqualifying. Threat assessment processes for the TWIC program include conducting background checks to determine whether each TWIC applicant poses a security threat. Some disqualifying offenses, such as espionage or treason, permanently disqualify an individual from obtaining a TWIC.
Other offenses, such as murder or the unlawful possession of an explosive device, while categorized as permanent disqualifiers, are also eligible for a waiver under TSA regulations and might not permanently disqualify an individual from obtaining a TWIC if TSA determines upon subsequent review that an applicant does not represent a security threat. Table 1 presents examples of disqualifying criminal offenses set out in statute and implementing regulations for consideration as part of the adjudication process. TSA also has the authority to add to or modify the list of interim disqualifying crimes. Further, in determining whether an applicant poses a security threat, TSA officials stated that adjudicators have the discretion to consider the totality of an individual’s criminal record, including criminal offenses not defined as permanent or interim disqualifying criminal offenses, such as theft or larceny. More specifically, TSA’s implementing regulations provide, in part, that with respect to threat assessments, TSA may determine that an applicant poses a security threat if the search conducted reveals extensive foreign or domestic criminal convictions, a conviction for a serious crime not listed as a permanent or interim disqualifying offense, or a period of foreign or domestic imprisonment that exceeds 365 consecutive days. Thus, if a person was convicted of multiple crimes, the number and type of convictions could be disqualifying even if none of the crimes was in and of itself disqualifying. Although TSA has the discretion and authority to consider criminal offenses not defined as disqualifying offenses, such as larceny and theft, and periods of imprisonment, TSA has not defined what extensive foreign or domestic criminal convictions means, or developed guidance to ensure that adjudicators apply this authority consistently in assessing the totality of an individual’s criminal record.
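The offense categories described above suggest a tiered eligibility check: permanent disqualifiers (some waiver-eligible), interim disqualifiers that lapse based on conviction or release dates, and a discretionary totality-of-record review for everything else. The following is a rough sketch under assumed offense lists and an assumed look-back window; the actual regulatory lists and time periods are more detailed than shown here.

```python
from datetime import date

# Illustrative subsets only; the regulations enumerate the full lists.
PERMANENT_NO_WAIVER = {"espionage", "treason"}
PERMANENT_WAIVERABLE = {"murder", "unlawful possession of explosive device"}
INTERIM = {"robbery", "drug distribution"}
INTERIM_LOOKBACK_YEARS = 7  # assumed window measured from release

def assess(offense: str, release: date, waiver_granted: bool = False,
           today: date = date(2010, 9, 8)) -> str:
    """Classify a single conviction against the tiered disqualifier scheme."""
    if offense in PERMANENT_NO_WAIVER:
        return "deny"
    if offense in PERMANENT_WAIVERABLE:
        # Permanent disqualifier, but eligible for a TSA waiver on review.
        return "approve" if waiver_granted else "deny"
    if offense in INTERIM:
        years_since = (today - release).days / 365.25
        if years_since < INTERIM_LOOKBACK_YEARS:
            return "deny"
    # Non-disqualifying offense, or interim window elapsed: falls to the
    # adjudicator's discretionary totality-of-record review.
    return "adjudicator discretion"
```

The last branch is the one GAO flags: 27 percent of approved applicants landed there, yet TSA has issued no criteria for how adjudicators should exercise that discretion.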
For example, TSA has not developed guidance or benchmarks for adjudicators to apply consistently when reviewing TWIC applicants with extensive criminal convictions but no disqualifying offense. This is particularly important given TSA's reasoning for including this authority in the TWIC-related regulation. Specifically, TSA noted that it understands that the flexibility this language provides must be used cautiously and on the basis of compelling information that can withstand judicial review. TSA further noted that the decision to determine whether an applicant poses a threat under this authority is largely a subjective judgment based on many facts and circumstances.

While TSA does not track metrics on the number of TWICs provided to applicants with specific criminal offenses not defined as disqualifying offenses, as of September 8, 2010, the agency reported 460,786 cases in which the applicant was approved but had a criminal record based on the results from the FBI. This represents approximately 27 percent of individuals approved for a TWIC at the time. In each of these cases, the applicant had either a criminal offense not defined as a disqualifying offense or an interim disqualifying offense that was no longer a disqualification based on the conviction date or the applicant's release date from incarceration. Consequently, based on TSA's background checking procedures, all of these cases would have been reviewed by an adjudicator as part of the second-level background check because derogatory information had been identified. As such, each of these cases had to be examined and a judgment made as to whether to deny the applicant a TWIC based on the totality of the offenses contained in the applicant's criminal report. While there were 460,786 cases in which the applicant was approved despite having a criminal record, TSA reported taking steps to deny only 1 TWIC applicant under this authority.
However, in the absence of guidance for the application of this authority, it is not clear how TSA applied it in approving the 460,786 applications and denying only 1. Internal control standards call for controls and other significant events to be clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. According to TSA officials, the agency has not implemented guidance for adjudicators on how to apply this discretion in a consistent manner because TSA is confident that adjudicators would, based on their own judgment, identify all applicants for whom the authority to deny a TWIC based on the totality of all offenses should be applied. However, in the absence of criteria, we were unable to analyze or compare how the approximately 30 adjudicators assigned to the TWIC program at any given time made determinations about TWIC applicants with extensive criminal histories. Given that 27 percent of TWIC holders have been convicted of at least one nondisqualifying offense, defining what extensive criminal convictions means and developing guidance or criteria for how adjudicators should apply this discretionary authority could help provide TSA with reasonable assurance that applications are adjudicated consistently. Defining terms and developing guidance would also be consistent with internal control standards.

DHS's defined mission needs for TWIC include identifying individuals who fail to maintain their eligibility requirements once issued a TWIC and immediately revoking those individuals' card privileges. Pursuant to TWIC-related regulations, an individual may be disqualified from holding a TWIC and be required to surrender the TWIC to TSA for failing to meet certain eligibility criteria related to, for example, terrorism, crime, and immigration status.
However, weaknesses exist in the design of the TWIC program's internal controls for identifying individuals who fail to maintain their eligibility, making it difficult for TSA to provide reasonable assurance that TWIC holders continue to meet all eligibility requirements.

Controls are not designed to determine whether TWIC holders have committed disqualifying crimes at the federal or state level after being granted a TWIC. TSA conducts a name-based check of TWIC holders against federal wants and warrants on an ongoing basis. According to FBI and TSA officials, however, policy and statutory provisions hamper the program from running the broader FBI fingerprint-based check, using the fingerprints collected at enrollment, on an ongoing basis. More specifically, because the TWIC background check is considered to be for a noncriminal justice purpose, to conduct an additional fingerprint-based check as part of an ongoing TWIC background check, TSA would have to collect a new set of fingerprints from the TWIC holder if the prints are more than 1 year old, and submit those prints to the FBI each time it wants to assess the TWIC holder's criminal history. According to TSA officials, it would be cost prohibitive to run the fingerprint-based check on an ongoing basis, as TSA would have to pay the FBI $17.25 per check. Although existing policies may hamper TSA's ability to check FBI-held fingerprint-based criminal history records for the TWIC program, TSA has not explored alternatives for addressing this weakness, such as informing facility and port operators of the weakness and identifying solutions for leveraging existing state criminal history information, where available. For instance, state maritime organizations may have other mechanisms at their disposal for helping to identify TWIC holders who may no longer meet TWIC qualification requirements.
Specifically, laws governing the maritime environment in New York and New Jersey provide for notifying credentialing authorities if licensed or registered longshoremen have been arrested. Further, other governing entities, such as the State of Florida and the Alabama State Port Authority, have access to state-based criminal records checks. While TSA may not have direct access to criminal history records, TSA could compensate for this control weakness by, for example, leveraging existing mechanisms available to maritime stakeholders across the country to better ensure that only qualified individuals retain TWICs.

Controls are not designed to provide reasonable assurance that TWIC holders continue to meet immigration status eligibility requirements. If a TWIC holder's stated period of legal presence in the United States is about to expire or has expired, the TWIC program does not request or require proof from the TWIC holder to show continued legal presence in the United States. Additionally, although it has the regulatory authority to do so, the program does not issue TWICs for a term of less than 5 years to match the expiration of a visa. Instead, TSA relies on (1) TWIC holders to self-report if they no longer have legal presence in the country and (2) employers to report if a worker is no longer legally present in the country. As we have previously reported, government programs for granting benefits to individuals face challenges in confirming an individual's immigration status. TWIC program officials stated that the program uses a U.S. Citizenship and Immigration Services system during the background checking process, prior to issuing a TWIC, as a method for confirming the legal status of non-U.S. citizens. TSA has not, however, consistent with internal control standards, implemented alternative controls to compensate for this limitation and provide reasonable assurance that TWIC holders remain eligible.
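One compensating control noted above, issuing TWICs for a term of less than 5 years to match a visa's expiration, amounts to a one-line date comparison at issuance. A minimal sketch; the function name and the simplified 5-year term (365-day years, ignoring leap-day conventions) are illustrative assumptions:

```python
from datetime import date, timedelta

# Simplified standard card term; the real 5-year term would follow calendar rules.
STANDARD_TERM = timedelta(days=5 * 365)

def twic_expiration(issue_date, legal_presence_ends=None):
    """Return the card's expiration date, capped at legal presence if known."""
    standard = issue_date + STANDARD_TERM
    if legal_presence_ends is None:  # e.g., U.S. citizens, permanent residents
        return standard
    return min(standard, legal_presence_ends)

# A visa expiring in 2 years yields a 2-year card instead of a 5-year card:
print(twic_expiration(date(2011, 1, 1), date(2013, 1, 1)))  # 2013-01-01
```

The point of the sketch is that the control is computationally trivial; the open question the text raises is whether the administrative costs of shorter-lived cards outweigh the security benefit, which is what a cost assessment would establish.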
For instance, the TWIC program has not compensated for this limitation by (1) using its authority to issue TWICs with shorter expiration dates to correspond with each individual's period of legal presence or (2) updating the TWIC system to systematically suspend TWIC privileges for individuals who no longer meet immigration eligibility requirements until they can provide evidence of continued legal presence. TWIC program officials stated that implementing these compensating measures would be too costly, but they have not conducted an assessment to identify the costs of implementing these controls or determined whether the benefits of mitigating the related security risks would outweigh those costs, consistent with internal control standards. Without such measures, individuals who no longer meet TWIC legal presence requirements could continue to hold a federally issued identity document and gain unescorted access to secure areas of MTSA-regulated facilities and vessels. Thus, implementing compensating measures, to the extent that the benefits outweigh the costs and meet the program's defined mission needs, could provide TSA, the Coast Guard, and MTSA-regulated stakeholders with reasonable assurance that each TWIC holder continues to meet TWIC-related eligibility requirements.

As of January 7, 2011, the Coast Guard reported that it had identified 11 known attempts to circumvent TWIC requirements for gaining unescorted access to MTSA-regulated areas by presenting counterfeit TWICs. The Coast Guard further reported identifying 4 instances of individuals presenting another person's TWIC as their own in attempts to gain access. In addition, our investigators conducted covert tests to assess the use of TWIC as a means for controlling access to secure areas of MTSA-regulated facilities.
During covert tests of TWIC at several selected ports, our investigators successfully accessed ports using counterfeit TWICs, authentic TWICs acquired through fraudulent means, and false business cases (i.e., reasons for requesting access). Our investigators did not gain unescorted access to a port where a secondary, port-specific identification was required in addition to the TWIC. In response to our covert tests, TSA and Coast Guard officials stated that, while a TWIC card is required for gaining unescorted access to secure areas of an MTSA-regulated facility, the card alone is not sufficient. These officials stated that the cardholder is also required to present a business case, which security officials at facilities must consider as part of granting the individual access. In addition, according to DHS's Screening Coordination Office, a credential is only one layer of a multilayer process to increase security. Other layers of security might include onsite law enforcement, security personnel, cameras, locked doors and windows, alarm systems, gates, and turnstiles. Thus, a weakness in the implementation of TWIC will not guarantee access to the secure areas of an MTSA-regulated port or facility. However, as our covert tests demonstrated, an authentic TWIC and a legitimate business case were not always required in practice. The investigators' possession of TWIC cards gave them the appearance of legitimacy and facilitated their unescorted entry into secure areas of MTSA-regulated facilities and ports at multiple locations across the country. If individuals are able to acquire authentic TWICs fraudulently, verifying the authenticity of these cards with a biometric reader will not reduce the risk of undesired individuals gaining unescorted access to the secure areas of MTSA-regulated facilities and vessels.
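The limitation is structural: a reader can only confirm that a card is genuine, unrevoked, and bound to its bearer; it cannot revisit the identity evidence presented at enrollment. A sketch, assuming the three kinds of electronic check planned for readers (biometric match, certificate authenticity, revocation-list lookup) and hypothetical field names:

```python
# Hypothetical illustration: a fraudulently obtained but authentic card passes
# every check a reader can perform. Field names and the hotlist are invented.
HOTLIST = {"card-0042"}  # revoked/invalid card IDs known to the reader

def reader_checks(card, live_fingerprint):
    """Return (overall_pass, detail) for the three electronic checks."""
    checks = {
        "verification": card["fingerprint_template"] == live_fingerprint,
        "authentication": card["certificate_valid"],  # stand-in for a PKI check
        "validation": card["id"] not in HOTLIST,
    }
    return all(checks.values()), checks

# An authentic TWIC issued against a fraudulent identity: the enrollment fraud
# is invisible to the reader, so all three checks pass.
fraudulent_enrollee_card = {
    "id": "card-0007",
    "fingerprint_template": "fp-abc",  # genuinely the bearer's fingerprint
    "certificate_valid": True,         # genuinely issued credential
}
ok, _ = reader_checks(fraudulent_enrollee_card, "fp-abc")
print(ok)  # True
```

Counterfeit cards and borrowed cards fail the authentication and verification checks, respectively; only fraud at enrollment survives all three, which is why the report treats enrollment controls and reader checks as complements rather than substitutes.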
Given the internal control weaknesses identified above, conducting a control assessment of the TWIC program's processes could enhance the TWIC program's ability to prevent and detect fraud and positively identify TWIC applicants. Such an assessment could better position DHS to strengthen the program and ensure it achieves its objectives in controlling unescorted access to MTSA-regulated facilities and vessels. It could also help DHS identify and implement the minimum controls needed to (1) positively identify individuals, (2) provide reasonable assurance that control weaknesses in one area of the program would not undermine the reliability of other program areas or impede the program from meeting mission needs, and (3) provide reasonable assurance that threat assessments are based on complete and accurate information. Such actions would be consistent with internal control standards, which highlight the need for capturing information needed to meet program objectives; determining that relevant, reliable, and timely information is available for management decision-making purposes; and designing internal controls to provide reasonable assurance that compliance with applicable laws and regulations is being achieved. Moreover, our prior work on internal controls has shown that management should design and implement internal controls based on the related costs and benefits and continually assess and evaluate its internal controls to ensure that the controls being used are effective and updated when necessary.

The TWIC program is intended to improve maritime security by using a federally sponsored credential to enhance access controls to secure areas at MTSA-regulated facilities and vessels, but DHS has not assessed the program's effectiveness at enhancing security.
In addition, the Coast Guard's approach for monitoring and enforcing TWIC compliance nationwide could be improved by enhancing its collection and assessment of related maritime security information. For example, the Coast Guard tracks TWIC program compliance, but the processes involved in the collection, cataloguing, and querying of information cannot be relied on to produce the management information needed to assess trends in compliance with the TWIC program or associated vulnerabilities.

DHS asserted in its 2009 and 2010 budget submissions that the absence of the TWIC program would leave America's critical maritime port facilities vulnerable to terrorist activities. However, to date, DHS has not assessed the effectiveness of TWIC at enhancing security or reducing risk for MTSA-regulated facilities and vessels. Such assessments are consistent with DHS's National Infrastructure Protection Plan, which recognizes that metrics and other evaluation procedures should be used to measure progress and assess the effectiveness of programs designed to protect key assets. Further, DHS has not demonstrated that TWIC, as currently implemented and as planned with readers, is more effective than prior approaches used to limit access to ports and facilities, such as using facility-specific identity credentials with business cases. According to TSA and Coast Guard officials, because the program was mandated by Congress as part of MTSA, DHS did not conduct a risk assessment to identify and mitigate program risks prior to implementation. Further, according to these officials, neither the Coast Guard nor TSA analyzed the potential effectiveness of TWIC in reducing or mitigating security risk, either before or after implementation, because they were not required to do so by Congress. Rather, DHS assumed that the TWIC program's enrollment and background checking procedures were effective and would not allow unqualified individuals to acquire and retain authentic TWICs.
The internal control weaknesses that we discuss earlier in this report, as well as the results of our covert tests of TWIC use, raise questions about the effectiveness of the TWIC program. According to the Coast Guard official responsible for conducting assessments of maritime risk, it may now be possible to assess TWIC's effectiveness, and the extent to which, or whether, TWIC use could enhance security, using current Maritime Security Risk Analysis Model (MSRAM) data. Since MSRAM's deployment in 2005, the Coast Guard has used the model to help inform decisions on how best to secure the nation's ports and how best to allocate limited resources to reduce terrorist risks in the maritime environment. Moreover, as we have previously reported, Congress needs information on whether, and in what respects, a program is working well or poorly to support its oversight of agencies and their budgets, and agencies' stakeholders need performance information to accurately judge program effectiveness. Conducting an effectiveness assessment that evaluates whether use of TWIC in its present form, and as planned with readers, would enhance the posture of security beyond efforts already in place, given costs and program risks, could better position DHS and policymakers to determine the impact of TWIC on maritime security.

Further, pursuant to executive branch requirements, prior to issuing a new regulation, agencies are to conduct a regulatory analysis, which is to include an assessment of costs, benefits, and associated risks. Prior to issuing the regulation on implementing the use of TWIC as a flashpass, DHS conducted a regulatory analysis, which asserted that TWIC would increase security. The analysis included an evaluation of the costs and benefits related to implementing TWIC. However, DHS did not conduct a risk-informed cost-benefit analysis that considered existing security risks.
For example, the analysis did not account for the costs and security risks associated with designing program controls to prevent an individual from acquiring an authentic TWIC using a fraudulent identity and with limiting access to secure areas of MTSA-regulated facilities and vessels to those with a legitimate need, in accordance with stated mission needs. Because a proposed regulation on the use of TWIC with biometric card readers is under development, DHS is to issue a new regulatory analysis. Using the information from the internal control and effectiveness assessments as the basis for evaluating the costs, benefits, security risks, and needed corrective actions could better inform and enhance the reliability of the new regulatory analysis. Moreover, these actions could help DHS identify and assess the full costs and benefits of implementing the TWIC program in a manner that will meet stated mission needs and mitigate existing security risks, and help ensure that the TWIC program is more effective and cost-efficient than existing measures or alternatives at enhancing maritime security.

Internal control standards state that (1) internal controls should be designed to ensure that ongoing monitoring occurs in the course of normal operations, and (2) information should be communicated in a form and within a time frame that enables management to carry out its internal control responsibilities. Further, our prior work has stated that Congress needs information on whether, and in what respects, a program is working well or poorly to support its oversight of agencies and their budgets, and agencies' stakeholders need performance information to accurately judge program effectiveness. The Coast Guard uses its Marine Information for Safety and Law Enforcement (MISLE) database to meet these needs by recording activities related to MTSA-regulated facility and vessel oversight, including observations of TWIC-related deficiencies.
The purpose of MISLE is to provide the capability to collect, maintain, and retrieve information necessary for the administration, management, and documentation of Coast Guard activities. In February 2008, we reported that flaws in the data in MISLE limit the Coast Guard's ability to accurately portray and appropriately target oversight activities.

In accordance with Coast Guard policy, Coast Guard inspectors are required to verify TWIC cards during annual compliance exams and security spot checks, and may do so in the course of other Coast Guard duties. As part of each inspection, Coast Guard inspectors are, among other things, to (1) ensure that the card is authentic by examining it to visually verify that it has not been tampered with; (2) verify identity by comparing the photograph on the card with the TWIC holder to ensure a match; (3) check the card's physical security features; and (4) ensure the TWIC is valid, via a check of the card's expiration date. Additionally, Coast Guard inspectors are to assess the proficiency of facility and vessel security personnel in complying with TWIC requirements through various means, including oral examination, direct observation, and record review. Coast Guard inspectors randomly select workers to check their TWICs during inspections; the number of TWIC cards checked is left to the discretion of the inspectors. As of December 17, 2010, according to Coast Guard data, 2,135 facilities had undergone at least 2 MTSA inspections as part of annual compliance exams and spot checks. In reviewing the Coast Guard's records of TWIC-related enforcement actions, we found that, in addition to verifying the number of inspections conducted, the Coast Guard is generally positioned to verify that TWIC cards are being checked by Coast Guard inspectors and, for the card checks that are recorded, the number of cardholders who are compliant and noncompliant.
For instance, the Coast Guard reported inspecting 129,464 TWIC holders' cards from May 2009 through January 6, 2011. The Coast Guard reported that 124,203 of the TWIC holders, or 96 percent, were found to be compliant, that is, in possession of a valid TWIC. However, according to Coast Guard officials, local Coast Guard inspectors may not always or consistently record all inspection attempts. Consequently, while Coast Guard officials told us that inspectors verify TWICs as part of all security inspections, the Coast Guard could not reliably provide the number of TWICs checked during each inspection.

Since the national compliance deadline of April 2009 requiring TWIC use at MTSA-regulated facilities and vessels, the Coast Guard has not identified major concerns with TWIC implementation nationally. However, while the Coast Guard uses MISLE to track program compliance, because of limitations in the MISLE system design, the processes involved in the collection, cataloguing, and querying of information cannot be relied upon to produce the management information needed to assess trends in compliance with the TWIC program or associated vulnerabilities. For instance, when inspectors document a TWIC card verification check, the system is set up to record the number of TWICs reviewed for different types of workers and whether the TWIC holders are compliant or noncompliant. However, other details on TWIC-related deficiencies, such as failure to ensure that all facility personnel with security duties are familiar with all relevant aspects of the TWIC program and how to carry them out, are not recorded in the system in a form that allows inspectors or other Coast Guard officials to easily and systematically identify that a deficiency was related to TWIC. For example, from January 2009 through December 2010, the Coast Guard reported issuing 145 enforcement actions as a result of annual compliance exams or security spot checks at the 2,135 facilities that have undergone the inspections.
These included 57 letters of warning, 40 notices of violation, 32 civil penalties, and 16 operations controls (suspension or restriction of operations). However, according to a Coast Guard official responsible for TWIC compliance, it would be labor-intensive for the Coast Guard to identify how many of the 57 letters of warning or 40 notices of violation were TWIC related, because no existing query is designed to extract this information from the system. Someone would have to manually review each of the 97 inspection reports in the database indicating either a letter of warning or a notice of violation to verify whether the deficiencies were TWIC related. As such, the MISLE system is not designed to readily provide information that could help management measure and assess the overall level of compliance with the TWIC program or existing vulnerabilities. According to a Coast Guard official responsible for TWIC compliance, Coast Guard headquarters staff have not conducted a trend analysis of the deficiencies found during reviews and inspections, and there are no other analyses they plan to conduct regarding enforcement until after readers are required to be used. According to the Coast Guard, it can generally identify the number of TWICs checked and recorded in the MISLE system. However, it cannot perform the trend analysis of deficiencies that it would like to, because doing so requires additional information.

In the interim, as of January 7, 2011, the Coast Guard reported deploying 164 handheld biometric readers nationally to units responsible for conducting inspections. These handheld readers are intended to be the Coast Guard's primary means of TWIC verification.
During inspections, Coast Guard inspectors use the card readers to electronically check TWICs in three ways: (1) verification, a biometric one-to-one match of the fingerprint; (2) authentication, electronically confirming that the certificates on the credential are authentic; and (3) validation, electronically checking the card against the "hotlist" of invalid or revoked cards. The Coast Guard believes that the use of these readers during inspections will greatly improve the effectiveness of enforcement efforts and enhance record keeping through the use of the readers' logs.

As a result of limitations in MISLE's design and in the collection and recording of inspection data, it will be difficult for the Coast Guard to identify nationwide trends in TWIC-related compliance, such as whether particular types of facilities or a particular region of the country have greater levels of noncompliance, on an ongoing basis. Coast Guard officials acknowledged these deficiencies and reported that they are in the process of making enhancements to the MISLE database and plan to distribute updated guidance on how to collect and input information into MISLE to the Captains of the Port. However, as of January 2011, the Coast Guard had not yet set a date for implementing these changes. Further, while this is a good first step, these enhancements do not address weaknesses related to the collection process and querying of MISLE information so as to facilitate the Coast Guard's performing trend analysis of deficiencies as part of its compliance reviews. By designing and implementing a cost-effective and practical method for collecting, cataloguing, and querying TWIC-related compliance information, the Coast Guard could be better positioned to identify and assess TWIC-related compliance and enforcement trends, and to obtain the management information needed to assess and understand existing vulnerabilities with the use of TWIC.
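The kind of query that is currently labor-intensive, counting TWIC-related enforcement actions by type or region, becomes trivial once each enforcement record carries a structured TWIC flag. An illustrative sketch; the field names and sample records are hypothetical, not the MISLE schema:

```python
from collections import Counter

# Hypothetical enforcement records with an explicit twic_related flag, the
# structured field whose absence forces manual file review today.
actions = [
    {"type": "letter of warning",   "twic_related": True,  "region": "Gulf"},
    {"type": "letter of warning",   "twic_related": False, "region": "Gulf"},
    {"type": "notice of violation", "twic_related": True,  "region": "East"},
]

def twic_trend(actions):
    """Count TWIC-related enforcement actions by region."""
    return Counter(a["region"] for a in actions if a["twic_related"])

print(twic_trend(actions))  # one TWIC-related action each in Gulf and East
```

The design point is that trend analysis requires no new analytic capability, only that the relevance of each deficiency to TWIC be captured as a queryable field at collection time rather than buried in narrative inspection reports.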
As the TWIC program continues on the path to full implementation, with potentially billions of dollars at stake to install TWIC card readers in thousands of the nation's ports, facilities, and vessels, it is important that Congress, program officials, and maritime industry stakeholders fully understand the program's potential benefits and vulnerabilities, as well as the likely costs of addressing those vulnerabilities. Identified internal control weaknesses and vulnerabilities include weaknesses in controls related to preventing and detecting identity fraud, assessing the security threat posed by individuals with extensive criminal histories prior to issuing a TWIC, and ensuring that TWIC holders continue to meet program eligibility requirements. Thus, conducting an internal control assessment of the program, by analyzing controls, identifying related weaknesses and risks, and determining cost-effective actions to correct or compensate for these weaknesses, could better position DHS to provide reasonable assurance that control weaknesses do not impede the program from meeting mission needs. In addition, conducting an effectiveness assessment could help provide reasonable assurance that the use of TWIC enhances the posture of security beyond efforts already in place, or could identify the extent to which TWIC may introduce security vulnerabilities because of the way it has been designed and implemented. This assessment, along with the internal control assessment, could be used to enhance the regulatory analysis to be conducted as part of implementing a regulation on the use of TWIC with readers. More specifically, considering identified security risks and needed corrective actions as part of the regulatory analysis could provide insights on the full costs and benefits of implementing the TWIC program in a manner that will meet stated mission needs and mitigate existing security risks.
This is important because, unlike prior access control approaches that allowed access to a specific facility, a TWIC potentially facilitates access to thousands of facilities once the federal government attests that the TWIC holder has been positively identified and is deemed not to be a security threat. Further, doing so as part of the regulatory analysis could better assure DHS, Congress, and maritime stakeholders that TWIC program security objectives will be met. Finally, by designing and implementing a cost-effective and practical method for collecting, cataloguing, and querying TWIC-related compliance information, the Coast Guard could be better positioned to identify trends and to obtain the management information needed to assess and understand existing vulnerabilities with the use of TWIC.

To identify effective and cost-efficient methods for meeting TWIC program objectives, and to assist in determining whether the benefits of continuing to implement and operate the TWIC program in its present form and planned use with readers surpass the costs, we recommend that the Secretary of Homeland Security take the following four actions: Perform an internal control assessment of the TWIC program by (1) analyzing existing controls, (2) identifying related weaknesses and risks, and (3) determining cost-effective actions needed to correct or compensate for those weaknesses so that reasonable assurance of meeting TWIC program objectives can be achieved.
This assessment should consider, among other things, the weaknesses we identified in this report, and should include: strengthening the TWIC program's controls for preventing and detecting identity fraud, such as by requiring certain biographic information from applicants and confirming the information to the extent needed to positively identify the individual, or by implementing alternative mechanisms to positively identify individuals; defining the term extensive criminal history for use in the adjudication process and ensuring that adjudicators follow a clearly defined and consistently applied process, with clear criteria, in considering the approval or denial of a TWIC for individuals with extensive criminal convictions not defined as permanent or interim disqualifying offenses; and identifying mechanisms for detecting whether TWIC holders continue to meet TWIC disqualifying criminal offense and immigration-related eligibility requirements after TWIC issuance, to prevent unqualified individuals from retaining and using authentic TWICs.

Conduct an effectiveness assessment that includes addressing internal control weaknesses and, at a minimum, evaluates whether use of TWIC in its present form and planned use with readers would enhance the posture of security beyond efforts already in place, given costs and program risks.

Use the information from the internal control and effectiveness assessments as the basis for evaluating the costs, benefits, security risks, and corrective actions needed to implement the TWIC program in a manner that will meet stated mission needs and mitigate existing security risks, as part of conducting the regulatory analysis on implementing a new regulation on the use of TWIC with biometric card readers.
Direct the Commandant of the Coast Guard to design effective methods for collecting, cataloguing, and querying TWIC-related compliance issues to provide the Coast Guard with the enforcement information needed to assess trends in compliance with the TWIC program and identify associated vulnerabilities.

We provided a draft of the sensitive version of this report to the Secretary of Homeland Security for review and comment on March 18, 2011. DHS provided written comments on behalf of the Department, the Transportation Security Administration, and the United States Coast Guard, which are reprinted in full in appendix IV. In commenting on our report, DHS stated that it concurred with our four recommendations and identified actions planned or under way to implement them. While DHS did not take issue with the results of our work, it did provide new details in its response that merit additional discussion.

First, DHS noted that it is working to strengthen controls around applicant identity verification in TWIC, but that document fraud is a vulnerability for credential-issuance programs across the federal government, state and local governments, and the private sector. DHS further noted that a governmentwide infrastructure does not exist for information sharing across all entities that issue the documents that other programs, such as TWIC, use to positively authenticate an individual's identity. We acknowledge that such a governmentwide infrastructure does not exist and, as discussed in this report, recognize that there are inherent weaknesses in relying on identity documents alone to confirm an individual's identity. However, positively identifying individuals, or confirming their identity, and determining their eligibility for a TWIC is a key stated program goal. Issuing TWICs to individuals without positively identifying them and subsequently assuring their eligibility could, counter to the program's intent, create a security vulnerability.
While we recognize that additional costs could be imposed by requiring positive identification checks, taking actions to strengthen the existing identity authentication process, such as only accepting documents that TSA can and does confirm to be authentic with the issuing agency, and verifying an applicant’s business need, could enhance TWIC program efforts to prevent and detect identity fraud and enhance maritime security. Second, DHS stated that it is working to continually verify TWIC-holder eligibility after issuance but also noted the limitations in the current process. While TSA does receive some criminal history records information when it sends fingerprints to the FBI, the information is not provided recurrently, nor is the information necessarily complete. DHS stated that to provide the most robust recurrent vetting against criminal records, TSA would need access to additional state and federal systems, and have additional authority to do so. As we reported, FBI and TWIC officials stated that because the TWIC background check is considered to be for a noncriminal justice purpose, policy and statutory provisions hamper the program from running the broader FBI fingerprint-based check using the fingerprints collected at enrollment on an ongoing basis. However, we continue to believe that TSA could compensate for this weakness by leveraging existing mechanisms available to maritime stakeholders. For example, other governing entities—such as the Alabama State Port Authority—that have an interest in ensuring the security of the maritime environment, might be willing to establish a mechanism for independently sharing relevant information when warranted. Absent efforts to leverage available information sources, TSA may not be successful in tempering existing limitations. Lastly, DHS sought clarification on the reporting of our investigators’ success at breaching security at ports during covert testing. 
Specifically, in its comments, DHS noted that it believes that our report’s focus on access to port areas rather than access to individual facilities can be misleading. DHS noted that we do not report on the number of facilities that our investigators attempted to gain access to within each port area. DHS stated that presenting the breaches in terms of the number of port areas breached rather than the number of facilities paints a more troublesome picture of the actual breaches that occurred. We understand DHS’s concern but continue to believe that the results of our investigators’ work, as reported, fairly and accurately represent the results and significance of the work conducted. The goal of the covert testing was to assess whether or not weaknesses exist at ports with varying characteristics across the nation, not to define the pervasiveness of existing weaknesses by type of facility, volume, or other characteristic. Given the numerous differences across facilities and the lack of publicly available information and related statistics for each of the approximately 2,509 MTSA-regulated facilities, we identified covert testing at the port level to be the proper unit of analysis for our review and reporting purposes. Conducting a detailed assessment of the pervasiveness of existing weaknesses by type of facility, volume, or other characteristics as suggested by DHS would be a more appropriate tasking for the Coast Guard as part of its continuing effort to ensure compliance with TWIC-related regulations. In addition, with regard to covert testing, DHS further commented that the report does not distinguish between breaches in security using a counterfeit TWIC and those using an authentic TWIC card obtained with fraudulent documents. DHS noted that because there is no “granularity” with the report as to when a specific card was used, one can be left with the unsupported impression that individual facilities in all cases were failing to implement TWIC visual inspection requirements.
For the above noted reason, we did not report on the results of covert testing at the facility level. However, our records show that use of counterfeit TWICs was successful for gaining access to more than one port where our investigators breached security. Our investigators further report that security officers never questioned the authenticity of TWICs presented for acquiring access. Our records show that operations at the locations our investigators breached included cargo, containers, and fuel, among others. In addition, TSA provided written technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Assistant Secretary for the Transportation Security Administration, the Commandant of the United States Coast Guard, and appropriate congressional committees. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Range based on a reduced fee of $105.25 per TWIC for workers with current, comparable background checks or a $132.50 fee per TWIC for those without. The federal security grant funding subtotal for fiscal year 2007 includes $19.2 million in Port Security Grant Program funding, $10.8 million in supplemental funding, and $1.5 million in Transit Security Grant Program funding. TWIC applicants who are citizens of the United States (or its outlying possessions) and were born inside the United States (or its outlying possessions) must provide one document from list A or two documents from list B. If two documents from list B are presented, at least one of them must be a government-issued photo ID card, such as a state identification card, driver’s license, or military identification. Listed below are criminal offenses that can prevent TWIC applicants from being issued a TWIC. Pursuant to TSA implementing regulations, permanent disqualifying offenses are offenses defined in 49 C.F.R. § 1572.103(a). Permanent disqualifying offenses that can be waived are those offenses defined in 49 C.F.R. § 1572.103(a) for which a waiver can be granted in accordance with 49 C.F.R. § 1515.7(a)(i). Interim disqualifying offenses are offenses defined in 49 C.F.R. § 1572.103(b) for which the applicant has either been (1) convicted, or found not guilty by reason of insanity, within a 7-year period preceding the TWIC application, or (2) released from incarceration for the offense within a 5-year period preceding the TWIC application. Applicants with certain permanent criminal offenses and all interim disqualifying criminal offenses may request a waiver of their disqualification. In general, TSA may issue such a waiver and grant a TWIC if TSA determines that an applicant does not pose a security threat based upon the security threat assessment. Permanent disqualifying criminal offenses for which no waiver may be granted: 1. Espionage, or conspiracy to commit espionage. 2. Sedition, or conspiracy to commit sedition. 3. Treason, or conspiracy to commit treason. 4. A federal crime of terrorism as defined in 18 U.S.C. § 2332b(g), or comparable state law, or conspiracy to commit such crime. Permanent disqualifying criminal offenses for which a waiver may be granted: 1. A crime involving a transportation security incident. A transportation security incident is a security incident resulting in a significant loss of life, environmental damage, transportation system disruption, or economic disruption in a particular area, as defined in 46 U.S.C. § 70101.
The term economic disruption does not include a work stoppage or other employee-related action not related to terrorism and resulting from an employer-employee dispute. 2. Improper transportation of a hazardous material under 49 U.S.C. § 5124, or a state law that is comparable. 3. Unlawful possession, use, sale, distribution, manufacture, purchase, receipt, transfer, shipping, transporting, import, export, storage of, or dealing in an explosive or explosive device. An explosive or explosive device includes, but is not limited to, an explosive or explosive material as defined in 18 U.S.C. §§ 232(5), 841(c) through 841(f), and 844(j); and a destructive device, as defined in 18 U.S.C. § 921(a)(4) and 26 U.S.C. § 5845(f). 4. Murder. 5. Making any threat, or maliciously conveying false information knowing the same to be false, concerning the deliverance, placement, or detonation of an explosive or other lethal device in or against a place of public use, a state or government facility, a public transportation system, or an infrastructure facility. 6. Violations of the Racketeer Influenced and Corrupt Organizations Act, 18 U.S.C. § 1961, et seq., or a comparable state law, where one of the predicate acts found by a jury or admitted by the defendant consists of one of the crimes listed in 49 C.F.R. § 1572.103(a). 7. Attempt to commit the crimes listed under 49 C.F.R. § 1572.103(a)(1) through (a)(4). 8. Conspiracy or attempt to commit the crimes in 49 C.F.R. § 1572.103(a)(5) through (a)(10). The interim disqualifying felonies: 1. Unlawful possession, use, sale, manufacture, purchase, distribution, receipt, transfer, shipping, transporting, delivery, import, export of, or dealing in a firearm or other weapon. A firearm or other weapon includes, but is not limited to, firearms as defined in 18 U.S.C. § 921(a)(3) or 26 U.S.C. § 5845(a), or items contained on the United States Munitions Import List at 27 C.F.R. § 447.21. 2. Extortion. 3. Dishonesty, fraud, or misrepresentation, including identity fraud and money laundering where the money laundering is related to a crime described in 49 C.F.R. § 1572.103(a) or (b). Welfare fraud and passing bad checks do not constitute dishonesty, fraud, or misrepresentation for purposes of this paragraph. 4. Bribery. 5. Smuggling. 6. Immigration violations. 7. Distribution of, possession with intent to distribute, or importation of a controlled substance. 8. Arson. 9. Kidnapping or hostage taking. 10. Rape or aggravated sexual abuse. 11. Assault with intent to kill. 12. Robbery. 13. Fraudulent entry into a seaport as described in 18 U.S.C. § 1036, or a comparable state law. 14. Violations of the Racketeer Influenced and Corrupt Organizations Act, 18 U.S.C. § 1961, et seq., or a comparable state law, other than the violations listed in 49 C.F.R. § 1572.103(a)(10). 15. Conspiracy or attempt to commit the interim disqualifying felonies. Appendix VII: GAO Contact and Staff Acknowledgments In addition to the contact named above, David Bruno (Assistant Director), Joseph P. Cruz, Scott Fletcher, Geoffrey Hamilton, Richard Hung, Lemuel Jackson, Linda Miller, Jessica Orr, and Julie E. Silvers made key contributions to this report.

Within the Department of Homeland Security (DHS), the Transportation Security Administration (TSA) and the U.S. Coast Guard manage the Transportation Worker Identification Credential (TWIC) program, which requires maritime workers to complete background checks and obtain a biometric identification card to gain unescorted access to secure areas of regulated maritime facilities. As requested, GAO evaluated the extent to which (1) TWIC processes for enrollment, background checking, and use are designed to provide reasonable assurance that unescorted access to these facilities is limited to qualified individuals; and (2) the effectiveness of TWIC has been assessed.
GAO reviewed program documentation, such as the concept of operations, and conducted site visits to four TWIC centers, conducted covert tests at several selected U.S. ports chosen for their size in terms of cargo volume, and interviewed agency officials. The results of these visits and tests are not generalizable but provide insights and perspective about the TWIC program. This is a public version of a sensitive report. Information DHS deemed sensitive has been redacted. Internal control weaknesses governing the enrollment, background checking, and use of TWIC potentially limit the program's ability to provide reasonable assurance that access to secure areas of Maritime Transportation Security Act (MTSA)-regulated facilities is restricted to qualified individuals. To meet the stated program purpose, TSA designed TWIC program processes to facilitate the issuance of TWICs to maritime workers. However, TSA did not assess the internal controls designed and in place to determine whether they provided reasonable assurance that the program could meet defined mission needs for limiting access to only qualified individuals. GAO found that internal controls in the enrollment and background checking processes are not designed to provide reasonable assurance that (1) only qualified individuals can acquire TWICs; (2) adjudicators follow a process with clear criteria for applying discretionary authority when applicants are found to have extensive criminal convictions; or (3) once issued a TWIC, TWIC-holders have maintained their eligibility. Further, internal control weaknesses in TWIC enrollment, background checking, and use could have contributed to the breach of MTSA-regulated facilities during covert tests conducted by GAO's investigators. 
During covert tests of TWIC use at several selected ports, GAO's investigators were successful in accessing ports using counterfeit TWICs, authentic TWICs acquired through fraudulent means, and false business cases (i.e., reasons for requesting access). Conducting a control assessment of the TWIC program's processes to address existing weaknesses could better position DHS to achieve its objectives in controlling unescorted access to the secure areas of MTSA-regulated facilities and vessels. DHS has not assessed the TWIC program's effectiveness at enhancing security or reducing risk for MTSA-regulated facilities and vessels. Further, DHS has not demonstrated that TWIC, as currently implemented and planned, is more effective than prior approaches used to limit access to ports and facilities, such as using facility-specific identity credentials with business cases. Conducting an effectiveness assessment that further identifies and assesses TWIC program security risks and benefits could better position DHS and policymakers to determine the impact of TWIC on enhancing maritime security. Further, DHS did not conduct a risk-informed cost-benefit analysis that considered existing security risks, and it has not yet completed a regulatory analysis for the upcoming rule on using TWIC with card readers. Conducting a regulatory analysis using the information from the internal control and effectiveness assessments as the basis for evaluating the costs, benefits, security risks, and corrective actions needed to implement the TWIC program could help DHS ensure that the TWIC program is more effective and cost-efficient than existing measures or alternatives at enhancing maritime security. Among other things, GAO recommends that DHS assess TWIC program internal controls to identify needed corrective actions, assess TWIC's effectiveness, and use the information to identify effective and cost-efficient methods for meeting program objectives.
DHS concurred with all of the recommendations.
The permanent national government of Iraq was established by a constitutional referendum in October 2005, followed by election of the first Council of Representatives (Parliament) in December 2005 and the selection of the first Prime Minister, Nuri Kamal al-Maliki, in May 2006. By mid-2006, the cabinet had been approved, and the government now has 34 ministries responsible for providing security and essential services including electricity, water, and education for the Iraqi people (see fig. 1). The size of the ministries varies considerably in terms of staff numbers and budget. As of May 2007, the U.S. government ministry capacity development programs target 12 key ministries—10 civilian ministries are the focus of State and USAID programs, while the Ministries of Defense and Interior are targeted by DOD programs. These ministries contain 65 percent of the workforce and are responsible for 74 percent of the current budget (see table 1). According to U.S., international, and Coalition Provisional Authority (CPA) assessments and officials, years of neglect, a highly centralized decision-making system under the former regime, and looting in 2003 decimated Iraq’s government ministries. To address this problem, multiple U.S. agencies have conducted capacity development efforts at individual Iraqi ministries since 2003. The implementation of U.S. efforts to help build the capacity of the Iraqi national government over the past 4 years has been characterized by (1) multiple U.S. agencies leading individual efforts without overarching direction from a lead entity or strategic approach that integrates their efforts with Iraqi government priorities and (2) shifting timeframes and priorities in response to deteriorating security and U.S. embassy reorganization. State, through the U.S. 
Embassy Baghdad’s Iraq Reconstruction Management Office (IRMO) began implementing a number of 1-year projects intended to jump start capacity development in 2006 at the 10 civilian ministries designated as key to enabling the Iraqi government to sustain its reconstruction and deliver essential services to the Iraqi people. It also targeted other national level organizations, including the Prime Minister’s office and anticorruption entities. USAID focused primarily on implementing a medium-term effort to improve the public administration capabilities of the Iraqi government. DOD conducted relatively intensive capacity development efforts at the ministries of Defense and Interior. However, the lack of a lead entity to provide direction and an overall plan contributed to the three agencies developing separate metrics to assess and track the capacity levels of ministry functions common to all ministries and blurred the distinction between the efforts of USAID and IRMO. Since January 2007, moreover, capacity development efforts have been subject to changes in focus, agency roles, and organization, with the U.S. embassy and MNF-I seeking immediate improvements in ministry performance and results in areas such as budget execution. No single agency is in charge of leading and providing overall direction for U.S. ministry capacity development efforts. As of May 2007, six U.S. agencies were implementing about 53 projects at individual ministries and other national Iraqi agencies. State, USAID, and DOD are leading the largest number of programs with funding allocations totaling about $169 million at individual ministries and other national Iraqi government agencies. As of May 1, 2007, about 384 U.S. military, government, and contractor staff from these 3 agencies were working with the ministries and were implementing or completing capacity development projects. 
State advisory teams led by the embassy’s senior consultants were assisting capacity development efforts at the 10 key civilian ministries— the Ministries of Oil, Electricity, Planning, Water, Health, Finance, Justice, Municipalities and Public Works, Agriculture, and Education. These teams, ranging in size from 20 positions for the Ministry of Oil and 18 each for the Ministries of Finance and Electricity to 3 for the Ministry of Agriculture, typically interact with the minister, deputy minister, or department director levels, according to State officials. State also leads efforts to strengthen the capacity of three national Iraqi anticorruption entities—the Commission for Public Integrity (CPI), the Board of Supreme Audit (BSA), and the government’s 29 ministerial Inspectors General. As of early May 2007, State, through IRMO and its successor organizations in the Baghdad embassy, had 23 capacity development projects worth over $50 million completed, contracted, or under way. These projects ranged from supplying and installing news media equipment in the prime minister’s press center to providing subject matter experts to mentor, train, and assist Iraqi staff in their areas of expertise in the Ministries of Water and Electricity. See Appendix II for the list of State-led capacity development projects. USAID conducts a number of ministry capacity development efforts, primarily through its 3-year contract with Management Systems International, Inc. (MSI). For example, MSI’s Arabic-speaking staff provide public administration training and other support to the Ministry of Planning’s National Center for Consultancy and Management Development (NCCMD) and other regional civil service training centers, using a “train the trainer” approach. 
MSI has additional advisors working with the Council of Ministers’ Secretariat and six ministries to create, among other things, capacity development plans that will guide the development of public administration skills within the ministries. In addition to these medium-term projects, MSI trainers have supported USAID and embassy efforts to achieve more immediate improvements in ministry budgeting and procurement performance. As of June 2007, MSI had 34 international staff providing training to Iraqi government staff, according to a USAID official. USAID reported that MSI was also working with the Ministry of Planning to develop a pilot self-assessment process for possible future use by other ministries to identify their own capacity development needs and priorities. By July 2007, USAID reported that 855 Iraqi national government employees, including staff from all 10 key civilian ministries and the Ministry of Interior (MOI), had attended MSI-sponsored courses at Iraqi government training centers. They had been instructed in, among other things, budgeting, procurement, leadership and communications, information management, and anticorruption policies. Officials from three other ministries, the Prime Minister’s Office, and the Council of Ministers’ Secretariat were also attending MSI courses. USAID also has had a governance program contract with BearingPoint since 2003, which includes a project worth about $8 million to implement, and train government staff on the use of, an electronic ledger that records government payment and revenue transactions, called the Financial Management Information System (FMIS). FMIS is intended to serve as the primary financial transaction system for the entire Iraqi government. According to USAID and BearingPoint officials, BearingPoint’s Iraq staff had trained approximately 500 Iraqi government employees, as of February 2007, on how to use FMIS.
The coalition’s Multinational Security Transition Command-Iraq (MNSTC-I) is leading a substantial effort to develop the capacity of the two security ministries. As of March 2007, the U.S.-led coalition had assigned 215 military, civilian, and contracting personnel to the Ministry of Defense (MOD) and MOI to advise Iraqi staff about establishing plans and policies, budgeting, and managing personnel and logistics, among other things. According to MNSTC-I advisors, they work with their Iraqi counterparts on a daily basis to develop policies, plans, and procedures. For example, a senior advisor to the joint staff worked with MOD staff to develop the counterinsurgency strategy. He provided them with a planning template, reviewed their work, and suggested they add details such as the source of the threat, the risk level, and the forces required to counter threats. The advisors are embedded with MOD staff from a number of offices, including Plans and Policies and the Iraqi Joint Staff. According to the senior U.S. budget advisor at MOD, he and his team work directly with the budget director and his staff to prepare budget spreadsheets and ensure that the departments justify their funding requests. MNSTC-I advisors were also working with Iraqi officials at MOI at all levels in the ministry, although they are not embedded in the ministry to the same degree as MNSTC-I’s MOD advisors. Among other efforts, these advisors are helping MOI develop processes for vetting Iraqi security forces, including collecting and storing biometric data; establishing an identification card system; and establishing a personnel management database that will house inventory, payroll, human resource, financial, and budget data. Table 2 provides additional details on State, USAID, and DOD efforts. Two factors help explain the lack of overall direction and a lead agency for U.S. capacity-development efforts. First, from their inception in 2003, U.S.
efforts evolved without a plan for capacity development or the designation of a lead entity. Instead, U.S. agencies individually provided assistance to four successive governments in response to immediate needs, according to former CPA officials and senior advisors. In 2003, for example, the first programs at the ministries were initiated by the CPA’s senior advisors, who ran the ministries using U.S. funds and made personnel and budgetary decisions. According to State and former CPA officials, each senior advisor operated his or her ministry without an overall plan or overarching guidance; efforts to create an overall plan in late 2003 were dropped after the United States decided to transfer control of the ministries to a sovereign Iraq by mid-2004. In May 2004, the President issued National Security Presidential Directive 36, which delineated State and DOD responsibilities for the U.S. effort in Iraq. The directive made State, through Embassy Baghdad, responsible for all U.S. civilian activities in Iraq, but gave DOD’s Central Command (CENTCOM) responsibility for security and military operations. However, the directive indicated that, at an appropriate time, overall leadership for all U.S. efforts to support the organizing, training, and equipping of Iraqi security forces would be transferred to a security assistance organization under State’s authority. A second factor has been the delay in acting on recommendations from a 2005 State assessment of U.S. efforts in Iraq. That assessment reported that an integrated approach was essential for the success of U.S. efforts in Iraq. The assessment noted that programs had been implemented in an uncoordinated and sometimes overlapping fashion and that their efforts had been fragmented, duplicative, and disorganized.
In addition, this implementation had taken place without a clear understanding of the programs’ objectives or their contribution to the larger goal of transferring responsibility for reconstruction to the Iraqi government, according to USAID officials. Embassy documents and officials also stressed that the success of the program required the Iraqi government to take ownership of the capacity development effort. The assessment recommended a unified effort among State, DOD, and USAID, with the latter ultimately providing overall coordination and leadership. In late 2005, the U.S. mission initiated the National Capacity Development Program to address these concerns. However, instead of placing one agency in charge, the program divided responsibilities for capacity development among State, DOD, and USAID, with IRMO providing coordination. In particular, responsibility for building the capacity of MOI and MOD was given to the Multinational Security Transition Command-Iraq (MNSTC-I), which had previously taken action to advise and strengthen the MOI and help rebuild the MOD from scratch after the coalition disbanded it in 2003. Figure 2 illustrates the evolution of U.S. efforts to develop the Iraqi government over four successive governments. Since early 2007, the U.S. mission has made efforts to improve coordination among State, USAID, and DOD, such as the creation of the Joint Task Force on Capacity Development, the increased emphasis on efforts to help stabilize Iraq in the New Way Forward Strategy, and the creation of a joint State-DOD-USAID procurement action program to help the Iraqi government better execute its budgets. Nonetheless, the lack of a lead entity to provide direction and an overall plan contributes to the following issues: The agencies have developed separate sets of metrics. State, USAID, and DOD participated in an effort in late 2005 to develop a common set of metrics to measure the capacity of 10 key civilian and the 2 security ministries.
The agencies completed an initial draft assessment and, according to USAID officials, planned to conduct a comprehensive survey to regularly track progress. However, this effort was abandoned, according to State and USAID officials, and State and DOD developed their own metrics. In mid-2006, MNF-I began monthly assessments of the capacity of the security ministries to perform nine key functions, such as planning, logistics, and budgeting. IRMO completed a baseline assessment of the key civilian ministries in August 2006, using a new, more detailed ministry capacity assessment that gauges a similar list of nine core functions, including the ministries’ ability to plan, budget, and stem corruption. IRMO officials stated that they intended to update this assessment quarterly to gauge Iraqi progress in developing this capacity. However, State officials noted that questions about the usefulness of this assessment delayed efforts to update it prior to the IRMO’s termination in May 2007, and the embassy subsequently dropped plans to continue this effort in July 2007. The distinction between the efforts of USAID and IRMO became blurred. IRMO began implementing short-term efforts to jump start capacity development in 2006 using reallocated money from the fiscal year 2004 Iraq Relief and Reconstruction Fund (IRRF2) and the fiscal year 2006 emergency supplemental fund. In the meantime, USAID identified longer- term capacity development needs and beginning in 2007 helped the Iraqi ministries devise a strategic plan to meet their capacity development needs, according to a USAID official. Most of State’s short-term efforts did not begin until the end of October 2006, after USAID began its capacity development programs under its medium-term contract, because of delays in the formation of the Iraqi government and in receiving fiscal year 2006 funding. 
Moreover, USAID officials stated that they began implementing a number of short-term efforts earlier than originally planned to address more immediate shortfalls in the Iraqi government’s capacity to plan and execute ministry budgets. Since January 2007, the emphasis of U.S. capacity development efforts has shifted in response to continued security problems and the reorganization of the embassy’s reconstruction and assistance offices. The President’s January 2007 strategic review called upon the United States and the coalition to “refocus efforts to help the Iraqis build capacity in areas vital to the success of the government” during the 2007 surge of additional U.S. forces into Baghdad and Iraq. Moreover, according to embassy officials, the new commander of MNF-I placed greater emphasis on ways to help the Iraqi government immediately demonstrate that it can perform key functions to help stabilize Iraq and deliver essential services. Finally, the expiration of IRMO has diffused responsibility for conducting and overseeing the capacity development program. In early 2007, the U.S. mission refocused its capacity development efforts as part of the surge strategy associated with the President’s New Way Forward proposal. Rather than focusing on 12 civilian and security ministries, IRMO and MNSTC-I began targeting vital functions requiring more immediate improvement—such as budget execution, procurement and contracting—at 6 ministries (MOI, MOD, Planning, Finance, Oil, and Electricity), plus the Prime Minister’s office and the Council of Ministers’ Secretariat. Furthermore, USAID’s contracted trainers at the Iraqi government’s NCCMD also attempted to address more immediate government needs by directly training middle- and upper-level ministry staff. In May 2007, the U.S. embassy established a procurement assistance program at the Ministry of Planning to address pressing procurement problems, assisted by a DOD-provided team of U.S.
civilian procurement and contracting officials and Iraqi contractors. By June 2007, the U.S. embassy had identified efforts that could improve ministry performance by September 2007. The U.S. government’s efforts also have been affected by recent changes in the leadership and organization of the U.S. mission in Iraq. In February 2007, the embassy created a new office of the Coordinator for Economic Transition in Iraq (CETI) to work with the deputy prime minister and other senior officials to improve budget execution and to coordinate U.S. capacity development efforts to improve ministry performance immediately. In addition, on May 8, 2007, the Iraq Transition Assistance Office (ITAO) succeeded IRMO. According to an embassy official, many of IRMO’s senior consultants now report directly to other embassy offices or working groups, while ITAO coordinates senior consultants at four ministries delivering essential services (Oil, Water, Electricity, and Communications). This official also noted that ITAO is not expected to manage any additional capacity development projects. In July, the U.S. government appointed an ambassador to oversee the embassy’s economic and assistance operations. This includes responsibility for supervising and coordinating all U.S. short- and medium-term capacity development programs except for the training and security functions of MNSTC-I at the Ministries of Defense and Interior, and the Rule of Law Coordinator’s Office (which provides capacity development training for justice and law enforcement functions). State noted that this ambassador now oversees USAID, ITAO, and attachés from the Departments of Treasury, Energy, Agriculture, Health, and Commerce, as well as the embassy’s economic section. U.S. efforts to develop Iraqi ministerial capacity face four key challenges that pose a risk to their success and long-term sustainability.
First, Iraqi ministries have significant shortages of personnel who can formulate budgets, procure goods and services, and perform other vital ministry tasks. Second, Iraqi efforts to build a professional and nonpartisan civil service are complicated by partisan influence over the leadership and staffing of the ministries and infiltration by sectarian militias or political parties hostile to the U.S. government. Third, although the Iraqi government has taken measures to improve the capacity of its anticorruption entities with U.S. assistance, pervasive corruption impedes the effectiveness of U.S. efforts to develop ministry capacity. Fourth, numerous U.S. and coalition officials stated that the security situation remains a major obstacle to their efforts to help the Iraqis develop capacity in areas vital to the government’s success. Iraqi government institutions suffer from significant shortages of competent personnel with the skills to perform the vital tasks necessary to provide security and deliver essential services to the Iraqi people. According to State, CPA, and other U.S. government reports and officials, Iraq’s governing capacity has suffered from years of centralized control that led to the decay of core functions, such as strategic and policy planning, financial management, information technology, and human resources management. In neglecting the civil service for almost 30 years, the central government fostered poor management practices through incompetent staffing and leadership. Moreover, in 2003, the CPA removed Ba’athist party leaders from government and provided for the investigation and removal of even junior party members from upper-level management in government, universities, and hospitals. As a result, most of Iraq’s technocratic class was pushed out of government, according to the Iraq Study Group report. In 2005, a U.S. embassy document noted that the ministries lacked skilled mid-level managers who could make decisions.
The dearth of skilled personnel complicated U.S. and international efforts to engage Iraqis in capacity development efforts, according to a number of State, DOD, USAID, and international officials. At the same time, the coalition’s involvement in their budgeting and procurement processes may have hindered the ministries’ capacity to improve their own procurement and contracting systems and perform other vital services, according to MNSTC-I and embassy officials. A September 2006 U.S. embassy assessment noted that the government had significant human resource shortfalls in most key civilian ministries. The majority of staff at all but one of the ministries surveyed were inadequately trained for their positions, and a quarter of the ministries relied heavily on foreign support to compensate for their human and capital resource shortfalls. According to a senior IRMO advisor, the Minister of Planning had only one of the three deputies he needed and did not delegate authority or tasks because the ministry lacked skilled staff. This lack of trained staff made it difficult for coalition personnel to find ministry staff to work on capacity development. For example, officials from USAID and its implementing partner for capacity development stated that one of the key challenges to their program’s success was the small pool of Iraqi government employees from which to draw willing or qualified participants. Moreover, UN officials stated that one key ministry had few staff available with whom to meet when they visited. Furthermore, U.S. advisors in the defense ministry stated that most Iraqi staff lacked basic computer and information technology skills and often avoided making decisions by referring problems to higher levels. The lack of trained staff has particularly hindered the ability of the key government ministries to develop and execute budgets. U.S.
and international officials noted that the lack of competent staff contributed to poor budget execution rates among some of the key civilian ministries. While a U.S. Treasury assessment reported that 8 of 12 key ministries had spent more than half of their 2006 budgets by the end of December 2006, the entire national government had executed just 17 percent of its projected 2006 capital goods expenditures by the end of the year (see fig. 3). U.S. and coalition officials noted that the inability of the Iraqi government to execute its budget jeopardized the U.S. transition strategy and capacity development objectives and prompted U.S. officials to bypass ineffective Iraqi government procurement systems in order to procure equipment and supplies more quickly. In December 2006, U.S. advisors began assisting the Ministries of Defense and Interior in procuring needed equipment for their security forces from the United States through the foreign military sales (FMS) program. While available data from the government of Iraq and analysis from U.S. and coalition officials show that spending has increased compared with spending in 2006, a September 2007 GAO report noted that a large portion of Iraq’s $10 billion capital projects and reconstruction budget in fiscal year 2007 will likely go unspent. Iraq’s government confronts significant challenges in staffing a professional and nonpartisan civil service and addressing militia infiltration of key ministries. Moreover, U.S. officials noted that affected ministries are less responsive to U.S. government capacity development efforts. A DOD report notes that many Iraqi ministry staff were selected because of their partisan affiliation. We further reported in January 2007 that the Iraqi civil service remains hampered by staff whose political and sectarian loyalties jeopardize the civilian ministries’ ability to provide basic services and build credibility among Iraqi citizens, according to U.S. 
government reports and international assessments. The DOD report further stated that government ministries and budgets are sources of power for political parties, with ministry positions rewarded to party cronies for political loyalty. According to U.S. officials, this use of patronage can hinder capacity development because it leads to instability in the civil service as many staff are replaced whenever the government changes or a new minister is named. As of early August 2007, for example, 15 of the 37 Iraqi cabinet members had withdrawn from Prime Minister Maliki’s government. In April 2007, six Sadrist ministers announced their resignation in protest against the continued presence of coalition forces, and five of their seats remain vacant as of August 2007. In early August, six Sunni ministers resigned and three additional ministers announced they would boycott cabinet meetings. Some Iraqi ministries under the authority of political parties hostile to U.S. goals use their positions to pursue partisan agendas that conflict with the goal of building a government that represents all ethnic groups. Moreover, U.S. military advisors to one of the security ministries note that Iraqi intelligence organizations are particularly hindered by infiltration because their officials believe they cannot execute intelligence operations for fear of betrayal by their colleagues. For instance, DOD reports that militia influence affects every component of the Ministry of Interior. In particular, the Ministry has been infiltrated by members of the Supreme Islamic Council of Iraq or its Badr Organization and Muqtada al-Sadr’s Mahdi Army. The Mahdi Army often operates under the authority or approval of Iraqi police to detain, torture, and kill Sunni civilians. Until late April 2007, the Ministries of Agriculture, Health, Civil Society, Transportation, Governorate Affairs, and Tourism were led by ministers loyal to al-Sadr, who provided limited access to U.S. officials. U.S.
embassy officials noted that the effectiveness of U.S. programs is hampered by the presence of unresponsive or anti-U.S. officials. Several U.S. embassy officials noted that one of the key ministries targeted by U.S. capacity development and budget execution efforts was particularly unresponsive to U.S. efforts to reform and improve its processes. For example, a USAID official stated that no staff from this ministry had attended USAID-sponsored budgeting, procurement, and other public management training at the National Training Center as of February 2007. Furthermore, while a senior U.S. advisor noted his frequent contacts with this minister, the minister is affiliated with the Supreme Council for Islamic Revolution in Iraq, and his level of cooperation with U.S. capacity development efforts remains limited. According to a State document, widespread corruption undermines efforts to develop the government’s capacity by robbing it of needed resources, some of which are used to fund the insurgency; by eroding popular faith in democratic institutions seen to be run by corrupt political elites; and by spurring capital flight and reducing economic growth. In addition, an IRMO document noted that corruption is affecting the ability of critical ministries to deliver essential services. According to an IRMO assessment, one-third of the civilian ministries surveyed had a problem with “ghost employees” (i.e., nonexistent staff listed on the payroll). In addition, the procedures to counter corruption adopted at all but one of the civilian ministries surveyed were assessed as either only partly effective or ineffective. Similar problems existed in the security ministries, according to two 2007 DOD reports. Efforts to help the Iraqi government develop the capacity of its anticorruption entities have had mixed results.
On the one hand, the government has made progress in developing its three main anticorruption bodies—the Commission for Public Integrity (CPI), the Board of Supreme Audit (BSA), and the inspectors general assigned to each ministry. According to U.S. officials, the government also has made progress developing the courts necessary to investigate and prosecute government corruption with the assistance of the U.S. government and its coalition and international partners. Moreover, the Ministry of Finance approved funding to increase the number of inspector general staff at the Ministry of the Interior by 1,000 during 2007. The U.S. embassy also created the Office of Accountability and Transparency (OAT) to help the Iraqis develop a national anticorruption strategy, identify capacity development needs, and combat money laundering. It also helped the government initiate its Joint Anti-Corruption Council (JACC) in February 2007, which brings together the primary anticorruption entities under the leadership of the Prime Minister. On the other hand, Iraq’s anticorruption entities face challenges. For example, in October 2007, the head of Iraq’s Commission for Public Integrity testified that violence, intimidation, and personal attacks were a main obstacle to the Commission’s work. He stated that 31 of his staff had been assassinated since the establishment of the Commission and that some of the staff and their family members had been kidnapped or detained. Another challenge is the existing legal structure. According to the Special Inspector General for Iraq Reconstruction, Article 136(b) of Iraq’s Criminal Code is a structural obstacle impeding Iraq’s anticorruption efforts. This provision allows any Iraqi minister to grant by fiat complete immunity from prosecution to any ministry employee accused of wrongdoing.
The Inspector General also stated that an order issued by the Prime Minister this past spring requires Iraqi law-enforcement authorities to obtain permission from the Prime Minister’s Office before investigating current or former ministers. Numerous U.S. and coalition officials stated that the security situation remains a major obstacle to their efforts to help Iraqis develop capacity in areas vital to the government’s success. The high level of violence hinders U.S. advisors’ access to their counterparts in the ministries, directly affects the ability of ministry employees to perform their work, and hinders the effectiveness of U.S. capacity development programs, according to these officials. State and USAID efforts are affected by the U.S. embassy security restrictions imposed on their movement. Embassy security rules limit, and in some cases bar, U.S. civilian advisors from visiting the ministries outside the Green Zone. For example, the senior IRMO finance advisor noted that his team has regular access to the Finance Minister, who is located in the Green Zone. However, his team cannot visit the Ministry of Finance outside the Green Zone and has limited contact with ministry officials. Moreover, efforts to complete the installation of the FMIS stopped after a British BearingPoint contractor and his security team were kidnapped from the Ministry of Finance in May 2007. Nevertheless, according to a State cable, an embassy organizational and staffing review concluded in late May 2007 that the embassy’s security rules were too restrictive for embassy staff to perform their work, leading the ambassador to recommend that the embassy adopt less restrictive military security standards. The security situation also complicates the capacity development efforts of the MNSTC-I advisors to the security ministries. A U.S.
MNSTC-I advisor noted that the MOI headquarters is 20 minutes from the Green Zone and is particularly unsafe because sectarian militias control different floors of the building and differ in the degree to which they are hostile to the coalition forces. As a result, U.S. advisors have to be accompanied by two armed U.S. guards while visiting their Iraqi counterparts and must leave certain offices and departments no later than 10 p.m. The MOD, which is in the Green Zone, is a comparatively safe work environment for the embedded DOD advisors. International officials noted that about half of Iraqi government employees are absent from work daily; at some ministries, those who do show up work only 2 to 3 hours a day for security reasons. U.S. and UN officials stated that, while the Ministry of Planning has a relatively skilled workforce, the security situation seriously hinders its ability to operate. These officials noted that 20 directors general (department heads or other senior officials) in the ministry had been kidnapped, murdered, or forced to leave the ministry in the 6 months prior to February 2007. One international official stated that violence is also affecting efforts to build capacity in the university system from which the government draws some of its expertise. She noted that about 360 university professors have been killed since 2003. The violence is also contributing to a brain drain within the Iraqi ministries as staff join growing numbers of refugees and internally displaced persons. According to a UN report, between March 2003 and June 2007, about 2.2 million Iraqis left the country and 2 million were internally displaced. According to U.S. and international officials, the flow of refugees exacerbates Iraqi ministry capacity shortfalls because those fleeing tend to be disproportionately from the educated and professional classes, thereby reducing the pool of qualified personnel from which the ministries can recruit.
For example, according to international officials, the Iraqi medical association estimated that half of Iraq’s 34,000 registered doctors had left the country by November 2006 and that over 2,000 of the remainder had been killed. Moreover, a November 2006 UN report estimated that at least 40 percent of Iraq’s professional class had left since 2003. The exodus of employees from the ministries limits U.S. efforts to develop ministry capacity. One Iraqi official complained that the skilled personnel selected for international capacity development training were more prone to leave government employment. The U.S. government is just beginning to develop an overall strategy for its capacity development efforts. GAO’s previous analyses of U.S. multiagency national strategies have found that an integrated strategy should include a clear purpose, scope, and methodology; delineation of U.S. roles, responsibilities, coordination, and integration; desired goals, objectives, and activities; performance measures; and a description of costs, resources needed, and risk. The three agencies leading capacity development efforts in Iraq, particularly MNSTC-I, have developed some of these elements for their individual programs at the ministries, but not as part of a unified strategy for all U.S. efforts. U.S. officials reported in January 2007 that the conditions and challenges facing U.S. capacity development efforts in Iraq have impeded a structured, traditional approach to capacity development, making it difficult to develop an overall strategy. Nonetheless, the need for an overarching capacity development strategy is clear given that the President has identified ministry capacity development as a key to success in Iraq, has called for greater integration of U.S. civilian and military efforts to develop Iraqi government capacity, and has requested at least $255 million in additional funding in fiscal year 2008 for these efforts.
Moreover, a January 2007 report by the Iraqi National Security Council identified the critical efforts and coordination needed at key civilian ministries to support the Ministries of Defense and Interior. The report also indicated that Iraqi ministries depend on each other and need to function as a unified government. In February 2007, State Department officials provided GAO with a three-page, high-level outline proposing a U.S. strategy for strengthening Iraqi ministerial capacity. This document was a summary with few details, and State officials have not provided GAO with a timeline for completing this overall strategy. A senior USAID official indicated that it is uncertain whether the high-level summary will be developed into a strategy, although the President has received $140 million in additional funding for these efforts for fiscal year 2007. The summary noted that the capacity development strategy would be guided by the April 2006 Joint Campaign Plan issued by Embassy Baghdad and the MNF-I. In addition, it stated that the U.S. government would assist the Iraqi government in strengthening the ministries’ capacity to perform core functions, such as developing sufficient long-term plans and policies, proper legal and regulatory frameworks, transparent financial systems, and effective technology. The summary also called for U.S. agencies to coordinate efforts and approaches. Finally, it called for U.S. agencies to plan these efforts in consultation with the Iraqi ministries and work with the ministries to determine their needs and priorities. GAO has previously identified the desirable elements of a strategy: a clear purpose, scope, and methodology; a delineation of U.S. roles, responsibilities, and coordination with other donor nations and international organizations, including the UN; desired goals, objectives, and activities; performance measures; and a description of costs, resources needed, and risk. U.S.
agencies have developed some of these elements in their programs for individual ministries but not as part of an overall U.S. strategy. Table 3 summarizes the key elements of a strategy, describes the status of the U.S. approach thus far, and cites practices by some agencies at individual ministries that could be incorporated into an overall U.S. strategy. Clear purpose, scope, and methodology. We found little evidence that the U.S. government has clearly defined the purpose, scope, and methodology for developing an overall strategy. Agencies have provided some limited information on why an overall strategy is needed, what it will cover, and how it will be developed. Although the high-level outline for the overall capacity development strategy provided bullets about the purpose of U.S. capacity development efforts, it did not define capacity development or other key terms. Furthermore, it did not provide the context for such a program, such as whether it drew upon lessons learned from previous USAID, World Bank, or other capacity development efforts. In terms of scope, the high-level summary indicated that the strategy would guide U.S. efforts to build capacity at the Prime Minister’s Office and the Iraqi ministries, but it did not identify specific ministries, determine which ministries were priorities, or explain how those priorities had shifted in 2007. In terms of methodology, U.S. officials indicated only that an interagency task force would develop the strategy, not how it would do so. U.S. roles, responsibilities, and coordination. The multiagency Joint Task Force on Capacity Development (JTFCD), established in October 2006, has helped U.S. agencies better delineate roles and responsibilities for ministry capacity development and better coordinate efforts. However, the high-level outline and other potential strategy documents we reviewed do not address how overall efforts are to be integrated and unified.
The JTFCD began cataloguing all U.S. capacity development efforts in late 2006. According to USAID officials, this effort helped inform U.S. agencies of each other’s work and helped identify responsibilities. The JTFCD has also helped coordinate efforts. For example, to avoid potential overlap, during a February 2007 JTFCD meeting, USAID worked out a way to allow officials from the security ministries to participate in budget training courses that were previously limited to the civilian ministries. However, the high-level outline and other planning documents we identified do not specify how the embassy, USAID, and MNSTC-I capacity development efforts will be unified and integrated, such as how MNSTC-I’s security cooperation office will be transitioned into an office within the embassy. Nor do they discuss a potential lead agency to continue overall capacity development efforts, as was proposed in 2005. Moreover, other efforts to improve cooperation with the UN and other international donor nations and organizations have encountered difficulties. For example, the outline states that U.S. efforts are to be coordinated with the Iraqi government and the international donor community through the Capacity Development Working Group. Chaired by the Minister of Planning, this group was intended to secure Iraqi government input and commitment to U.S., coalition, and other donor partner capacity development objectives at the civilian ministries, but the group did not meet for about a year after forming in late 2005 and has not met since February 2007. Appendix III provides more information on the UN, other donor partners, and international organizations that have conducted efforts to build the capacity of the Iraqi government since 2003. Desired goals, objectives, and activities. U.S. agencies have clearly identified the overall goals of capacity development at the Iraqi ministries, but most U.S. efforts lack clear ties to Iraqi priorities for all ministries.
According to a February 2007 U.S. embassy briefing, the desired end-state for capacity development efforts is clearly defined: to assist Iraq’s transition to self-sufficiency by enabling the government to provide security and rule of law, deliver essential services, and develop a market-driven economy through democratic processes. The U.S. embassy and MNSTC-I have also identified overall goals for Iraqi ministry capacity development, such as improving service delivery, improving accountability, and reforming leadership and management skills. Moreover, MNSTC-I has taken clear steps to incorporate Iraqi priorities for its efforts at MOD. MOD’s national defense priorities are stated in the Policy of the Ministry of Defense 2006-2011. This document, which was approved and signed by the Minister of Defense, specifies MOD’s mission, values, and priorities in areas such as finance, personnel, training, and logistics. According to U.S. advisors and documents, the Ministries of Health, Electricity, and Municipalities and Public Works have also demonstrated their commitment to U.S. objectives by developing capacity development organizations within each ministry to identify their specific needs and priorities. However, not all U.S. capacity development efforts are as clearly linked to Iraqi-identified needs and priorities, which may affect the sustainability of key U.S. capacity development efforts once they are turned over to the Iraqis. USAID’s capacity development plans were to help the Iraqis develop and administer ministry self-assessments to identify Iraqi needs and priorities. However, USAID officials stated in May 2007 that it was unclear when implementation of this critical effort would begin. Moreover, other efforts to secure greater Iraqi input beyond an ad hoc basis, such as the Capacity Development Working Group, have not succeeded.
A January 2007 SIGIR report found that ministry capacity efforts were being conducted “based upon individual understandings reached between the Iraqi ministers and U.S. agency officials,” raising questions about whether the U.S. had obtained adequate input and commitment from the Iraqi government. Performance measures. U.S. agencies implementing capacity development projects have not developed performance measures for all of their efforts, particularly outcome-related performance measures that would allow them to determine whether U.S. efforts at the civilian ministries have achieved both U.S. and Iraqi desired goals and objectives. The U.S. embassy did conduct a baseline assessment in August 2006 of the civilian ministries to gauge their capacity to plan, prepare an operating budget, and conduct key tasks rather than the progress or impact of ministry capacity efforts. The assessment was completed by U.S. senior advisors and included indicators such as whether a ministry had a strategic plan and the percentage of budgeted funds disbursed in the previous year. U.S. officials stated that an updated State assessment of the civilian ministries was scheduled for completion at the end of June 2007, but the embassy decided in July not to continue this effort, according to embassy officials. In comparison, MNSTC-I is developing metrics to measure the progress and impact of efforts at the security ministries. MNSTC-I began conducting monthly assessments of MOD and MOI in mid-2006. However, in April 2007, MNSTC-I officials stated that the Commanding General decided to retool the assessment in consultation with the Iraqi government to better gauge the results of U.S. efforts. Officials stated that monthly assessments are being conducted at the field level to determine whether the MOD and MOI are ensuring Iraqi security forces units are sufficiently manned, have required weapons, and are being paid. 
MNSTC-I officials stated that they also recently began conducting quarterly assessments to determine what tasks or processes at the ministries may need to be adjusted to achieve results in the field. For example, the new assessment might determine whether capacity development efforts help MOD recruit and retain enough troops to maintain manning requirements. Officials were not able to share the new assessments with us because they are still being developed. Future costs, resource needs, and risk. The overall strategy should also address the costs, priorities, and resources needed to achieve the end-state and how the strategy balances benefits, costs, and risks. Guidance on costs and resources needed, using a risk management approach, would assist Congress and implementing organizations in making resource decisions. Although U.S. agencies have provided data on U.S. funding for current capacity development efforts at the Iraqi civilian and security ministries, agencies have not identified the costs and resources needed beyond the budget requests for fiscal years 2007 and 2008. Moreover, they have not determined how much funding overall is necessary to achieve the stated long-term goal of a self-sufficient Iraqi government. Without these cost data, neither U.S. agencies nor Congress can reliably determine the cost of capacity development, which U.S. and international officials have noted is a long-term process. In addition, agencies have not provided information on how future resources will be targeted to achieve the desired end-state or, given the challenging situation in Iraq, how allocations balance benefits, costs, and risks, such as those associated with the four challenges identified above. U.S. programs to improve the capacity of Iraq’s ministries must address significant challenges if they are to achieve their desired outcomes. U.S. efforts lack an overall strategy: No lead agency provides overall direction, and U.S.
priorities have been subject to numerous changes. In addition, U.S. efforts confront shortages of competent personnel at the Iraqi ministries, sectarian influence over ministry leadership and staffing, and pervasive corruption. The risks are further compounded by the ongoing violence in Iraq, as U.S. civilian advisors have difficulties meeting with their Iraqi counterparts and skilled Iraqi professionals leave the country. U.S. agencies had provided $169 million to improve the capacity of Iraq’s ministries as of the end of 2006. Congress appropriated $140 million more in May 2007, and the administration has requested up to $255 million for fiscal year 2008. We believe that future U.S. investments must be conditioned on the development of a unified U.S. strategy that clearly articulates agency roles and responsibilities, delineates the total costs needed, addresses risks, and establishes clear goals and measurements. Given the risks U.S. agencies face in implementing capacity development in Iraq and the funds being requested, GAO recommends that State, in consultation with the Iraqi government, complete an overall integrated strategy for U.S. capacity development efforts. Key components of an overall capacity development strategy should include a clear purpose, scope, and methodology; a clear delineation of U.S. roles, responsibilities, and coordination, including the designation of a lead agency for capacity development; desired goals, objectives, and activities, based on Iraqi-identified priorities; performance measures based on outcome metrics; and a description of how resources will be targeted to achieve the desired end-state, balancing benefits, costs, and both internal risks (such as potential changes in cost, schedule, or objectives) and external risks (such as an increase in violence or militia influence).
Given the absence of an integrated capacity development strategy, it is unclear how further appropriations of funding for ministry capacity development programs will contribute to the success of overall U.S. efforts in Iraq. Congress should consider conditioning future appropriations on the completion of an overall integrated strategy incorporating the key components identified above. We provided a draft of this report to the Departments of Defense and State, and USAID. DOD did not provide comments. State provided written comments, which are reprinted in appendix IV. State also provided technical comments, which we incorporated where appropriate. USAID noted that its comments were incorporated into State’s written response. In commenting on a draft of this report, State commented that it recognized the value of a unified strategy. However, it noted its concern over our recommendation to condition future appropriations for capacity development on the completion of a strategy. State also noted the recent appointment of an ambassador to supervise all short- and medium-term capacity development programs. Moreover, it stated that a strategy is only one element in a complex process that needs to be tailored to the needs and priorities of each Iraqi ministry or government organization. We do not recommend stopping U.S. investment in capacity development; the $140 million in supplemental funding appropriated in fiscal year 2007 remains available for the agencies to continue their efforts. Rather, we recommend that Congress condition future funding on the development of an overall integrated strategy. We acknowledge that State named an ambassador to coordinate the embassy’s economic and assistance operations, including supervising civilian capacity development programs. However, this action occurred in August 2007, underscoring our point that U.S. capacity development efforts have lacked overall leadership and highlighting the need for an overall integrated strategy. 
Finally, our recommendation does not preclude U.S. agencies from tailoring capacity development efforts to meet each ministry's unique needs. A strategy ensures that a U.S.-funded program has consistent overall goals, clear leadership and roles, and assessed risks and vulnerabilities. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

The following are GAO's comments on the Department of State's letter dated September 10, 2007, and USAID's letter dated September 13, 2007.

1. We do not recommend stopping U.S. investment in capacity development. In fact, the President received an additional $140 million for capacity development efforts in May 2007 from the fiscal year 2007 supplemental funds. However, we recommend that Congress consider conditioning the administration's request for up to $255 million in additional funds for fiscal year 2008 on the completion of an overall integrated strategy incorporating the key components identified in the report. Without these key components, Congress may lack the critical information needed to weigh risks and judge U.S. costs, progress, and results of the current capacity development programs.

2. We changed our draft report to acknowledge this July 2007 change in the U.S. Embassy Baghdad's organizational arrangements for the conduct of capacity development programs. However, this initiative is relatively new; it is too soon to evaluate whether this action has helped address coordination and leadership. This recent action underscores our point that U.S.
capacity development efforts have lacked overall leadership and highlights the need for an overall integrated strategy. This is particularly so since capacity development efforts for rule of law and the security ministries are under separate leadership. 3. Our recommendation does not preclude U.S. agencies from tailoring capacity development efforts to meet each ministry’s unique needs. A strategy ensures that a U.S.-funded program has consistent overall goals, clear leadership and roles, and assessed risks and vulnerabilities. 4. We did not discuss project-level capacity development efforts at length as the focus of this engagement was on the ministry-level efforts deemed critical by State; however, we did note the substantial contributions made at the project level by the U.S. Army Corps of Engineers’ Gulf Region Division. 5. This report notes that various U.S. agencies pursued separate ministry- level capacity development efforts at various Iraqi ministries between 2004 and 2005 without the benefit of an overall strategy. We also note in this report that the U.S. embassy itself advocated in 2005 that an integrated strategy be adopted with a lead agency in charge, using as a justification its finding that capacity development efforts up to that time had been implemented in an uncoordinated and sometimes overlapping fashion and that its efforts had been fragmented, duplicative, and disorganized. 6. We note the comparative success MNSTC-I has achieved with its relatively intensive efforts at the security ministries. For example, we note that a senior MNSTC-I advisor worked with MOD staff to develop its counterinsurgency strategy. We also note in table 3 and elsewhere MNSTC-I’s comparative success in developing some aspects of a unified, integrated strategy. 7. We have acknowledged the importance of addressing the shortcomings in Iraqi budget execution and procurement procedures in this and previous reports and testimonies. 
For example, our September 2007 report on whether Iraq had met 18 key benchmarks stated that the government of Iraq has had difficulty spending its resources on capital projects and that some of the reported improvements in budget execution stem from funding releases to the provinces. Our September 2007 report also noted that a “commitment” in Iraq is similar to an obligation under the U.S. budget process. These commitments are not expenditures and may not be reliable indicators of future spending by ministries and provinces. Moreover, the government of Iraq’s official expenditure data, as reported by the Ministry of Finance, does not include commitments or obligations. Finally, the report notes that it is unclear whether government funds committed to contracts are a reliable indicator of actual spending. 8. We contend that budget execution rates may not be one of the best measures of effective capacity development. Our September 2007 report noted that, given the capacity and security challenges currently facing Iraq, many contracts that have government funds committed to them may not be executed and thus would not result in actual expenditures. Moreover, until more complete data on actual capital project expenditures become available, it may be premature to conclude that U.S. efforts to improve budget execution have had a “highly significant impact” on ministry capacity. We are currently conducting a review of U.S. efforts to help Iraq spend its budget and will issue a report at a later date. 9. This report acknowledges the contributions of the JTFCD to coordinating and cataloging all U.S. capacity development efforts in late 2006. However, the draft planning documents we identified do not specify how the JTFCD or other coordination groups will integrate Embassy, USAID, and MNSTC-I capacity development efforts. 
Further, we noted in the report that the Capacity Development Working Group, chaired by the Minister of Planning, was intended to secure Iraqi government input and commitment to U.S. and coalition capacity development objectives at the civilian ministries. However, the group did not meet for about a year after forming in late 2005 and has not met since February 2007. The Ministerial Engagement Teams are a coordinating arrangement introduced in mid-2007; it is too soon to evaluate their activities or results. 10. This report does not view short-term activities as a negative outcome. We do note that IRMO originally justified conducting short-term efforts in an attempt to jump-start capacity development in 2006 using more readily available funding. These programs would complement and support a follow-on USAID effort to conduct longer-term capacity development programs. Most of State’s short-term efforts did not begin until after USAID began its capacity development programs under its medium-term contract because of delays in the formation of the Iraqi government and in receiving fiscal year 2006 funding. Moreover, USAID officials stated that they began implementing a number of short-term efforts earlier than originally planned to address more immediate shortfalls in the Iraqi government’s capacity to plan and execute ministry budgets. 11. We addressed these elements as they were among the core ministry functions identified as common to all the key ministries. GAO reviewed how these common functions were defined and what metrics were used by State and DOD to track these elements in their assessments of the status of key ministries’ capacity development. We also noted the existence of the scholarship program as an example of a USAID capacity development program in table 2 but did not otherwise discuss it. In this report, we (1) assess the nature and extent of U.S. 
efforts to develop the capacity of the Iraqi ministries, (2) assess the key challenges to these efforts, and (3) assess the extent to which the U.S. government has an overall strategy for these efforts that incorporates key elements. For the purposes of this review, which we undertook under the Comptroller General's authority to conduct reviews on his own initiative, we focused on key U.S. capacity development efforts initiated or ongoing in fiscal years 2006-2007, primarily those efforts begun after the start of the National Capacity Development Program in late 2005, the U.S. Mission Baghdad's attempt to focus and better coordinate U.S. efforts. To describe these programs, we reviewed U.S. government documents including the Department of State's (State) quarterly section 2207 reports to Congress from October 2004 to April 2007 on the use of Iraq Relief and Reconstruction Funds; State's quarterly section 1227 reports to Congress from April 2006 to April 2007 on current military, diplomatic, political, and economic measures undertaken to complete the mission in Iraq; the U.S. Agency for International Development (USAID) contract awarded in July 2006 to Management Systems International, Inc., Building Recovery and Reform through Democratic Governance National Capacity Development Program; reports on USAID's implementation of the Iraqi Financial Management Information System under the Economic Governance Project II; the U.S. Embassy-Baghdad Joint Task Force for Capacity Development's catalogue of U.S. capacity development efforts from April 2007; the Department of Defense's (DOD) quarterly reports to Congress, Measuring Stability and Security in Iraq, from July 2005 to June 2007; and Multi-National Security Transition Command-Iraq's 2007 Campaign Action Plan.
We reviewed the results of the Iraq Reconstruction Management Office's (IRMO) September 2006 Ministerial Capacity Metrics Assessment and deemed the results sufficiently reliable to provide a broad indication of the strengths and weaknesses of the ministries surveyed. We found the procedures followed by IRMO in creating the assessment, compiling the results, and assessing data reliability to be reasonable. However, the data had significant limitations. For example, a number of subquestions were not answered for all ministries. We also examined the federal government's fiscal years 2006, 2007, and 2008 regular and supplemental budget requests for State, USAID, and DOD for capacity development efforts for the Iraqi government. Moreover, we reviewed previous GAO reports and reviews and periodic reporting from the Office of the Special Inspector General for Iraq Reconstruction (SIGIR), including its January 2007 report Status of Ministerial Capacity Development in Iraq. We also interviewed key U.S. government officials from State, USAID, DOD, and relevant contractor officials in Washington, D.C.; Iraq; and Jordan. We conducted interviews over the telephone and made site visits to Iraq and Jordan in February 2007. To assess key challenges to U.S. capacity development efforts, we reviewed and analyzed the documents mentioned above and other relevant plans, reports, and data from the Iraqi government. We designated the identified challenges as key based on evidence presented in previous GAO reports, the frequency with which they were cited by U.S. officials and documents, and the importance those sources accorded their impact on U.S. capacity development objectives. We interviewed U.S.
government officials from the Departments of State, Defense, Treasury, and Justice and the U.S. Agency for International Development in Washington, D.C.; Iraq; and Jordan; the Multinational Force-Iraq (MNF-I); and other donors, including officials from the United Nations and its associated relief and development agencies, the World Bank, the European Union, the United Kingdom's Department for International Development (DFID), and the Canadian International Development Agency. We also analyzed data on Iraq's 2006 and 2007 budgets and 2006 budget execution through December 2006, which were provided to us by the U.S. Treasury from Iraq's Ministry of Finance, and found that these data were sufficiently reliable for our purposes. We also interviewed relevant U.S. government officials or contractor officials working with BearingPoint; Management Systems International, Incorporated (MSI); and Military Professional Resources, Incorporated (MPRI). Finally, to examine the extent to which the U.S. government has an overall strategy for these efforts that incorporates key elements, we reviewed and analyzed, in addition to the abovementioned documents, the July 2007 Joint Campaign Plan issued by Embassy Baghdad and MNF-I; the Multi-National Security Transition Command-Iraq 2007 Campaign Action Plan; the November 2005 National Strategy for Victory in Iraq; the President's February 2007 New Way Forward strategy for Iraq; the Iraqi government's National Development Strategies for 2005-2007 and for 2007-2010; State's February 2007 strawman (a three-page, high-level summary) for a U.S. government strategy to strengthen Iraqi ministerial capacity; and the September 2006 U.S. Embassy-Baghdad assessment of the capacity of key Iraqi civilian ministries to perform core functions. We interviewed key U.S. officials from State, USAID, DOD, and other relevant agencies.
We reviewed previous GAO reports that identified the desirable characteristics of a national strategy, including Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism; Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals; Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals and Overcome Challenges; Intellectual Property: Initial Observations on the STOP Initiative and U.S. Border Efforts to Reduce Piracy; and Intellectual Property: Strategy for Targeting Organized Piracy (STOP) Requires Changes for Long-term Success. We analyzed the information we obtained on U.S. capacity development efforts to identify components of an overall strategy that the three agencies leading these efforts—State, USAID, and MNSTC-I—have developed to date. We conducted our work from August 2006 through August 2007 in accordance with generally accepted government auditing standards.

• Increase the ability of the Iraqi Government Ministries and Prime Minister's Office to conduct business with each other on an immediate basis.
• Help PMO staff develop the capacity to initiate research, coordinate new legislative initiatives, and track legislation.
• Provide specialists to help develop and implement appropriate policies, strengthen core public administration functions, and create training systems to improve delivery of services.
• Support efforts to reform and improve the investment climate and tax policies, develop a national anticorruption strategy, and promote citizen participation in government.
• Provide approximately 200 training modules to each of the 10 key ministries.
• Establish an office in the Ministry of Planning to facilitate budget procurement across all Iraqi ministries.
• Help the Ministry of Planning implement the capital budget and reform procurement in coordination with the international community and subject matter experts.
• Help Iraqi ministries execute budgets and provide metrics for leadership.
• Enable Ministry of Finance staff to perform better through strategic budget development, organizational modernization, regulation development, and future sector privatization.
• Provide 10 subject matter experts (SMEs) to help the ministry develop policies, especially for budget execution.
• Enable the Ministries of Finance, Planning, and other ministries' staffs to better perform core ministerial functions and tasks through strategic budget development, future-year budgeting, modernization, and development of regional automation and regional fiscal accounting commonalities.
• Produce Sector Master Plans for the ministries to better focus resources on medium-to-long-term production, sustain outputs, and meet the goals necessary to deliver essential services to the public.
• Develop the ministry's capacity to provide advice and technical support for draft legislation that promotes individual freedoms, human rights, and the rule of law within the context of the Iraqi constitution.
• Provide software to increase CPI capability to organize and cross-reference investigative data, making working practices more efficient and anti-corruption investigations more effective.
• Develop a tool for the BSA and each of the 29 ministerial Inspectors General to assess core needs.
• Conduct sector strategic planning from May 2007 to May 2008.
• Provide specific training on international and domestic water laws and policies to ministry employees responsible for formulating, negotiating, interpreting, or applying water laws and policies.
• Provide training in essential skills, such as strategic and contingency planning, contracting management, and human resource management.
• Increase the organizational, accountability, inventory management, and technology capacities of Kimadia (a state company for marketing medical appliances and equipment in Iraq).
• Install a computer network within the ministry to help it manage educational activities, improve accountability, and capture and report educational data.
• Provide nine staff with a mixture of skills to assist a variety of national-level programs, including establishing guidelines for transport sector development, addressing information technology standards and national identity card requirements, and mentoring management staff to build sustainable Iraqi expertise.

This list includes only programs contracted, under way, or completed as of April 2007.

The United Nations, the European Union, the United Kingdom, and the Canadian government also have conducted efforts to develop the capacity of the Iraqi government since 2003. The United Nations Assistance Mission for Iraq coordinates and oversees projects with capacity development components implemented by over a dozen UN agencies in Iraq. Most of these projects are funded through the International Reconstruction Fund Facility for Iraq (IRFFI)/United Nations Development Group Iraq Trust Fund (ITF). One effort, implemented under the UN Development Program (UNDP) governance program, provided basic management skills training for Ministry of Municipalities and Public Works employees at a cost of $3 million in February 2007. The UN International Organization for Migration in Iraq began implementing the Capacity Building in Migration Management Project in August 2004 with the support of the Australian government. This ongoing project includes helping the Ministry of Interior establish a training center for immigration officers, with an information technology lab and a library with resource materials. The European Union (EU) provided about 16 million euros through the IRFFI/World Bank ITF from 2003 to 2005 for two World Bank capacity development projects.
These two projects included efforts to train Iraqi staff at 19 ministries in topics such as policy reform, World Bank procurement policies, and basic MS Excel skills. The EU also provided about 42 million euros through the IRFFI/UN ITF from 2003 to 2006 for governance and civil society projects, including efforts to train Iraqi government officials in reconstruction management. The United Kingdom's Department for International Development (DFID) has conducted capacity development efforts, including a $23 million project begun in 2005 to provide consultants who train and assist Ministry of Interior (MOI) staff in such areas as procurement and legal and regulatory frameworks. Another $25 million effort that began in 2005 aims to provide technical and policy advice for the Ministry of Finance in areas such as subsidy reform and budget and expenditure management. DFID has coordinated its efforts with U.S. efforts by participating in meetings of the U.S. Joint Task Force for Capacity Development. The Canadian government has funded about $14 million worth of ministry capacity efforts for implementation from 2005 through 2008, including human and minority rights training for Ministry of Human Rights employees and assistance for a marshland restoration project with the Ministry of the Environment, the Ministry of Water Resources, and an Iraqi university. As of February 2007, trainers from 11 nations, including Iraq, provided basic instruction and more advanced administrative courses to develop the capacity of the Iraqi police at the Jordan International Police Training Center. Between October 2003 and February 2007, 50,300 Iraqi police graduated from the center, according to the training center director. Nations contributing instructors included Australia, Austria, Belgium, Canada, Croatia, Finland, Jordan, Slovenia, the United Kingdom, and the United States.
In addition, Tetsuo Miyabara, Assistant Director; Daniel Cain; Lynn Cothern; Martin De Alteriis; Etana Finkler; Elisabeth Helmer; B. Patrick Hickey; Bruce Kutnick; and Mary Moutsos made key contributions to this report.

Iraq's ministries were decimated following years of neglect and centralized control under the former regime. Developing competent and loyal Iraqi ministries is critical to stabilizing and rebuilding Iraq. The President received $140 million in fiscal year 2007 funds and requested an additional $255 million in fiscal year 2008 to develop the capacity of Iraq's ministries. This report assesses (1) the nature and extent of U.S. efforts to develop the capacity of the Iraqi ministries, (2) the key challenges to these efforts, and (3) the extent to which the U.S. government has an overall integrated strategy for these efforts. For this effort, GAO reviewed U.S. project contracts and reports and interviewed officials from the Departments of State (State), Defense (DOD), and the United States Agency for International Development (USAID) in Baghdad and Washington, D.C. Over the past 4 years, U.S. efforts to help build the capacity of the Iraqi national government have been characterized by (1) multiple U.S. agencies leading individual efforts, without overarching direction from a lead entity that integrates their efforts; and (2) shifting timeframes and priorities in response to deteriorating security and the reorganization of the U.S. mission in Iraq. First, no single agency is in charge of leading the U.S. ministry capacity development efforts, although State took steps to improve coordination in early 2007. State, DOD, and USAID have led separate efforts at Iraqi ministries. About $169 million in funds were allocated in 2005 and 2006 for these efforts. As of mid-2007, State and USAID were providing 169 capacity development advisors to 10 key civilian ministries and DOD was providing 215 to the Ministries of Defense and Interior.
Second, the focus of U.S. capacity development efforts has shifted from long-term institution-building projects, such as helping the Iraqi government develop its own capacity development strategy, to an immediate effort to help Iraqi ministries overcome their inability to spend their capital budgets and deliver essential services to the Iraqi people. U.S. ministry capacity efforts face four key challenges that pose a risk to their success and long-term sustainability. First, Iraqi ministries lack personnel with key skills, such as budgeting and procurement. Second, sectarian influence over ministry leadership and staff complicates efforts to build a professional and non-aligned civil service. Third, pervasive corruption in the Iraqi ministries impedes the effectiveness of U.S. efforts. Fourth, poor security limits U.S. advisors' access to their Iraqi counterparts, preventing ministry staff from attending planned training sessions and contributing to the exodus of skilled professionals to other countries. The U.S. government is beginning to develop an integrated strategy for U.S. capacity development efforts in Iraq, although agencies have been implementing separate programs since 2003. GAO's previous analyses of U.S. multiagency national strategies demonstrate that such a strategy should integrate the efforts of the involved agencies with the priorities of the Iraqi government, and include a clear purpose and scope; a delineation of U.S. roles, responsibilities, and coordination with other donors, including the United Nations; desired goals and objectives; performance measures; and a description of benefits and costs. Moreover, it should attempt to address and mitigate the risks associated with the four challenges identified above. U.S. ministry capacity efforts to date have included some but not all of these components. For example, agencies are working to clarify roles and responsibilities. However, U.S. 
efforts lack clear ties to Iraqi-identified priorities at all ministries, clear performance measures to determine results at civilian ministries, and information on how resources will be targeted to achieve the desired end-state.
IHS, an operating division of HHS, is responsible for providing health services to federally recognized tribes of American Indians and Alaska Natives. According to IHS, in 2008, it provided health services to approximately 1.9 million American Indians and Alaska Natives from more than 562 federally recognized tribes. As an operating division of HHS, IHS is included in the agency's consolidated financial statement and has not been audited independently since 2002. IHS is divided into 12 regions with 161 service units throughout the country. Service units may contain one or more health facilities, including hospitals, health centers, village clinics, health stations, and school health centers. There are 124 IHS-operated health facilities and 522 tribally operated health facilities. The IHS budget appropriation in 2008 was $3.39 billion. Overall, over 40 percent of the IHS appropriation is administered by tribes, primarily through various contracts and compacts with the federal government.

We found that property continues to be lost or stolen at IHS at an alarming rate. From October 2007 through January 2009, IHS identified about 1,400 items with an acquisition value of about $3.5 million that were lost or stolen agencywide. These property losses are in addition to what we identified in our June 2008 report. Our full headquarters inventory testing and our random sample testing of six field offices estimated that over a million dollars' worth of IT equipment was lost, stolen, or unaccounted for, confirming that property management weaknesses continue at IHS. In addition, as of March 2009, IHS headquarters and many IHS regions were still reconciling their 2008 inventories. In addition to the $3.5 million reported as lost or stolen, IHS also had thousands of unreconciled and unaccounted for property items with an acquisition value of $14.5 million missing about 2 months after conducting its 2008 inventory.
These unreconciled and unaccounted for items were largely concentrated at four field locations, where over 40 percent of inventory items were missing. Some of these items will likely be reported as lost or stolen. We analyzed IHS Report of Survey documents from fiscal years 2008 and 2009, covering the period of October 2007 through January 2009, for IHS headquarters, National Programs, and the 12 regions. These reports identified that about 1,400 items with an acquisition value of about $3.5 million were reported lost or stolen in a little over a year. Some of the more egregious examples of lost or stolen property during October 2007 through January 2009 on reports of survey include the following:

• An audiometer—a machine used for evaluating hearing loss—that was new and listed in "UNUSED-GOOD" condition, with an acquisition value of $961, was "put out for trash" at an Oklahoma location.
• A laboratory analyzer at a Navajo health care facility with an acquisition value of $37,000.
• A defibrillator with an acquisition value of $7,000 and over $13,000 in desktop and laptop computers that were new in June 2007 at a Tucson location.
• A telephone switch from National Programs in Albuquerque with an acquisition value of $25,500.
• A trailer with an acquisition value of $7,300 stolen from a Nashville Region Office parking lot over the weekend, when the security gates were broken and remained open.

We also found that about 2 months after conducting its 2008 inventory, IHS was still looking for about $14.5 million in items it identified as missing. Items that IHS continued to search for include the following:

• A 2002 ultrasound unit valued at $170,000, a 2003 X-ray mammography machine valued at $100,795, and a 2004 medication dispensing system valued at $168,285.
• A new pharmacy tablet counter with an acquisition value of $4,000 from a Washington location.
• A new electrocardiograph—a machine used to record the electrical activity of the heart—with an acquisition value of $4,000.
• Seven vital sign monitors from a South Dakota hospital purchased at $731 each.
• Multiple dental chairs from a Kansas location with acquisition values of $3,200 each.
• High-dollar-value IT equipment purchased in 2006, including a central processing unit with an acquisition value over $30,000 and two servers worth $29,000 and $12,000.
• A $14,000 John Deere tractor purchased in 2005.
• Unused IT equipment purchased in 2007, including laptops, desktops, an $11,000 server, and a television.

Our physical inventory testing results were similar to IHS's inventory results and confirmed lost, stolen, or unaccounted for property. Our full inventory testing at IHS headquarters identified that, of the 1,518 items tested that were on IHS's inventory records as of December 5, 2008, 126 items with an acquisition value of $216,000 (or about 8 percent of the items tested) were lost, stolen, or unaccounted for—including 13 computers purchased in the summer of 2008. These 126 missing items were in addition to the 35 assets that IHS stated were missing in its physical inventory ending September 2008. The types of equipment missing included digital cameras, laptops, PDAs, and cell phones. Furthermore, we performed limited testing on new purchases made in fiscal year 2008 at IHS headquarters. We analyzed 19 new purchases to determine if the items existed and were on IHS's books. We found that 10 of the 19 items that we tested were not in IHS's inventory records as of December 2008. In addition, IHS could not account for 7 of the 19 items—37 percent of the newly purchased equipment. We also identified examples of waste that we observed during our audit of IHS headquarters. During our exit conference discussions, IHS agreed some equipment may be underutilized. We identified the following examples of waste:

• One employee was issued a PDA but told GAO that he had not used it in 2 years.
• Another employee was issued a laptop and never used it.
- One user was assigned three laptops but used only one of them. The employee stated that one of the laptops was to be disposed of but provided no explanation for the third laptop.

We selected a probability sample of IT equipment inventory at six IHS field offices to determine whether the lack of accountability for inventory was confined to headquarters or present elsewhere within the agency. Our estimates are based on a probability sample of 250 items from a population of 6,085 IT equipment items worth over $19 million recorded in the property records at the six field locations. Similar to our finding at IHS headquarters, our sample results indicate that a substantial number of pieces of IT equipment such as laptops, desktops, and printers were lost, stolen, or unaccounted for. Specifically, we estimate that for the six locations, about 800 equipment items with an acquisition value of $1.7 million were lost, stolen, or unaccounted for. This amounts to about 13 percent of all the IT equipment at these six locations. Table 1 below summarizes the disposition of the 250 sampled IT items.

Weak "tone at the top" persists at IHS, with senior leadership failing to fully implement and enforce 8 of the 10 recommendations we made in June 2008. These failures strongly contribute to the continued loss and theft of property at IHS. Aside from issuing a memorandum from the IHS Director that restated and refined existing IHS policies, IHS has taken little action to provide assurance that employees are aware of and complying with property policies. One way to enforce policies is to hold individuals accountable. However, we found little evidence that IHS has held employees accountable for thousands of lost or stolen items worth millions of dollars. For example, in December 2008, the IHS executive in charge of the property group and other areas received a $13,000 performance award (8 percent of the executive's salary) from IHS senior leadership.
This award was granted 5 months after the July 2008 hearing exposed mismanagement of property under the executive's purview. By failing to hold this key property management official accountable, the IHS Director and senior managers missed an opportunity to communicate the seriousness of IHS property problems to the responsible official. Although IHS has taken steps to update policy and perform physical inventories, most of our recommendations were only partially implemented. Of the 10 recommendations, IHS has fully implemented 2 and has begun taking steps to implement the remaining 8. Table 2 shows the IHS actions and the status of implementing our recommendations. Although IHS took some steps to implement our recommendations, such as changing its policy on handling sensitive items to include BlackBerry devices regardless of the threshold, we identified the following examples of problems in fully implementing these corrective actions.

Investigating circumstances surrounding missing property. We saw little improvement in investigating incidents of lost or stolen property. Without these investigations, IHS remains unable to hold individuals financially liable. Of the 1,400 items with an acquisition value of $3.5 million reported as lost or stolen in IHS Reports of Survey for fiscal year 2008 through January 2009, IHS could provide only one example in which an employee was found to be financially liable for lost or stolen property. However, as of February 2009, the individual had still not reimbursed the government for the loss—4 months after he was found financially liable. We identified other examples where individuals were not held accountable:

- One employee who was assigned a laptop that was missing told IHS property managers that she could not remember what happened to the laptop. IHS wrote the laptop off its books in September 2008 without holding the employee responsible.
- A laptop was given to an employee in Oklahoma to use at home, but IHS did not issue a hand receipt. The employee left the agency and did not return the laptop. According to the Report of Survey, IHS did not hold the employee accountable because the employee had left the agency.
- A Phoenix employee's cellular phone was stolen after he left the phone on his desk overnight. The board of survey concluded that the individual was negligent in not properly safeguarding his cellular phone but recommended that no assessment of liability be made against the employee.
- According to a Portland property officer, a laptop was stolen from an employee's workstation. The workstation was accessible to the public and was not secured in accordance with HHS regulations. According to an IHS property official, the employee had several laptops assigned to him and did not know that the computer was missing. However, the board did not hold the employee financially liable for the missing laptop.

Enforcing annual physical inventories. Although IHS made progress by conducting a 100 percent physical inventory at IHS headquarters, National Programs, and all 12 regions for the first time in at least 4 years, improvements are needed in the timely reconciliation of shortages and updating of inventory records. Although physical inventories should be performed over a finite period, IHS officials performed extensive searches in an attempt to locate missing items before preparing Reports of Survey to write them off. As of March 2009, IHS headquarters property officials stated that IHS headquarters and many of the regions were still reconciling their 2008 physical inventories, which they stated were completed in September 2008. For example, we verified in December 2008 that IHS was able to find the Jaws of Life medical equipment reported as lost or stolen in September 2006, but for about 2 years these items were unaccounted for.
Furthermore, according to IHS property officials, a board of survey was established in March 2009 at IHS headquarters but has not yet determined what actions should be taken to finalize Reports of Survey in order to update inventory records. In fact, IHS headquarters has not completed a Report of Survey to finalize inventories since 2004.

Enforcing the use of hand receipts. HHS requires the use of hand receipts, known as HHS form 439, any time property is issued to an employee. This form should be retained by a property official so that property can be tracked at the time of transfer, separation, or change in duties, or when requested by the proper authority. By signing this form, an IHS employee takes responsibility for the government-issued equipment. In our last audit, we found that IHS headquarters did not use the HHS form 439, nor did it use any other type of hand receipt. To enforce this policy, the Director of IHS issued a memorandum in November 2008 stating that a hand receipt should be signed by employees for all property issued in order to acknowledge receipt and assign responsibility. Based on our limited testing of hand receipts, we confirmed that IHS has begun to implement hand receipts for items such as PDAs and laptops at headquarters and at a majority of the field locations where we performed site visits, but it has not yet started issuing hand receipts for all issued items, such as desktops. Also, we found that some of the items we tested did not have hand receipts and that one field location has not yet started issuing hand receipts for any type of property.

Maintaining information on users and location. HHS requires IHS to document information on the user and the location of equipment, including building and room number, in order to easily track and locate property.
Although the IHS Director included in his November 2008 memorandum a requirement to designate user information for each asset in PMIS, not all of the IHS field locations that we tested maintained specific user and location information in PMIS. Also, our tests of user and location data in PMIS at IHS headquarters and at the six field locations show that PMIS user and location information is not accurate. More specifically, IHS headquarters had user and location error rates of 21 percent and 28 percent, respectively; these rates were much higher at the tested field locations, at about 87 and 89 percent, respectively. As a result of inaccurate user and location information, field staff took several days to locate items that were included in our sample inventory, and IHS headquarters had delays in finding remaining inventory items during GAO's full physical inventory. Inaccurate user and location information also contributes to the lengthy duration of IHS physical inventories, which take several months to reconcile and locate items.

Enforcing the use of PMIS to create reliable inventory records. In a November 2008 memorandum, the IHS Director mandated the use of PMIS and the removal of all legacy systems. Despite this memorandum, our inquiries at field locations found that legacy systems are still being used. Training has not been completed at the property custodial officer level, and not all service units have full access to edit and add items in PMIS. IHS's database is also incomplete—at IHS headquarters, property officials identified over 500 items during their fiscal year 2008 inventory that need to be added to the PMIS database. This was also the case at one of the field locations where we performed our sample testing; the Aberdeen, South Dakota, location has not entered any inventory assets into PMIS since 2007 (about 1,000 items), and officials stated that they have not been updating any system of record (neither their legacy system nor PMIS) since August 2008.
Another field location added that it migrated only about 60 percent of its inventory from its legacy system to PMIS. Our testing further verified the incompleteness of IHS's inventory records, identifying nearly half of the items selected at the six field locations as not recorded in PMIS. In addition to ensuring that inventory assets are included in PMIS, a reliable database also requires that items that have been disposed of be removed from inventory records. Our tests showed that IHS headquarters had a 63 percent failure rate in removing disposed items from property records, and the six field locations where we performed testing had an estimated 100 percent failure rate. We found examples of property items that had been disposed of as far back as 2003 still on the inventory records. IHS property officials said that some of the difficulty in removing the items arises in coordinating with the Program Support Center (PSC), which maintains the PMIS system and reviews and approves items to be removed from the records. Improved communication and procedures are needed to expedite removing disposed items from the inventory records. Because it has not entered all property information into PMIS or removed all assets that have been disposed of, IHS does not have the reliable inventory records that management needs to make sound purchase decisions.

Physically securing assets. The IHS Director's memorandum issued in November 2008 explicitly stated the responsibility of supervisors and users of equipment to safeguard property from loss and misuse. However, during our inventory tests of IHS headquarters and selected field locations, we still identified examples showing that the policy is not enforced. For example, we identified new IT equipment stored in unlocked vacant offices—see figure 1. Physical security weaknesses increase the risk of loss and theft.
For example, we identified that a laptop, digital camera, and digital voice recorder, with a total acquisition value of $3,510, were stolen in April 2008 from an office at IHS headquarters. Failure to secure assets also leaves IHS vulnerable to data breaches. For example, in August 2008, a USB stick that contained personally identifiable information on six patients was stolen from IHS's Phoenix health office. This theft has already been referred to HHS through its breach response process. We also identified a security vulnerability in which the lock for the computer server room in one of the region offices was broken. Rather than repair the door, IHS attempted to restrict access by posting a memorandum on the door—an ineffective means of securing expensive server equipment that could potentially contain sensitive information. See figure 2.

A 1997 memorandum issued by the former IHS Director shows that problems related to lost and stolen property have existed at IHS for over 12 years. Although the memorandum indicates that individuals will be held financially liable for missing items, we found no evidence that IHS has ever taken such steps. As a result, property management problems have continued, and IHS property managers are now faced with the large challenge of gaining control over property within a decentralized and wide-ranging service structure. Although IHS has taken some steps to improve property management since our June 2008 report, our work shows that these steps are incomplete and that serious attention and effort are required to stop the alarming rate of property loss. Ultimately these problems hinder IHS's mission to deliver health care to American Indians and Alaska Natives. We recommend that the Director of IHS strengthen IHS's overall control environment and "tone at the top" by fully implementing our prior recommendations and enforcing and updating its property management policies and procedures.
As part of this effort, the Director of IHS should direct IHS property officials to take the following six additional actions:

- Develop and enforce procedures and deadlines to reconcile and update inventory records in a timely manner.
- Establish specific deadlines, and enforce them, for finalizing a Report of Survey once an inventory has been completed so that research on missing items is completed expeditiously and does not continue indefinitely.
- Enforce policy to dispose of unused inventory in a timely manner.
- Establish an approach to stop loss of property, to include addressing region-specific inventory shortages.
- Work with PSC to develop procedures to remove disposed items from inventory records in a timely manner.
- Work with PSC to develop procedures to enter overages in PMIS in a timely manner.

We provided HHS with a draft of this report for review and comment. The Acting Assistant Secretary for Legislation of HHS provided written comments that are reprinted in appendix II. HHS agreed with all six of our recommendations to strengthen property management at IHS. As part of its response, HHS outlined actions it plans to take or has taken to address current and prior recommendations. The following represents a summary and overall evaluation of the HHS response. We also summarize and evaluate the actions IHS plans to take to address our recommendations. We provide comments on specific sections of the HHS response letter in appendix II. In its response, HHS stated that IHS is committed to proper and accountable property management. According to the response, IHS has spent thousands of hours to respond to our requests and to implement our recommendations. Further, IHS is confident that most, if not all, inventory currently unaccounted for will be identified as a result of the implementation of the PMIS system.
HHS also highlighted that training on an agencywide scale has begun on PMIS and that employees are being educated both on the use of this system and on agency property policies and guidelines for accountability. A number of actions IHS has taken or plans to take to address our prior recommendations were summarized, including plans to address our current recommendations. We are pleased that IHS has devoted considerable time and resources to fixing its property management system and responding to our audit requests. However, we are concerned that numerous significant issues raised by our report were not addressed in the response. Specifically, we are concerned that HHS did not acknowledge the rate of property loss at IHS and the continuing lack of employee accountability for millions of dollars of lost and stolen property. The response indicated that the implementation of PMIS will allow IHS to locate the property that it could not find during the 2008 annual inventory, but we note that completing annual physical inventories is key to identifying missing property. Therefore, IHS should focus on addressing our recommendation to reconcile and update inventory records in a timely manner in order to locate missing property. Further, we note that the accuracy of a system is only as good as the data put into it, and our work has found ongoing, significant errors in the completeness and accuracy of the data input into the PMIS system. IHS has been attempting to implement the PMIS system for nearly 2 years, and our work shows that it continues to experience significant problems. While IHS struggles to implement PMIS, property losses continue.

Regarding our recommendations, HHS agreed with all six of our new recommendations and cited actions that IHS will take to address them. However, the response to some of our recommendations provided little specificity on actions and timing.
Further, for two of our recommendations, the HHS response listed actions with no clear link to our recommendations.

GAO Recommendation #1. We recommended that IHS develop and enforce procedures and deadlines to reconcile and update inventory records in a timely manner. This recommendation resulted from our finding that IHS was unable to complete its 2008 inventory due to reconciliation issues and that millions of dollars in missing items could not be found. To be effective, an annual inventory needs to be resolved quickly. However, the HHS response addressed inventory overages and indicated that policies regarding receiving and inspection, inventory management, reports of survey, and property disposal were "in process." The link between this response and our recommendation is not clear, and the issue of how IHS will hold staff accountable for completing the inventory in a timely manner is unresolved. This is a particularly important issue given that IHS will need to prepare for its 2009 inventory soon.

GAO Recommendation #2. We recommended that IHS establish specific deadlines, and enforce them, for finalizing a Report of Survey once an inventory has been completed. The response to our recommendation indicates that IHS is enhancing the Report of Survey process to include timelines and guidance on Board of Survey requirements but provides no specific details on when this process will be complete. IHS also does not address how it plans to enforce the new guidelines once they are in place.

GAO Recommendation #3. We recommended that IHS enforce the policy to dispose of unused inventory in a timely manner. In response, HHS indicates that IHS has a number of agreements with other agencies to assist it in disposing of property. It also lists several actions it plans to take to address our recommendation, including establishing specific time frames in a new policy to address timely disposal.
HHS indicates that IHS will also "emphasize" a policy to conduct walk-through surveys of IHS facilities and remind staff that proper and adequate justifications must be provided for new acquisitions. Although these actions will help address our recommendation, many of the problems we identified with missing property related to a lack of enforcement of existing disposal policies. The IHS response does not include specific details on enforcement.

GAO Recommendation #4. We recommended that IHS establish an approach to stop the loss of property, to include addressing region-specific inventory shortages. This recommendation directly resulted from our finding that property loss continued at an alarming rate and that some regions had substantial numbers of missing property items. However, the HHS response did not address property loss and instead addressed the IHS policy for issuing hand receipts. The link between this response and our recommendation is not clear. The issuance of hand receipts is only part of the solution to stopping property loss and addressing region-specific inventory shortages. A more appropriate response would have involved the development of a strategic plan to stop property loss with a focus on specific regions, outlining the specific controls and enforcement procedures that should be put in place.

GAO Recommendation #5. We recommended that IHS work with PSC to develop procedures to remove disposed items from inventory records in a timely manner. In response to our recommendation, IHS stated that PSC distributed a procedures guide to all PMIS users in March 2009 and that PSC discussed the guide at an April meeting. According to HHS, the guide outlines the specific requirements and forms needed to process all final events in the property management system. These actions are a step forward, but they do not address how disposal will be completed in a timely manner.
Further, HHS stated that a revised property disposal policy will also assist IHS property managers; however, IHS provides no detail on what the new disposal policy entails, how it will improve inventory records, or when this policy will be updated.

GAO Recommendation #6. We recommended that IHS work with PSC to develop procedures to enter overages in PMIS in a timely manner. In its response, HHS stated that the new Purchase Order interface between the Unified Financial Management System and PMIS will reduce the number of inventory overages that are currently being recorded. In addition, the response stated that IHS can utilize the PSC on a fee basis to add its overages to PMIS. We agree that this new process is likely to help decrease the number of future overages needing to be recorded. However, IHS must still work with PSC to ensure that all current overages are added to PMIS in a timely manner.

If you or your staffs have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To determine whether property loss, property theft, and wasteful spending continue at the Indian Health Service (IHS) and to what extent IHS made progress in implementing our prior recommendations, we analyzed IHS documents that identified lost or stolen property from fiscal year 2008 through January 2009; reviewed IHS and Department of Health & Human Services (HHS) responses to our recommendations and updated policies and procedures; conducted a full physical inventory of property at IHS headquarters; and statistically tested information technology (IT) equipment inventory at six selected IHS field locations.
To identify specific cases of lack of accountability, lost or stolen property, and wasteful spending, we analyzed IHS documents and made observations during our physical inventory and statistical tests. We evaluated IHS's progress in implementing our previously reported recommendations by reviewing agency documentation and interviewing property management officials on actions taken in response to recommendations in our June 2008 report. To identify management actions taken in response to previously identified control weaknesses, we obtained and reviewed copies of new and revised IHS and HHS policies and procedures. We reviewed training certificates and property custodial designations, and we randomly selected and tested for hand receipts on a limited number of assets at both IHS headquarters and some of the selected field locations. To determine whether IHS physical inventory testing identified continuing weaknesses in property management, we obtained and reviewed information on IHS physical inventory results from IHS headquarters, National Programs, and all 12 IHS regions. We also performed a full physical inventory at IHS headquarters, where we had identified problems disclosed in our June 2008 report. Specifically, we tested all 1,518 headquarters property items—largely IT equipment—that IHS had recorded in its property records as of December 5, 2008. We physically observed each item and its related IHS-issued bar code and verified that the serial number related to the bar code was consistent with IHS property records. In addition, we selected a nonrepresentative sample of new purchases made in fiscal year 2008 for testing at IHS headquarters from documents provided by an IHS vendor and IHS officials. We tested each sample item by either (1) physically observing the asset or (2) obtaining a picture of the asset with a visible bar code and serial number.
Although IHS property at the field locations includes inventory items such as medical equipment and heavy machinery, we performed a statistical test of only the IT equipment inventory at six IHS field locations. We limited our scope to IT equipment items that are highly pilferable or can be easily converted to personal use, such as laptops, desktop computers, digital cameras, and personal digital assistants. We selected the six field locations based on the book value of inventory and geographic proximity to other testing locations. We retested five sites that we sampled last year and added the Aberdeen, South Dakota, location because of the high dollar value of its assets. Our findings at these six locations cannot be generalized to IHS's other locations. To estimate the extent of lost or stolen property at these six locations, we selected a simple random sample of 250 items from a population of 6,085 IT items valued at over $19 million. Because we followed a probability procedure based on random selections, with each item having an equal chance of being selected, our sample is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Based on this sample, we estimate the percentage of items missing or with other errors and the number and dollar amount of lost, stolen, or unaccounted for property for these six IHS locations. The following table summarizes the estimates used in this report along with their corresponding 95 percent confidence intervals.
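The estimation approach described above—projecting a sample proportion to a population total with a 95 percent confidence interval—can be sketched as follows. This is an illustrative sketch only: the sample count of missing items (33) is an assumed figure chosen to be roughly consistent with the report's figures, not GAO's actual data, and GAO's statisticians may have used a different estimator.

```python
import math

def srs_total_estimate(N, n, x, z=1.96):
    """Project a population total from a simple random sample (SRS).

    N: population size; n: sample size; x: number of sampled items with
    the attribute of interest (e.g., lost, stolen, or unaccounted for).
    Returns (point_estimate, lower, upper) for an approximate 95 percent
    confidence interval, using a normal approximation with the finite
    population correction.
    """
    p = x / n                                   # sample proportion
    estimate = N * p                            # projected count in the population
    fpc = (N - n) / (N - 1)                     # finite population correction
    se = N * math.sqrt(fpc * p * (1 - p) / n)   # standard error of the total
    return estimate, estimate - z * se, estimate + z * se

# Hypothetical inputs: 33 of 250 sampled items missing in a population
# of 6,085 IT items (the 33 is an assumed figure for illustration).
est, low, high = srs_total_estimate(6085, 250, 33)
# est is about 803 items, i.e., roughly 13 percent of the population.
```

A dollar-amount estimate works the same way, substituting the sampled items' total acquisition value for the count before scaling to the population.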
We considered equipment to be lost, stolen, or unaccounted for if (1) we could not physically observe the item during the inventory; (2) IHS could not provide us with a picture of the item, with a visible bar code and serial number, within 1 week of our initial request; or (3) IHS could not provide us with adequate documentation to support the disposal of the equipment. To evaluate IHS’s progress in implementing GAO’s recommendation that IHS maintain information on users of accountable property including their building and room numbers, we tested each asset for user and location accuracy for IHS headquarters and the random sample testing at the six field locations. Once an item was determined to exist in current inventory, we assessed whether the asset’s principal user and physical location matched what was recorded in the inventory property database. We also tested the inventory status accuracy in IHS’s property database. If adequate disposal documentation was provided for an asset, the asset was identified as an Inventory Status Error rather than missing. We performed appropriate data reliability procedures for our physical inventory testing at IHS headquarters and sample testing at the six field locations including (1) testing the existence of items in the database by observing physical existence of all items at the IHS headquarters and IT equipment selected in our sample; (2) testing the accuracy of the database by comparing user, location and inventory status; and (3) testing the completeness of the database by performing a 100 percent floor-to-book inventory at IHS headquarters and judgmentally selecting up to two items in the same or adjacent rooms of the randomly selected items tested for existence to determine if these items were maintained in IHS inventory records. 
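The decision rule above—an item counts as lost, stolen, or unaccounted for unless it is observed, supported by a timely photo, or backed by disposal documentation, with existing items also checked for user and location accuracy—can be expressed as a small classification routine. This is a hypothetical sketch of the logic, not GAO's actual test instrument; the parameter names (`observed`, `photo_within_week`, `disposal_documented`, and the optional match flags) are assumptions.

```python
def classify_item(observed, photo_within_week, disposal_documented,
                  user_matches=None, location_matches=None):
    """Classify one tested inventory item per the decision rule described:
    an item is "missing" (lost, stolen, or unaccounted for) unless it was
    physically observed, supported by a timely photo with a visible bar
    code and serial number, or supported by adequate disposal records.
    Documented disposals still carried on the books are status errors;
    existing items with mismatched records are user/location errors.
    """
    if disposal_documented:
        return "inventory status error"  # disposed of, but still in records
    if observed or photo_within_week:
        errors = []
        if user_matches is False:        # recorded user does not match
            errors.append("user error")
        if location_matches is False:    # recorded location does not match
            errors.append("location error")
        return ", ".join(errors) if errors else "accounted for"
    return "missing"
```

Under this rule, an item with adequate disposal documentation is classified as an inventory status error rather than as missing, matching the treatment described above.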
Although our testing of the existence, accuracy, and completeness of IHS property records determined that IHS inventory records are neither accurate nor complete, we concluded that the data were sufficient to perform these tests and to project our results to the population of IT equipment. In addition, we interviewed IHS agency officials, property management staff, and other IHS employees. We also interviewed officials at the Program Support Center (PSC) and individuals from the HHS Office of Inspector General. Although we did not perform a systematic review of IHS internal controls, we identified key causes of lost and stolen property and wasteful spending at IHS by examining IHS and HHS policies and procedures, interviewing IHS officials, and observing property during our inventory testing. We conducted this forensic audit from October 2008 through March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Our comments on the Department of Health & Human Services (HHS) letter dated May 1, 2009, follow.

1. Time spent responding to GAO requests and recommendations. In its response, HHS emphasizes a commitment to proper and accountable property management and stresses that IHS has spent thousands of hours responding to GAO requests and recommendations. We are pleased to see that IHS has begun devoting resources to addressing the chronic issue of lost and stolen property. As we reported in June 2008, IHS property management problems date back to at least 1997.
Given the chronic nature of the problem, IHS should be prepared to spend additional hours in the future and should dedicate resources to enforcement and compliance when, and if, the significant challenges it faces have been resolved.

2. PMIS. In its response, IHS states that it is confident that implementation of its new property management system will eliminate "most, if not all, inventory currently unaccounted for." This widespread property management problem will not simply be resolved by implementing a new system. IHS must actively manage its property and enforce existing HHS property management policies, including annual inventories, the issuance of hand receipts, physical security measures, and, critically, a commitment to holding IHS employees accountable for lost or stolen property.

3. Referrals from Prior Report. In our first report, we found evidence that an IHS property employee had fabricated documents and that a "yard sale" of IHS equipment had occurred in Nevada. We referred these incidents to the HHS Office of Inspector General (OIG) for further investigation. In its response to this report, HHS indicated that the OIG had concluded an investigation and that referrals of charges against IHS employees for fabrication of documents and a yard sale could not be substantiated. Concerning the allegations of fabricated documents, the OIG presented the case to the United States Attorney's Office (USAO) for the District of Maryland—Southern District. Based on the lack of evidence, the USAO declined criminal prosecution. We reported on this incident because an IHS property official admitted to GAO that he had fabricated documents in order to satisfy our request for the disposition of property. Concerning the yard sale, the OIG reported that no criminal activity was found to have occurred. We reported on this "yard sale" based on the confirmation of eight IHS property officials, including the Phoenix Area executive officer.
Although criminal charges could not be substantiated in these cases, we believe that administrative action could still be warranted. We believe these cases are important because they represent opportunities for IHS to improve accountability for property management.

4. Physical Inventory. IHS stated that it completed an “agencywide 100 percent physical inventory.” We are pleased that IHS has conducted inventories at all locations as of the end of fiscal year 2008. However, conducting and completing inventories are separate matters. Specifically, reconciliation of missing items, to include the use of reports of survey to hold employees accountable for missing property, is the final step in completing an annual inventory. As of April 2009, IHS had not completed the reconciliation process.

5. Hand Receipts. We found that IHS had not fully implemented the use of hand receipts agencywide. In its response, IHS attributes this to the fact that it has issued a new policy and given Area Directors until September 2009 to have the policy fully implemented. Further, IHS mentions that “performance will be monitored in FY 2009.” The proper use and enforcement of hand receipts is a critical issue for IHS, and it remains to be seen whether IHS will effectively manage its hand receipt program.

6. Determination of Lost, Stolen or Unaccounted For Property. We provided IHS with three options for proving that property was not lost, stolen, or otherwise unaccounted for during our field tests; these options were (1) direct physical observation; (2) for items not readily available for inspection, photographs with a visible bar code and serial number, to be provided within 1 week; and (3) for items represented as being disposed of, supporting documentation (e.g., disposal records).
In response to our draft, HHS stated that these options were not sufficient because, “in many cases these items are temporarily unavailable to be inspected… because they are in use by employees who are out in the field.” We understand that IHS is a decentralized organization with numerous field locations. We believe that we provided a reasonable amount of time for a federal agency such as IHS to locate the items we selected, given its access to digital photography, mobile phones, and the Internet. The fact that IHS was unable to readily identify and provide support for the location of numerous items during our audit is consistent with the results of its 2008 annual inventory.

7. Physical Security of Property. We have identified physical security of property as an ongoing issue for IHS. We disagree with the statement that “IHS continues to safeguard all property.” We understand that property security was addressed in a memorandum from the IHS Director in November 2008, but without an enforcement mechanism to ensure that a policy or procedure is implemented and operating effectively, IHS has no assurance that it is safeguarding property effectively. Further, we did not systematically evaluate perimeter security, but we found examples where a lack of perimeter security facilitated a loss of property. For example, one Report of Survey indicated a trailer with an acquisition value of $7,300 was stolen from a Nashville Region Office parking lot when the security gates were broken and remained open. In another Report of Survey, a laptop was stolen from an employee’s workstation in Portland. The workstation was accessible to the public and was not secured. Further, we identified additional examples of unsecured equipment, including an unsecured server room at one IHS area office.

8. Status of Prior Year Recommendations. In its response, HHS includes a table showing the status of progress made in implementing our prior year recommendations.
However, the findings in our report contradict some of the statements made in this table:

Regarding our recommendation to designate property custodial officers in writing, IHS states that “property custodial officers are designated by each Area Property Management Officer.” Designating property custodial officers in writing is important because it establishes clear responsibility and accountability for property. However, as discussed previously, we found that IHS designated property custodial officers in writing for some of the regions, but there were still gaps in written designations at IHS headquarters and at 4 of the 12 regions. This creates uncertainty over property management responsibilities and fosters a lack of accountability at IHS headquarters and the 4 regions.

Regarding our recommendation to enforce barcoding of accountable property, IHS states that all accountable and sensitive property items were reviewed and barcode tags affixed as part of the 2008 inventory. However, our inventory testing identified over 50 accountable items with no barcode at IHS headquarters. These tests began 2 months after the IHS 2008 annual inventory, indicating that IHS still faces challenges ensuring that all accountable property is affixed with barcodes.

Regarding our prior recommendation to maintain information on the users of all accountable property, IHS states that it reviewed this information during the 2008 annual inventory and that “this is an ongoing process.” However, as previously discussed, we tested inventory at IHS headquarters and found user and location errors of 21 percent and 28 percent, respectively. These errors were much higher at the tested field locations, where we found errors of 87 and 89 percent, respectively. These tests were performed 2 months after the IHS annual inventory, indicating that IHS still faces significant challenges in keeping PMIS accurate.
Regarding the comment that “this is an ongoing process,” we agree; for as long as IHS continues to purchase property, enter it into PMIS, and assign it to staff, property managers must remain vigilant to ensure that records are accurate.

In addition to the contact named above, Cindy Brown Barnes, Assistant Director; John Ahern, Donald Brown, Arturo Cornejo, Jennifer Costello, Paul Desaulniers, Dennis Fauber, Heather Hill, Christopher Howard, Elizabeth Isom, Leslie Kirsch, Barbara Lewis, Andrew McIntosh, Sandra Moore, James Murphy, Andy O’Connell, George Ogilvie, Lerone Reid, Phil Reiff, Verginie Tarpinian, and Emily Wold made key contributions to this report.

In 2008, GAO issued a report and testimony revealing gross mismanagement of property at the Indian Health Service (IHS). GAO found that 5,000 items with an acquisition value of $15.8 million were reported lost or stolen for fiscal years 2004 through 2007. GAO attributed the property mismanagement and waste to weak internal controls. GAO made 10 recommendations to IHS. IHS ultimately agreed to implement all 10 recommendations. Given the extent and seriousness of the property management problems at IHS, GAO was asked to determine (1) whether property loss, property theft, and wasteful spending continue at IHS; and (2) to what extent IHS made progress in implementing GAO's prior recommendations. GAO analyzed IHS property records from fiscal year 2008 through January 2009, conducted a full physical inventory at IHS headquarters, and performed a probability sample of information technology equipment inventory at six IHS field locations. GAO also examined IHS policies, analyzed documents, and conducted interviews with IHS officials. IHS continues to lose property at an alarming rate, reporting lost or stolen property with an acquisition value of about $3.5 million in little over a year, including new medical equipment.
IHS management's failure to implement most of our June 2008 recommendations and hold staff accountable for losses contributes significantly to ongoing property problems. These property losses at IHS are in addition to what GAO identified in its June 2008 report. GAO completed a full audit of IHS headquarters and found that 126 items worth $216,000 (or 8 percent of the items tested) had been lost, stolen, or were otherwise unaccounted for. GAO also estimates that about 800 equipment items at six field locations with an acquisition value of about $1.7 million were lost, stolen, or unaccounted for. Furthermore, although IHS performed an annual inventory as GAO recommended, as of March 2009, it had not finished reconciling the inventory and cannot locate many items, including medical equipment. These items include a 2002 ultrasound unit valued at $170,000; a 2003 X-ray mammography machine valued at $100,795; dental chairs; cardiac and vital sign monitors; and a pharmacy tablet counter machine. Aside from issuing a memorandum from the IHS Director that restated and refined existing policies, IHS has taken little action to ensure that employees are aware of and complying with property policies. One way to enforce policies involves holding individuals accountable; however, GAO found that the Senior Service Executive in charge of the IHS property group and other areas was given a $13,000 bonus after GAO's report exposed mismanagement of property under the executive's purview. Furthermore, IHS could only provide one example of an individual held financially liable for lost or stolen property over a 1-year period, but at the time of our audit, the individual still had not reimbursed the government for the loss. GAO also identified the following examples where IHS investigated the loss of property but did not hold anyone accountable.
The federal government sets broad federal requirements for Medicaid—such as requiring that state Medicaid programs cover certain populations and benefits—and matches state Medicaid expenditures with federal funds for most services. States administer their respective Medicaid programs on a day-to-day basis, and have the flexibility to, among other things, establish provider payment rates and cover many types of optional benefits and populations. Section 1115 demonstrations give states a further way to innovate outside of many of Medicaid’s otherwise applicable requirements, and to receive federal matching Medicaid funds for costs that would not otherwise be matchable. For example, states may use these demonstrations to test new approaches for delivering care to generate savings or efficiencies or improve quality and access. Such changes have included expanding benefits to cover populations that would not otherwise be eligible for Medicaid, altering the state’s Medicaid benefit package, or financing payment pools, for example, for state-operated health programs or supplemental provider payments. Demonstrations are typically approved for an initial 5-year period that can be renewed for future demonstration periods. Some states have operated some or all of their Medicaid programs for decades under section 1115 demonstrations. Each demonstration is governed by special terms and conditions (STCs), which reflect the agreement between CMS and the state. The STCs include any provisions governing spending under the demonstration. For example, STCs indicate for what populations and services funds can be spent. In states receiving approval to implement payment pools for state health programs and supplemental provider payments, the STCs could include parameters for payments under those pools. For example, they may require that payment pools be capped at certain levels.
The STCs may also include criteria for providers to receive payments and protocols that states must have to ensure the appropriateness of the payments and allow CMS to review those payments. The STCs also include the limits on the amount of federal funds that can be spent on the demonstration—referred to as spending limits—and indicate how spending limits will be enforced. Finally, the STCs include the reporting requirements the state must meet. Reporting requirements—as contained in the STCs—may include regular telephone calls between the state and CMS, regular performance reports, and quarterly expenditure reports. The STCs outline what the state should include in each of these reports, which can vary by demonstration. CMS policy requires that section 1115 demonstrations be budget neutral to the federal government—that is, the federal government should spend no more under a state’s demonstration than it would have spent without the demonstration. Once approved, each demonstration operates under a negotiated budget neutrality agreement, documented in its STCs, that places a limit on federal Medicaid spending over the life of the demonstration. This limit is referred to as the spending limit. If a state exceeds the demonstration spending limit at the end of the demonstration period, it must return the excess federal funds. Spending limits can be a per person limit that sets a dollar limit for each Medicaid enrollee included in the demonstration in each month, a set dollar amount for the entire demonstration period regardless of the level of enrollment, or a combination of both. Spending limits are calculated by establishing a spending base and applying a rate of growth over the period of the demonstration. The spending base generally reflects a recent year of state expenditures for populations included in the demonstration, and the growth rate to be applied is generally based on the lower of a state-specific historical growth rate or a federal nationwide estimate. 
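The spending-limit arithmetic just described (a recent-year base trended forward at the lower of two growth rates, with any excess over the limit returned at the end of the demonstration) can be sketched in a few lines. The figures, function names, and simple annual compounding below are illustrative assumptions of ours, not CMS's actual methodology:

```python
def project_pmpm_limits(base_pmpm, state_growth, national_growth, years=5):
    """Project per-member-per-month (PMPM) spending limits for a demonstration.

    The base reflects a recent year of state spending for the covered
    populations; the growth rate applied is generally the lower of the
    state-specific historical rate and the federal nationwide estimate.
    """
    growth = min(state_growth, national_growth)
    return [round(base_pmpm * (1 + growth) ** y, 2) for y in range(1, years + 1)]


def assess_compliance(pmpm_limits, member_months, actual_spending):
    """Compare actual spending over the demonstration with a per person limit.

    The allowed total is the PMPM limit times reported member months in each
    year, so both expenditure and enrollment data are needed to compute it.
    """
    allowed = sum(limit * months for limit, months in zip(pmpm_limits, member_months))
    return {
        "allowed": allowed,
        "unspent": max(0, allowed - actual_spending),
        "excess_to_return": max(0, actual_spending - allowed),
    }


# Hypothetical demonstration: $500 base PMPM, 6% state trend, 4.5% national estimate
limits = project_pmpm_limits(500.00, 0.06, 0.045)
result = assess_compliance(limits, [1_200_000] * 5, actual_spending=3_000_000_000)
```

Note that for a per person limit the allowed total cannot be computed without enrollment (member month) data, which is why the discussion that follows emphasizes states reporting both expenditures and member months.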
Different data elements may be required by CMS to assess a state’s compliance with the spending limit. For example, for a per person spending limit, which is generally a defined dollar limit per enrollee per month, CMS needs both expenditure and enrollment data to assess compliance with the spending limit. CMS is responsible for monitoring compliance with the STCs during the demonstration, including compliance with requirements around how Medicaid funds can be spent and spending limits. Monitoring efforts may include reviewing performance reports and quarterly financial reporting required under the STCs and discussing questions and concerns with the state. When a state seeks a renewal of a demonstration, that request offers CMS an opportunity to negotiate revisions to the STCs with the state, which could include changes to spending limits and reporting requirements. (See fig. 1.) States are required to report Medicaid expenditures, including expenditures under demonstrations, to CMS at the end of each quarter. CMS reviews these expenditures on a quarterly basis for reasonableness. If, during the expenditure review, CMS is uncertain as to whether a particular state expenditure is allowable, then CMS may withhold payment pending further review (referred to as a deferral). With regard to reporting on expenditures under demonstrations, the STCs dictate the level of detail that the state is required to include in the quarterly expenditure reporting. For example, they might require the state to report expenditures by population and by payment pool approved under the demonstration. Federal spending under section 1115 Medicaid demonstrations increased significantly from fiscal year 2005 through fiscal year 2015, rising from $29 billion in 2005 to over $100 billion in 2015. 
Federal spending on demonstrations also increased as a share of total federal Medicaid spending during the same period, rising from 14 percent of all federal Medicaid spending in fiscal year 2005 to 33 percent in fiscal year 2015. (See fig. 2.) Several factors likely contributed to these trends. First, the number of states with demonstrations increased during this period, with 31 states reporting demonstration expenditures in fiscal year 2005 and 40 reporting such expenditures in fiscal year 2015. Second, some states expanded their demonstrations over this period, with demonstration spending in 24 states representing a greater proportion of total Medicaid spending in fiscal year 2015 than in fiscal year 2005. For example, CMS officials told us that, during this period, some states shifted expenditures for managed care and home- and community-based services from other Medicaid authorities to section 1115 demonstrations. In addition, during 2010 through 2015, a number of states expanded coverage through demonstrations to low-income adults, which, as CMS officials told us, likely contributed to the increase in demonstration spending. Demonstration spending as a proportion of total federal Medicaid spending varied across states and represented most—75 percent or more—of Medicaid spending in 10 of the 40 states that reported expenditures in fiscal year 2015. (See fig. 3.) Further, in 5 of these 10 states, demonstration spending represented more than 90 percent of the state’s total federal Medicaid spending. In contrast, in fiscal year 2005, spending under demonstrations did not exceed 75 percent of total Medicaid spending in any state. In that year, demonstration spending represented between 25 percent and 75 percent of total Medicaid spending in 10 states and less than 25 percent in 21 states. (See app. I.) The extent to which demonstration spending changed over time varied across states, as illustrated by the most recent 5 years of spending data in our selected states.
In two of our four selected states—California and Indiana—spending under demonstrations increased between fiscal years 2011 and 2015, consistent with the national trend. (See table 1.) California’s demonstration spending increased most significantly—more than tripling—during this time frame, in which the state expanded its demonstration to, among other things, provide coverage to low-income adults. Indiana reported a 22 percent increase in demonstration spending between fiscal years 2011 and 2015. In contrast, Tennessee reported a 3 percent decrease during that period. With regard to the change in the proportion of total Medicaid spending that demonstrations represented, the proportion did not change between 2011 and 2015 for Indiana and Tennessee and doubled in California, from a quarter of its total Medicaid expenditures in 2011 to half of its total Medicaid expenditures in 2015. We could not assess the change in spending for the fourth state—New York—because the state’s expenditure reporting for fiscal year 2015 was incomplete.

We found that CMS took a number of steps to monitor demonstration spending in our selected states. For example, CMS held calls with states and performed various steps to assess the appropriateness of expenditures.

Held monitoring calls. CMS and state officials told us that they held monitoring telephone calls to discuss any significant current or expected developments in the demonstrations. CMS officials confirmed they may use the calls to obtain information to supplement their review of states’ performance and expenditure reports. For example, CMS officials said they used the calls with Tennessee to raise questions about the content of the state’s submitted quarterly reports. In addition, California officials told us that CMS used these calls to get updates and supporting documentation on state programs.

Checked for the appropriateness of expenditures.
In reviewing the quarterly expenditure reports, CMS officials told us that they assessed the appropriateness of expenditures. For example, the agency checked that the amounts claimed complied with federal requirements for matching funds. As a result of these checks, CMS issued several deferrals to withhold payment of federal funds to California until the state could account for expenditures claimed. Officials also told us that as part of the checks, they assessed the appropriateness of pool payments, such as those for supplemental payments to providers, where relevant. Assessing the appropriateness of pool payments involves ensuring that pool payments align with the approved purposes of the pool and that the payments were made to approved providers. For example, CMS officials told us that for one of the pools in the New York demonstration, agency staff checked whether the payments made were to eligible providers, the requirements of which were described in the STCs. As a result of this review, CMS deferred providing over $38 million in federal funds to New York for payments made to providers under the pool in the quarter ending March 31, 2016, until the state could provide documentation that the providers were eligible to receive payment. CMS officials also told us that they checked to ensure that the state was not receiving funds from other federal funding sources that are intended to serve the same purposes as funds in their payment pools (i.e., duplicating federal funds), and that the state’s share of funding for the pools is from permissible sources, such as the state’s general revenue. According to CMS officials, as a result of the agency’s checks of spending for New York’s demonstration, CMS identified $172 million in federal funds that were inappropriately used to finance the state share of demonstration costs. CMS recovered these funds in fiscal year 2015. 
However, we also found inconsistencies in CMS’s monitoring process that potentially limited the effectiveness of the agency’s monitoring efforts in the selected states. The inconsistencies included the following:

Reporting requirements were sometimes insufficient to provide information needed to assess compliance with spending limits. CMS did not consistently require states to report the elements needed for agency review staff to compare actual demonstration spending to the spending limit. For example, although CMS needs states to report the number of enrollees per month—referred to as member months—to assess compliance with per person spending limits, the agency only required such reporting for two of the four selected states’ demonstrations. CMS acknowledged that having member month data is important to assess spending limit compliance. For example, CMS did not require California to report enrolled member months for its demonstration from 2010 to 2015, but the agency amended the STCs to include this requirement when the state’s demonstration was renewed beginning in 2016. Including this requirement will prevent CMS from having to use alternative means to gain necessary information for this compliance assessment. For example, CMS officials said that they have used monitoring calls to obtain the missing enrollment information from the state.

Enforcement of expenditure reporting requirements was inconsistent. We found that the selected states did not report demonstration expenditures in all of the categories specified under their demonstration STCs. For example, California’s expenditure reporting did not align with the STC reporting requirements for 2010 through 2015. California officials told us this was largely because CMS had not enforced the reporting requirements prior to 2015.
Therefore, based on our review, CMS would not be able to assess compliance with the spending limit for California using the data included in its expenditure report, if CMS tried to do so.

Monitoring compliance with spending limits was inconsistent. CMS did not consistently assess compliance with the spending limit in all our selected states. CMS officials told us that they assessed compliance with the spending limits on a quarterly basis for the demonstrations in Tennessee and Indiana. However, the agency did not regularly assess compliance for the California and New York demonstrations—which represented tens of billions of dollars in federal spending annually—due to limitations in the state-reported expenditure data. CMS officials told us that they did not assess California’s compliance with the spending limit because the expenditure data submitted by the state were not accurate. Furthermore, the agency’s focus was on resolving a number of broader financial compliance issues in the state (see sidebar), the resolution of which, according to officials, was necessary before the agency could assess compliance with the spending limit. With regard to New York, CMS had not assessed compliance with the spending limit since 2011, because the state’s reporting of expenditures has been significantly incomplete since then. According to CMS officials, significant staff transitions disrupted New York’s ability to report expenditures to CMS as required. The state delayed reporting expenditures, and it did not report them in the categories specified in the STCs. Although CMS did not assess compliance with the spending limit for either of these two states, officials told us that they were not concerned that California or New York exceeded their spending limits because the limits in those states have historically been higher than actual spending.
These inconsistencies may have resulted, in part, from CMS’s lack of written, standard operating procedures for monitoring spending under demonstrations. For example, CMS does not have internal guidance on the elements that must be included in reporting requirements for states. In addition, regarding the state performance reports, CMS does not have a review protocol or a requirement that staff check that reports contain the elements required by the STCs, for example, enrollment data needed to assess a state’s compliance with the spending limit. CMS has written materials to train staff on how spending limits are set and how demonstration spending is monitored. However, these materials are limited to high-level descriptions of the monitoring roles and do not contain specific procedures for staff to use in monitoring. Regarding the review of quarterly expenditure reports, CMS has guidance for agency staff who review them, but the guidance lacks detailed direction on what checks of demonstration expenditure data should occur.

CMS also lacks standard procedures for documenting its monitoring efforts. For example, the agency has no written requirements for its staff to document that required performance reports have been submitted by the states. Furthermore, the agency does not require its staff to document the content of monitoring calls, including any concerns and potential resolutions discussed. In addition, CMS does not require its staff to systematically document checks performed for state compliance with demonstration spending limits or the appropriateness of pool payments. According to CMS officials, while there are no written requirements to do so, there is an expectation that staff maintain documentation of their monitoring efforts. However, officials also told us that any documentation of checks that a demonstration complied with its spending limits is likely included in the personal notes of individual CMS staff.
As such, they may not necessarily be accessible to all staff who have oversight responsibility of the demonstration. One example we observed of CMS documenting its monitoring efforts was when checks for appropriateness of expenditures resulted in deferrals of federal funds, which were documented in letters to the states.

CMS officials told us that they are in the early stages of developing standard operating procedures and a management information system to better standardize the monitoring of demonstrations:

Standard operating procedures. CMS officials told us that they are developing protocols for monitoring state demonstration programs and state compliance with demonstration spending limits. Officials told us that the protocols would outline staff roles and responsibilities. Officials also told us that they are working on standardizing the format and content of required state performance reports, which could help ensure that CMS is receiving the information needed to monitor spending under the demonstration. As of December 2016, CMS officials expected that the first phase of standard procedures, which will focus on assessments of compliance with the spending limit, will be developed and documented in the next year. They explained that developing the procedures is an iterative process and that it could take the agency 2 years to completely develop and document its plans.

Management information system. CMS officials also told us that they are in the initial phases of building a management information system to facilitate and document demonstration oversight. The first part of the system, which was in use as of September 2016, allowed CMS to centralize the collection of state demonstration performance reports. In future phases of system development, officials told us that the system will include alerts for missing reports or incomplete reviews and prompts for CMS’s staff to document completion of monitoring checks.
CMS also plans for the system to include a database of demonstration STCs that CMS staff can search, which could help to ensure that STCs consistently include necessary reporting requirements. It is too early to determine how well CMS’s planned standard operating procedures and management information system will address the inconsistencies in its demonstration monitoring process. CMS officials did not have any written documentation regarding the agency’s plans as of December 2016. As such, it was unclear, for example, whether the procedures and new system would include mechanisms to ensure that STCs consistently require states to report the information needed for CMS to assess compliance with the spending limits. In addition, it was unclear if the procedures or new system would ensure that agency staff regularly check that expenditure reporting complies with reporting requirements. CMS officials said they intend for the procedures and new system to include mechanisms to ensure consistency in those areas. Federal internal control standards require that federal agencies design control activities to achieve objectives and respond to risks, and that agencies implement control activities, including documenting the responsibilities for these activities through policies and procedures. Without standard procedures for monitoring demonstration spending and documenting those efforts, CMS faces the risk of continued inconsistencies in monitoring and the risk that it may not identify cases where states may be inappropriately using federal funds or exceeding spending limits. CMS’s policy for applying demonstration spending limits has allowed our selected states to accrue unused spending authority under the demonstration spending limit (referred to in this report as unspent funds) and use it to expand demonstrations to include new costs. 
According to CMS officials, under long-standing policy, if a state spends less than the spending limit, the agency allows the state to accrue the difference between actual expenditures and the spending limit and carry forward the unspent funds into future demonstration periods. CMS allowed our selected states to use unspent funds to expand the demonstration by, for example, financing care for additional eligibility groups or additional supplemental payments. For example, according to CMS officials, Indiana accrued $600 million in unspent state and federal funds during the first demonstration period and was using a portion of that—approximately $2 million a year—to finance care for a small group of beneficiaries with end-stage renal disease in a subsequent period of the demonstration. CMS allowed New York to use $8 billion in accrued unspent federal funds from previous demonstration periods to expand its demonstration by including a new supplemental payment pool for incentive payments to Medicaid providers, costs that would not have been eligible for federal matching funds outside of the demonstration. If a state were to exceed its spending limit in a demonstration period, the agency allows it to draw upon unspent funds from previous demonstration periods to cover demonstration expenses, which, according to CMS officials, is consistent with the budget neutrality policy, under which spending limits are enforced over the life of the demonstration, including any extensions beyond the initial 5-year term. The flexibility afforded to states in their accrual and use of unspent funds may explain, in part, why CMS has infrequently found that states exceed spending limits. Agency officials told us the agency has only withheld federal funds once as a result of a state exceeding its spending limit. Specifically, in 2007, CMS found that Wisconsin exceeded its demonstration spending limit and required the state to return $10.2 million to the federal government.
Among our selected states, we found that states could accrue significant amounts of unspent funds. For example, CMS officials estimated that New York and California accrued billions of dollars in unspent funds. Based on our analysis, we found that Tennessee accrued approximately $11.6 billion in unspent funds over 3 years. (See fig. 4.) According to CMS officials, growth in health care costs has proven lower than the agency and states assumed when setting the spending limits, resulting in spending that consistently falls below spending limits across demonstrations. In past work, we found that HHS had approved spending limits that were higher than the budget neutrality policy suggested. Among other concerns, we reported that HHS allows methods for establishing the spending limit that may be inappropriate—including application of inappropriately high growth rates—and may result in excessively high spending limits. For example, we found four demonstrations where the spending limits were a total of $32 billion higher than they should have been for the demonstration periods, typically 5 years. In May 2016, CMS communicated to states that the budget neutrality policy had been revised to, among other things, restrict the accrual of unspent funds to better control demonstration costs. Specifically, for demonstrations renewed starting in January 2016, CMS restricts the amount of unspent funds states can accrue over time in two ways. First, when states apply to renew their demonstrations, they can only carry over unspent funds from the past 5 years of the demonstration. Second, for demonstrations renewed through 2021, CMS limits the amount of unspent funds states can accrue each year in the renewal period. Specifically, after a state's initial 5-year demonstration period, the amount of expected unspent funds that a state can accrue is reduced by 10 percent per year, until states can only accrue 25 percent of expected unspent funds under the spending limit.
For example, a state renewing its demonstration after completing its first 5-year demonstration period would be able to keep 90 percent of the unspent funds it would accrue under the spending limit in the sixth year of the demonstration—which is the first year of the renewal period—80 percent in the seventh year, and so on. States that had renewed previously would experience further restrictions until the 13th year of the demonstration, at which point a state would be limited to accruing 25 percent a year. For demonstration renewals starting in 2021, states will still be limited to carrying over 5 years of unspent funds, but the percentage restrictions will be replaced with different requirements that could lower spending limits. Specifically, CMS will require states to submit new cost estimates using recent cost data (i.e., to rebase their cost projections). Those new cost projections, subject to adjustment, would become the basis of spending limits for the renewal period. To the extent that using more recent cost data results in spending limits that more closely reflect actual costs— which have proven lower than assumed by states and CMS when setting the spending limit—this requirement may lower spending limits and accordingly may reduce the unspent funds that states accrue under those limits. As of mid-December 2016, CMS was in the early stages of implementing the restrictions, having approved six demonstration renewals under the revised policy—those for Arizona, California, Massachusetts, New York, Tennessee, and Vermont. The updated STCs for the new demonstration periods in each state limited the states’ access to unspent funds to the last 5 years and reduced the amount of unspent funds the states can accrue over the next 5 years. For example, under the revised policy, California’s expected accrued unspent funds over its next demonstration period will be reduced by approximately $15 billion. (See fig. 5.) 
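The phase-down schedule described above reduces to a simple formula: 100 percent through the initial 5-year period, then 90 percent in year six, dropping 10 percentage points per year until it floors at 25 percent from the 13th year onward. A sketch of that schedule (the function name and the treatment of the initial period are assumptions for illustration; this is not CMS's official formula):

```python
def accrual_percentage(demo_year):
    """Share (in percent) of expected unspent funds a state may accrue
    in a given demonstration year under the revised policy, as described
    in the report. Hypothetical sketch only."""
    if demo_year <= 5:
        return 100  # initial 5-year period: no restriction
    # Year 6 -> 90%, year 7 -> 80%, ..., floored at 25% from year 13 on
    return max(25, 100 - 10 * (demo_year - 5))

print([accrual_percentage(y) for y in range(6, 14)])
# [90, 80, 70, 60, 50, 40, 30, 25]
```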
The effectiveness of the revised policy in controlling costs will depend, in part, on whether CMS consistently implements the revisions. We found two weaknesses that could lead to inconsistent application. Lack of formal guidance. CMS released a slide presentation on the revised policy during a teleconference with all states but has not issued formal guidance. CMS made the slides available on the agency’s website, but they were not included in the database of guidance—typically letters—for state Medicaid directors. CMS officials told us that there was no plan to issue additional guidance to states. Although the slides detail how unspent funds will be reduced, without formal guidance, it is unclear whether CMS will consistently apply these new requirements during demonstration renewals. Inconsistent tracking of unspent funds. We found that CMS was not consistently tracking unspent funds under the spending limits in our selected states, which makes it difficult for CMS to ensure the unspent funds are reduced by the amount specified under the new policy. For example, New York had not provided the financial reporting CMS needed to calculate the state’s actual costs for the different eligibility groups covered by the demonstration or its accrued unspent funds, even though there were specific spending limits for these different groups. As a result, CMS could not track unspent funds in the state. CMS officials told us the agency required New York to produce that information as part of the application for renewing the state’s demonstration. Similarly, the agency did not have actual costs for California’s demonstration, given California’s lack of reporting as specified under the STCs, and required the state to provide that information under its renewed demonstration. CMS officials told us the standard operating procedures, as noted above, that the agency is developing for monitoring demonstrations will reflect the revisions to CMS’s budget neutrality policy. 
It is too soon to determine if these procedures will ensure consistent tracking of unspent funds because, as we noted earlier, there was no documentation of the agency's plans for these procedures as of December 2016. Federal internal control standards require that federal agencies design control activities to achieve objectives. Control activities like formal guidance and standard procedures that clarify the application of agency policies help ensure that those policies—such as the revised budget neutrality policy—are consistently carried out in achieving cost control objectives. Without addressing potential weaknesses, including the lack of formal guidance and the lack of consistent tracking of unspent funds across all demonstrations, CMS may not be able to effectively implement the policy and achieve its related cost-control objectives. Medicaid section 1115 demonstrations are an important tool for states to test new approaches to delivering care that, among other things, may be more cost effective. However, the growing federal expenditures for demonstrations—now at over $100 billion a year—for costs that, in some cases, would not otherwise be eligible for Medicaid funding make monitoring of those dollars critical. While our work found that CMS was monitoring demonstration spending in our selected states, the agency's process also raised concerns. CMS's lack of standard procedures for its monitoring process has contributed to insufficient reporting requirements for states and inconsistent enforcement of those requirements. Insufficient reporting can create a barrier to monitoring efforts, including assessing compliance with spending limits. Inconsistent enforcement might allow compliance issues to go undetected for extended periods of time, which, as demonstrated by the issues in California and New York, can take years to resolve. A key principle for demonstrations has long been the policy that they must be budget neutral to the federal government.
Whether demonstrations adhere to that principle depends both on how CMS approves spending limits and on how it applies them during the demonstration. We have raised concerns in the past about demonstration approvals, including that in some cases spending limits for demonstrations were set too high. Our current work found that as a result of high spending limits, states are accruing significant amounts of unspent funds under the spending limits and using those funds to finance expansions of the demonstration. CMS's move under the revised budget neutrality policy to begin restricting the amount of unspent funds that states can accrue is a positive step toward the agency's goal of better controlling demonstration costs. However, states may continue to accrue significant amounts of unspent funds. Without standard procedures for tracking these funds, CMS will not be able to effectively enforce the limits on those funds. Further, without formal guidance on the revised policy, it is unclear whether CMS will consistently apply the policy. To improve consistency in CMS oversight of federal spending under section 1115 demonstrations, we recommend that the Secretary of Health and Human Services require the Administrator of CMS to take the following two actions:
1. Develop and document standard operating procedures for monitoring demonstration spending that:
a. Require setting reporting requirements for states that provide CMS the data elements needed for CMS to assess compliance with demonstration spending limits;
b. Require consistent enforcement of states' compliance with financial reporting requirements; and
c. Require consistent tracking of the amount of unspent funds under demonstration spending limits.
2. Issue formal guidance on the revised budget neutrality policy, including information on how the policy will be applied.
We provided a draft of this report to HHS for review and comment.
HHS concurred with our first recommendation that the agency should develop and document standard operating procedures for monitoring demonstration spending. In its response to this recommendation, HHS added that the department is developing infrastructure and procedures to better support demonstration monitoring. HHS did not explicitly agree or disagree with the second recommendation that the agency should issue formal guidance on the revised budget neutrality policy and how it will be applied. In its response to this recommendation, HHS noted that the new policy is being incorporated into new budget neutrality workbook templates and monitoring procedures, which will be used by the states and reviewers. The agency stated that it will determine if additional guidance is needed as implementation continues. Given the importance of this policy in controlling demonstration costs, we believe that developing formal guidance is necessary to ensure consistent application. HHS also provided technical comments, which we incorporated as appropriate. HHS’s comments are reprinted in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
[Appendix table: total Medicaid expenditures and expenditures for demonstrations, in millions.] In addition to the contact named above, Susan Barnidge (Assistant Director), Jasleen Modi (Analyst-in-Charge), Shamonda Braithwaite, Elizabeth Miller, and Giao N. Nguyen made key contributions to this report. Also contributing were Giselle Hicks, Laurie Pachter, and Emily Wilson. | As of November 2016, 37 states had demonstrations under section 1115 of the Social Security Act, under which the Secretary of HHS may allow costs that Medicaid would not otherwise cover for state projects that are likely to promote Medicaid objectives. By policy, demonstrations must be budget neutral; that is, the federal government should spend no more for a state's Medicaid program than it would have spent without the demonstration. CMS is responsible for monitoring spending and assessing compliance with demonstration terms and conditions for how funds can be spent and applying spending limits to maintain budget neutrality. GAO was asked to examine federal spending for demonstrations and CMS's oversight of spending. This report examines (1) federal spending over time, (2) CMS's monitoring process, and (3) CMS's application of spending limits. GAO reviewed federal expenditure data for fiscal years 2005-2015, relevant documentation for 4 states, selected based on variation among their demonstrations, and federal internal control standards, and also interviewed CMS and state Medicaid officials. Over the last decade, federal spending under Medicaid section 1115 demonstrations, which allow states flexibility to test new approaches for delivering Medicaid services, has increased significantly.
The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), took a number of steps to monitor demonstration spending in GAO's 4 selected states. However, GAO also found inconsistencies in CMS's monitoring process. For example, CMS did not consistently require selected states to report the information needed to assess compliance with demonstration spending limits. The inconsistencies may have resulted from a lack of written standard procedures. CMS officials told GAO that CMS was developing procedures to better standardize monitoring, but did not have detailed plans for doing so. Thus, it is too soon to determine whether these efforts will address the inconsistencies GAO found. Federal standards require that federal agencies design control activities to achieve objectives. Without standard, documented procedures, CMS may not identify cases where states are inappropriately using federal funds or exceeding spending limits. In applying demonstration spending limits, CMS allowed states to accrue unspent funds (more specifically, unused spending authority) when state spending is below the limit and use them to finance expansions of the original demonstration. For example, CMS allowed New York to use $8 billion in unspent federal funds to expand its demonstration to include an incentive payment pool for Medicaid providers. In May 2016, CMS released a slide presentation outlining new restrictions on the accrual of unspent funds. Per federal standards, formal guidance helps ensure that policies are consistently carried out. However, CMS has not issued formal guidance on the policy and does not consistently track unspent funds under the spending limit, raising questions as to whether the revised policy will be effective in better controlling costs. 
GAO recommends that CMS (1) develop and document standard operating procedures for sufficient reporting requirements and to require consistent monitoring and (2) issue formal guidance on its revised policy for restricting accrual of unspent funds. HHS agreed with GAO's first recommendation and neither agreed nor disagreed with GAO's second recommendation. |
VBA is in the process of modernizing many of its older, inefficient systems and has reportedly spent an estimated $294 million on these activities between October 1, 1986, and February 29, 1996. The modernization program can have a major impact on the efficiency and accuracy with which over $20 billion in benefits and other services is paid to our nation's veterans and their dependents. However, in the last 6 years some aspects of VBA's service to veterans have not improved. For example, VBA's reported processing time for an original compensation claim rose from 151 days in fiscal year 1990 to 212 days in fiscal year 1994. In March 1996 the average time was 156 days. Software development is a critical component of this major modernization initiative. VBA, with the assistance of contractors, will be developing software for the Veterans Services Network (VETSNET) initiative, a replacement for the existing Benefit Delivery Network. For efforts like VETSNET to succeed, it is crucial that VBA have in place a disciplined set of software development processes to produce high quality software within budget and on schedule. VBA relies upon its own staff and contractors to develop and maintain software that is crucial to its overall operations. In fiscal year 1995, VBA had 314 full-time equivalents, with payroll expenses of $20.8 million, devoted to developing and maintaining software throughout the organization. It also spent $17.7 million in contract services in these areas. To evaluate VA's software development capability, version 2.0 of the Software Engineering Institute's (SEI) software capability evaluation (SCE) method was used by an SEI-trained team of GAO specialists. The SCE is a method for evaluating agencies' and contractors' software development processes against SEI's five-level software Capability Maturity Model (CMM), as shown in table 1.
These levels and the key process areas (KPAs) described within each level define an organization's ability to develop software, and can be used to improve its software development processes. The findings generated from an SCE identify (1) process strengths that mitigate risks, (2) process weaknesses that increase risks, and (3) improvement activities that indicate potential mitigation of risks. We requested that VA identify for our evaluation those projects using the best software development processes implemented within VBA and AAC. VBA and AAC identified the following sites and projects:
—Compensation & Pension/Financial Management System
—Claims Processing System
We evaluated the software development processes used on these projects, focusing on KPAs necessary to achieve a repeatable capability. Organizations that have a repeatable software development process have been able to significantly improve their productivity and return on investment. In contrast, organizations that have not developed the process discipline necessary to better manage and control their projects at the repeatable level incur greater risk of schedule delay, cost overruns, and poor quality software. These organizations rely solely upon the variable capabilities of individuals, rather than on institutionalized processes considered basic to software development. According to SEI, processes for a repeatable capability (i.e., CMM level 2) are considered the most basic in establishing discipline and control in software development and are crucial steps for any project to mitigate risks associated with cost, schedule, and quality. As shown in table 2, these processes include (1) requirements management, (2) software project planning, (3) software project tracking and oversight, (4) software subcontract management, (5) software quality assurance, and (6) software configuration management.
We conducted our review between August 1995 and February 1996, in accordance with generally accepted government auditing standards. Highlights of our evaluation of VBA’s software practices using the SEI criteria outlined in appendix II follow. Requirements Management - The purpose of requirements management is to establish a common understanding between the customer and the software project of the customer’s requirements that will be addressed by the software project. The first goal within this KPA states that, “system requirements allocated to software are controlled to establish a baseline for software engineering and management use.” VBA does not manage and control system requirements as required by this goal. Moreover, members of software-related groups are not trained in requirements management activities. Also, changes made to software plans, work products, and activities resulting from changes to the software requirements are not assessed for risk. Software Project Planning - The purpose of software project planning is to establish reasonable plans for performing the software engineering and for managing the software project. VBA projects do not have software development plans, estimates for software project costs are not derived using conventional industry methods and tools, and VBA is unable to show the derivation of the estimates for the size (or changes to the size) of the software work products. Also, individuals involved in the software project planning are not trained in estimating and planning procedures applicable to their area of responsibility. Software Project Tracking and Oversight - The purpose of software project tracking and oversight is to provide adequate visibility into actual progress so that management can take effective actions when the software project’s performance deviates significantly from software plans. 
VBA does track software project schedules against major milestones; however, as mentioned previously, these schedules and milestones are not derived using conventional industry methods, nor is there a comprehensive software plan against which to track activities. Moreover, the size of software work products (or the size of changes to software work products) is not tracked, and the software risks associated with cost, resource, schedule, and technical aspects of the project are not tracked. Software Subcontract Management - The purpose of software subcontract management is to select qualified software subcontractors and manage them effectively. VBA does not have a written organizational policy that describes the process for managing software contracts. Additionally, the software work to be contracted is neither defined nor planned according to a documented procedure. Finally, software managers and other individuals who are involved in developing, negotiating, and managing a software contract are not trained to perform these activities. Software Quality Assurance - The purpose of software quality assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built. VBA has a software quality and control (SQ&C) group that has a reporting channel to senior management, independent of the project managers. The SQ&C group also performs testing of the software code. However, the SQ&C group does not participate in other software quality assurance (SQA) functions, such as the preparation, review, and audit of projects' software development plans, standards, procedures, and other work products. Also, projects do not have SQA plans. Software Configuration Management - The purpose of software configuration management is to establish and maintain the integrity of products of the software project throughout the project's software life cycle.
VBA has provided formal training to its staff in defining software processes. However, VBA cannot effectively control the integrity of its software work products because it has no software configuration control board, it does not identify software work products to be placed under configuration management, and it has no configuration management library system to serve as a repository for software work products. VBA has begun improvement activities in this area by (1) establishing a software configuration management group and (2) drafting a software configuration management procedure. Following a presentation of GAO’s SCE results to the Chief Information Officer of VBA, the Director of VBA’s Office of Information Systems forwarded a letter to GAO citing a number of initiatives that are currently underway to address some of the stated deficiencies. Initiatives cited by the VBA include: development and distribution of interim configuration management procedures; identification of a library structure to hold all of the work products from the development process; and initiation of several meetings with SEI to discuss the Software CMM. Similar to VBA, we compared the CMM criteria in appendix II to the software development practices at AAC. Summary results of this evaluation follow. Requirements Management - AAC does not create or control a requirements baseline for software engineering. Also, AAC does not manage or control requirements. AAC does have a process for negotiating periodic contractual arrangements with customers, but this process does not include baselining and controlling software requirements. Software Project Planning - Although AAC documents its schedule estimates for software development projects, there is (1) no defined methodology in use for estimating software costs, size, or schedule, (2) no derivation of estimates for the size (or changes to the size) of software products, and (3) no derivation of the estimates for software project costs. 
Similarly, AAC uses a project planning tool called “MultiTrak”. However, projects do not have software development plans. Software Project Tracking and Oversight - AAC performs schedule tracking at major milestones. However, the goals for this KPA call for (1) the tracking of actual results and performances against software plans, (2) the management of corrective actions when deviations from the software plan occur, and (3) the affected parties to mutually agree to changes in commitments. AAC does not conform to these goals. For example, AAC does not track (1) the software risks associated with cost, resource, schedule, and technical aspects of the project and (2) the size of software work products (or size of changes to software work products). Software Subcontract Management - Although the goals for this KPA emphasize the selection of qualified software subcontractors and managing them effectively, AAC does not (1) have a documented procedure that explains how the work to be contracted should be defined and planned and (2) ensure that software managers and other individuals who are involved in establishing a software contract are trained to perform this activity. Software Quality Assurance - The goals within this KPA emphasize (1) the verification of the adherence of software products and activities to applicable standards, procedures, and requirements and (2) the reporting of noncompliance issues that cannot be resolved within the project to senior management. AAC has an automated data processing system integrity guideline and a systems integration service (SIS) group that has a reporting channel to senior management and is independent of the project managers. However, projects do not have SQA plans; the SIS group does not participate in certain SQA functions, such as the preparation, review, and audit of projects’ software development plans, standards, and procedures; and members of the SIS group are not trained to perform their SQA activities. 
Software Configuration Management - AAC performs software (i.e., code only) change control using a tool called “ENDEVOR,” and its employees are trained in the use of this tool. However, the scope of the goals within this KPA cover all products in the entire software life cycle and not just the software code. AAC has not identified software work products (with the exception of software code) that need to be placed under configuration management, established a configuration management library system that can be used as a repository for software work products, or established a software configuration control board. Unless both VBA and AAC initiate improvement activities within the various KPAs and accelerate those already underway, they are unlikely to produce and maintain high-quality software on time and within budget. Because VBA and AAC do not satisfy any of the KPAs required for a level 2 (i.e., repeatable) capability, there is no assurance that (1) investments made in new software development will achieve their operational improvement objectives or (2) software will be delivered consistent with cost and schedule estimates. To better position VBA and AAC to develop and maintain their software successfully and to protect their software investments, we recommend that the Secretary of Veterans Affairs take the following actions: Delay any major investment in software development beyond that which is needed to sustain critical day-to-day operations until the repeatable level of process maturity is attained. Obtain expert advice to assist VBA and AAC in improving their ability to develop high-quality software, consistent with criteria promulgated by SEI. Develop an action plan, within 6 months from the date of this letter, that describes a strategy to reach the repeatable level of process maturity. Implement the action plan expeditiously. 
Ensure that any future contracts for software development require that the contractor have a software development capability of at least CMM level 2. VBA's comments responded to its SCE results, and VA's comments responded to the SCE results for AAC. In commenting on a draft of this report, the Veterans Benefits Administration (VBA) agreed with four of our recommendations and disagreed with one recommendation. VBA stated that while it agreed that a repeatable (i.e., level 2) level of process maturity is a goal that must be attained, it disagreed that "...all software development beyond that which is day-to-day critical must be curtailed..." VBA further stated that the payment system replacement projects, the migration of legacy systems, and other activities to address the change of century must continue. While we agree that the software conversion or development activities required to address issues such as the change of century or changes to legislation must continue, we would characterize these as sustaining critical day-to-day operations. However, major system development initiatives in support of major projects such as the system modernization effort, which involves several system replacement projects and the migration of legacy systems, and VETSNET, which includes several payment system replacement projects, should be reassessed for risk of potential schedule slippage, cost overrun, and shortfall in anticipated system functions and features. Shortcomings such as these are more likely from organizations with a software development maturity rating below level 2 (i.e., the repeatable level). Therefore, to minimize software development risks, we continue to believe that VBA should delay any major investment in software development unless it is required to sustain day-to-day operations, until a maturity rating of level 2 is reached.
Regarding the remaining four recommendations, we are pleased to see that VBA is already initiating positive actions, including acquiring the assistance of the Software Engineering Institute. VA stated that we did not demonstrate a willingness or flexibility in relating AAC documentation products, activities, and terms to the SEI terms. We reviewed all documentation provided to us by VA including the documents listed in their comments on our draft report. As called for by the SCE methodology, we carefully compared all this documentation to the SEI CMM criteria. As stated throughout our report, we found some strengths but in many cases, VA’s documentation was not commensurate with that called for by the SCE methodology. Our comments on the specific key process areas follow. The VA comments stated that the OFM/IRM Business Agreement, dated September 1994, contains guidelines which mandate the management of software requirements. However, in our review of the documentation listed under requirements management (Enclosure 1: Documents Addressing Key Process Area), we found no evidence that these documents addressed any of the goals of this KPA. For example, (1) the allocated requirements are neither managed, controlled, nor baselined, and (2) no software development plans were developed based on the allocated requirements. VA feels that the AAC Business Agreement and the negotiated quarterly contract satisfies this KPA; however, we found that AAC does not perform a majority of the activities required to meet the goals within this KPA. For example, AAC was not able to submit evidence for estimating software size and cost, nor did AAC demonstrate any methodology used for estimating schedules. VA stated that project size and risk remain consistent throughout the development/implementation cycle. However, AAC did not provide our SCE team with any evidence validating this assertion and, as discussed on page 8, AAC does not track this information. 
VA claims that specific written policies and procedures are followed when managing software contracts; however, AAC staff interviewed were unable to provide us with any specific policies or procedures used for software contracting. The AAC staff acknowledged that they do not track (1) software contractor performance at the coding level (i.e., they track functionality only) or (2) contractor-produced software documentation. Regarding training for software contract management, VA stated that its COTRs receive training in procurement, project management, and evaluating contractor performance. However, there is no indication that these courses are specific to software contracting. In addition, other individuals involved in establishing the software contract for the projects reviewed had not received contract management training related to software. VA states that its ADP System Integrity Guide, dated September 1994, contains detailed procedures directing the SIS group in specific SQA functions. Although this is a good first step, the AAC is still deficient because it does not have project-specific software quality assurance plans that are implemented for individual projects, as required by this KPA within the CMM. Furthermore, we were not provided with any evidence showing that the ADP System Integrity Guide has been officially issued or whether its use will be mandatory or discretionary. The VA comments do not present any additional evidence that would help to satisfy the criteria for this KPA. Specifically, communication between the SIS, AAC staff, and customer does not substitute for the rigor and discipline of a software configuration control board, which VA acknowledged it does not have. Furthermore, the placement of software code under configuration management is not sufficient to satisfy this KPA because other software work products—such as system design specifications, database specifications, and computer program specifications—are also required.
Finally, although the AAC does maintain a library of those software work products that it does produce, the products are not maintained under a formal software configuration management discipline, which would include version control and rigorous requirements traceability. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Veterans Affairs and the House and Senate Committees on Appropriations; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. Copies will also be made available to other interested parties upon request. This work was performed under the direction of William S. Franklin, Director, Information Systems Methodology and Support, who can be reached at (202) 512-6234. Other major contributors are listed in appendix IV. The following is GAO’s comment on the Department of Veterans Affairs’ May 24, 1996, letter. 1. This issue is not addressed in our report.

Requirements Management
Purpose: To establish a common understanding between the customer and the software project of the customer’s requirements that will be addressed by the software project.
Goal 1: System requirements allocated to software are controlled to establish a baseline for software engineering and management use.
Goal 2: Software plans, products, and activities are kept consistent with the system requirements allocated to software.

Software Project Planning
Purpose: To establish reasonable plans for performing the software engineering and for managing the software project.
Goal 1: Software estimates are documented for use in planning and tracking the software project.
Goal 2: Software project activities and commitments are planned and documented.
Goal 3: Affected groups and individuals agree to their commitments related to the software project.

Software Project Tracking and Oversight
Purpose: To provide adequate visibility into actual progress so that management can take effective actions when the software project’s performance deviates significantly from the software plans.
Goal 1: Actual results and performance are tracked against the software plans.
Goal 2: Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans.
Goal 3: Changes to software commitments are agreed to by the affected groups and individuals.

Software Subcontract Management
Purpose: To select qualified software subcontractors and manage them effectively.
Goal 1: The organization selects qualified software subcontractors.
Goal 2: The organization and the software subcontractor agree to their commitments to each other.
Goal 3: The organization and the software subcontractor maintain ongoing communications.
Goal 4: The organization tracks the software subcontractor’s actual results and performance against its commitments.

Software Quality Assurance
Purpose: To provide management with appropriate visibility into the process being used by the software project and of the products being built.
Goal 1: Software quality assurance activities are planned.
Goal 2: Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively.
Goal 3: Affected groups and individuals are informed of software quality assurance activities and results.
Goal 4: Noncompliance issues that cannot be resolved within the software project are addressed by senior management.

Software Configuration Management
Purpose: To establish and maintain the integrity of products of the software project throughout the project’s software life cycle.
Goal 1: Software configuration management activities are planned.
Goal 2: Selected software work products are identified, controlled, and available.
Goal 3: Changes to identified software work products are controlled.
Goal 4: Affected groups and individuals are informed of the status and content of software baselines.

David Chao, SCE Team Leader
Gary R. Austin, SCE Team Member
K. Alan Merrill, SCE Team Member
Madhav S. Panwar, SCE Team Member
Keith A. Rhodes, SCE Team Member
Paul Silverman, SCE Team Member

Pursuant to a congressional request, GAO reviewed software development processes and practices at the Department of Veterans Affairs’ Veterans Benefits Administration (VBA) and Austin Automation Center (AAC).
GAO found that: (1) neither VBA nor AAC satisfies any of the criteria for a repeatable software development capability; (2) VBA and AAC do not adequately define systems requirements, train personnel, plan software development projects, estimate costs or schedules, track software project schedules or changes, manage software subcontractors, or maintain quality assurance and software configuration procedures; (3) VBA initiatives to improve its software development processes include developing and distributing interim configuration management procedures, identifying a library structure for all work products, and meeting with the Software Engineering Institute (SEI) to discuss software development; (4) VBA and AAC cannot reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints; and (5) VBA and AAC can use their strengths in software quality assurance and their improvement activities in software configuration management as a foundation for improving their software development processes.
Since the 1940s, DOE and its predecessors have operated a nationwide complex of facilities used to research, design, and manufacture nuclear weapons and related technologies. The environmental legacy of nuclear weapons production at dozens of these sites across the United States includes contaminated buildings, soil, water resources, and large volumes of radioactive and hazardous wastes that require treatment, storage, and disposal. The two sites that account for the majority of the costs of the cleanup effort—Hanford and SRS—were established in the 1940s and 1950s, respectively, to produce plutonium and other nuclear materials needed to manufacture nuclear weapons. EM manages cleanup projects at these and other sites that involve multiple activities to treat and dispose of a wide variety of radioactive and hazardous wastes. Under federal and state laws, EM must clean up radioactive and hazardous substances in accordance with specified standards and regulatory requirements. EM carries out its cleanup activities under the requirements of federal environmental laws that include, among others, CERCLA and NEPA. CERCLA requires EM to evaluate the nature and extent of contamination at the sites and determine what cleanup remedies, if any, are necessary to protect human health and the environment into the future. Under NEPA, EM must prepare an environmental impact statement that assesses the environmental effects for a proposed agency action, all reasonable alternatives, and the no-action alternative. Under both the CERCLA and NEPA processes, EM analyzes proposed remedial action alternatives according to established criteria, invites and considers public comment, and prepares a Record of Decision that documents the selected agency action. 
If the cleanup method selected under CERCLA or NEPA will result in disposal of waste at an on-site disposal facility, EM is then required, under DOE’s radioactive waste management order—DOE Order 435.1—to ensure that waste management activities at each disposal facility are designed, constructed, and operated in a manner that protects workers, the public, and the environment. EM does this by completing a “performance assessment” of the selected cleanup method. To guide the implementation of selected cleanup methods, EM and its contractors may prepare a “system plan” that provides the basis for scheduling cleanup operations and preparing budget requests. For example, both Hanford and SRS have prepared system plans for treating and disposing of liquid radioactive waste stored in aging and leak-prone underground tanks. EM officials at DOE headquarters and field offices oversee cleanup activities at the sites, but the work itself is carried out primarily by private firms contracting with DOE. EM applies different approaches to managing cleanup activities, depending on the type and extent of contamination and state and/or federal requirements with which it needs to comply. In addition, DOE has agreements with state and federal regulators, known as Federal Facility Agreements, to clean up the Hanford and SRS sites. The agreements lay out legally binding milestones for completing major steps in the waste treatment and cleanup process. EPA officials, as well as officials with environmental agencies in the states where EM sites are located, enforce applicable federal and state environmental laws and oversee and advise EM on its cleanup efforts. One tool EM uses in support of cleanup decision analyses is computer modeling. Although the computer models used across EM sites vary, they have certain common characteristics. 
In general, computer models are based on mathematical formulas that are intended to reflect physical, biogeochemical, mechanical, or thermal processes in simplified ways. For example, a computer model can simulate the movement of contamination through the soil and groundwater or simulate the transfer of high-level radioactive waste from underground storage tanks to facilities where the waste will be treated. Appendix II details the key computer models used in the cleanup decisions we reviewed at Hanford and SRS. EM uses computer models to provide critical information for its decision-making process. First, computer models provide information that EM uses to analyze the effectiveness of alternative actions to clean up radioactive waste. Second, once a cleanup strategy has been selected, computer modeling provides information that EM needs to assess the performance of the selected cleanup strategy in reducing risks to human health and the environment. Third, EM uses computer models to simulate operations in the cleanup process, providing the basis for planning cleanup efforts and for making annual budget requests. EM’s decision making for its cleanup efforts is based on meeting federal and state requirements; input from state, local, and regional stakeholders; and other considerations, including the costs of cleanup actions. Computer models provide critical information that EM needs to assess compliance with regulatory requirements when seeking to identify and select alternatives for cleaning up radioactive and hazardous wastes, as well as contaminated soil and groundwater at its sites. EM’s cleanup decisions are guided by several federal and state environmental laws, including CERCLA and NEPA, which both set forth processes related to cleanup decisions.
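To make the modeling concrete: a heavily simplified, illustrative sketch of the kind of contaminant-transport simulation described above is a one-dimensional advection-dispersion-decay model solved by finite differences. Every parameter value below is an arbitrary assumption chosen for illustration, not data from any EM site model:

```python
# Illustrative 1-D contaminant transport: dC/dt = D*d2C/dx2 - v*dC/dx - k*C,
# solved with an explicit upwind finite-difference scheme. All parameters
# (grid size, velocity, dispersion, decay rate) are invented for this sketch.

def simulate_plume(n_cells=100, dx=1.0, dt=0.1, steps=500,
                   velocity=0.5, dispersion=0.1, decay=0.001):
    """Return the concentration profile after `steps` time steps."""
    conc = [0.0] * n_cells
    conc[10] = 100.0  # contaminant initially concentrated near the source
    for _ in range(steps):
        new = conc[:]
        for i in range(1, n_cells - 1):
            diff = dispersion * (conc[i + 1] - 2 * conc[i] + conc[i - 1]) / dx**2
            adv = -velocity * (conc[i] - conc[i - 1]) / dx  # upwind advection
            new[i] = conc[i] + dt * (diff + adv - decay * conc[i])
        conc = new
    return conc

profile = simulate_plume()
peak_cell = max(range(len(profile)), key=lambda i: profile[i])
print("plume peak has migrated to cell", peak_cell)
```

Actual site models are vastly more elaborate (three dimensions, heterogeneous geology, geochemistry, multi-decade calibrations), but the structure is the same: a physical process is reduced to formulas, stepped forward in time, and the projected future concentrations are compared against regulatory limits.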
In the case of CERCLA, EM determines the nature and extent of the contamination, assesses various cleanup alternatives, and selects the best alternatives according to evaluation criteria that include, among other things, protection of human health and the environment, ease of implementing the alternative, state and community acceptance, and cost. To accomplish these steps, EM uses computer modeling to, among other things, simulate the movement of contaminants through soil and groundwater over many years, assuming no cleanup action is taken. Projected contamination levels, migration pathways, and contamination travel timelines are provided by simulations and are evaluated to determine whether regulatory standards will likely be exceeded in the future. If action is needed, then modeling simulations may be conducted for a number of different cleanup alternatives. For example, EM used modeling to assess contamination and the potential effectiveness of various cleanup strategies at SRS’s C-Area Burning/Rubble Pit. The pit was used during the 1960s to dispose of organic solvents, waste oils, paper, plastics, and rubble, and SRS burned its contents periodically to reduce their volume. Eventually, SRS used the pit for the disposal of inert rubble, finally covering it with two feet of soil in the early 1980s. However, the disposal of these materials and periodic burning resulted in hazardous substance contamination of the surrounding soil and groundwater. Between 1999 and 2004, EM implemented several actions to clean up the majority of the area’s contamination. Following these actions, EM used computer models to simulate the movement of the remaining contamination through the soil and groundwater over the next 1,000 years. Information provided by this modeling helped EM to identify the remaining risks to human health and the environment and to identify actions to clean up the remaining contamination.
Using this information, in conjunction with other criteria such as additional site data, input from federal and state regulators and the public, and the availability of an appropriate cleanup technology, EM selected a final cleanup remedy. This remedy, which is ongoing and combines several different cleanup technologies, was estimated in 2008 to cost, in present-worth dollars, about $1.9 million over a 70-year period. In implementing CERCLA, DOE focuses on discrete facilities or areas within a site that are being remediated, making limited assessments of cumulative impacts. By contrast, under NEPA, EM generally prepares environmental impact statements that assess the environmental impacts—including cumulative impacts—of a proposed cleanup action, all reasonable alternatives, and taking no action. For example, the environmental impact statement for closing underground liquid radioactive waste tanks at Hanford—which, as of November 2010, was still in draft form—includes an analysis of the potential environmental impact of various options for treating and disposing of about 55 million gallons of mixed radioactive and hazardous waste and closing 149 underground radioactive waste tanks. The draft environmental impact statement includes an analysis of 11 tank waste treatment and closure alternatives, including a no-action alternative. These alternatives range in cost from about $3 billion to nearly $252 billion, excluding the costs associated with the final disposal of the treated waste. In the draft environmental impact statement, EM used computer models to simulate the movement of contamination through soil and groundwater over a period of 10,000 years for each of the cleanup alternatives. As with CERCLA modeling, the results of the computer models were used to estimate the remaining risks to human health and the environment following the completion of each cleanup alternative, and these risks were then compared with requirements.
The results of these models will be used along with other information such as input from regulators and the public and the costs of each alternative when EM selects the alternative it will eventually implement. After a particular cleanup alternative is selected, EM also uses computer modeling to demonstrate that the cleanup activity will result in reduced future contamination levels that meet regulatory requirements. If the cleanup method selected under CERCLA or NEPA will result in disposal of waste at an on-site disposal facility, EM is then required, under DOE’s radioactive waste management order—DOE Order 435.1—to ensure that waste management activities at each disposal facility are designed, constructed, and operated in a manner that protects workers, the public, and the environment. To meet the requirements of the order, EM completes a “performance assessment” of the selected cleanup method. Under the order, this performance assessment is to document that the disposal facility is designed, constructed, and operated in a manner that protects workers, the public, and the environment. The performance assessment also is to project the release of contamination into the soil and groundwater from a site after cleanup and must include calculations of potential chemical doses to members of the public in the future. For example, in March 2010, SRS issued a performance assessment of a cleanup and closure strategy for a group of 20 underground liquid radioactive waste tanks, known as the F-Tank Farm. The performance assessment evaluated closing the underground waste tanks and filling them with a cement-like substance called grout—the alternative selected following completion of SRS’s 2002 environmental impact statement. Computer modeling was used extensively to prepare this performance assessment. Specifically, computer modeling was performed using two different types of models. 
The first computer model was used to perform human health and environmental risk calculations and to calculate radiation doses that could be compared to the maximum level allowed by federal and state requirements. The second model was used to analyze sensitivities and uncertainties in the results of the first model. EM also uses computer models for lifecycle planning, scheduling, and budgeting for its cleanup activities. Computer models provide important information that EM and its contractors use to develop system plans that outline the schedules for cleanup activities at EM sites. Outputs from computer models and databases are used to create tables, charts, and schedules that are published in the system plans and inform annual budget requests for cleanup activities. For example, at Hanford, a computer model known as the Hanford Tank Waste Operations Simulator is designed to track the retrieval and treatment of over 55 million gallons of radioactive waste held in underground storage tanks. According to the most recent Hanford tank waste system plan, which was issued in November 2010, the model projects the chemical and radiological characteristics of batches of waste that are to be sent to a $12.2 billion waste treatment plant that is being built at Hanford to treat this waste. The model also provides scheduling information the contractor uses to project near- and long-term costs and schedules. Similarly, SRS uses a computer model known as SpaceMan Plus™ to support the site’s liquid waste system plan, which was issued in January 2010. For example, project work schedules for SRS’s tank waste program are guided by this model. The model also simulates how the tank farms integrate with waste processing facilities and tracks the movement of waste throughout the liquid waste system. Output from the model was used to provide tables and schedules found in the appendixes of SRS’s system plan that details the specific cleanup activities that are to be accomplished. 
These tables and schedules are used as part of the basis for determining the costs of completing those activities. This information, in turn, allows DOE and its contractors to generate annual budget requests. Although EM uses general departmental quality assurance policies and standards that apply to computer models and relies on contractors to implement specific procedures that reflect these policies and standards, these policies and standards do not specifically provide guidance on ensuring the quality of the computer models used in cleanup decisions. Moreover, EM officials have not regularly performed periodic quality assurance assessments, as required by DOE policy, to oversee contractors’ development and use of cleanup models and the models’ associated software. In addition, DOE and others have identified quality assurance problems. For example, the state of Washington has cited flaws in a model EM uses to analyze soil and groundwater contamination and has told EM that it will no longer accept the use of this model for chemical exposure analysis at Hanford. DOE addresses quality through various departmental policies and industry standards; however, these policies and standards do not specifically provide guidance on ensuring the quality of the computer models used in cleanup decisions. Specifically, DOE’s primary quality assurance policy—DOE Order 414.1C—provides general requirements EM and its contractors must meet to ensure all work at the cleanup sites is carried out correctly and effectively, including the development and use of computer models. These requirements include developing a quality assurance program, training staff how to check the quality of their work, and providing for independent assessments of quality. A manual accompanying this order describes acceptable, nonmandatory methods for specifically ensuring quality of “safety software.” Safety software is described in the manual as software used to design, manage, or support nuclear facilities.
However, the manual is less clear on how to assure quality in computer models. Furthermore, it does not clearly address the use of computer software not considered safety software, such as the software used by the computer models that support DOE’s cleanup decisions. DOE’s quality assurance order also requires contractors to select and comply with an appropriate set of industry standards for all work, including computer modeling. One common set of standards was developed by the American Society of Mechanical Engineers and provides the requirements necessary to ensure safety in nuclear facilities, including the development and validation of computer models and software that is used to design and operate such facilities. Initially, the American Society of Mechanical Engineers standards were not mandatory for computer models and software used for cleanup decisions, many of which are considered nonsafety software. These standards were but one of many standards that contractors could choose to use. However, as of November 2008, EM made the American Society of Mechanical Engineers standards mandatory for all cleanup activities, including modeling. EM’s contractors are to implement DOE’s quality assurance requirements using specific policies and procedures they develop. The specifics of implementation vary from contractor to contractor. In the case of computer software quality, a contractor is to include procedures for testing and validating the software, ensuring changes to software are properly documented, and correcting any errors. EM allows its contractors to take a “graded approach” to quality procedures for computer software, which means the contractor may adjust the rigor of the quality procedures to match the importance of the software to overall operations.
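The graded approach described above can be illustrated with a minimal sketch: software is assigned a grade based on the consequence of its failure, and the grade determines which quality activities are mandatory. The grade names, thresholds, and activity lists below are hypothetical examples invented for illustration, not DOE’s actual grading scheme:

```python
# Hypothetical sketch of a "graded approach" to software quality assurance:
# QA rigor scales with the consequence of software failure. The grades,
# thresholds, and activity lists here are invented, not DOE's categories.

REQUIRED_ACTIVITIES = {
    "safety":         ["documented requirements", "independent verification",
                       "formal configuration control", "periodic assessments"],
    "mission":        ["documented requirements", "peer review",
                       "formal configuration control"],
    "administrative": ["documented requirements"],
}

def grade_software(failure_consequence):
    """Map a failure-consequence rating (0-10) to a QA grade."""
    if failure_consequence >= 7:   # e.g., could endanger workers or the public
        return "safety"
    if failure_consequence >= 4:   # e.g., could corrupt a cleanup decision
        return "mission"
    return "administrative"        # e.g., a payroll or tracking spreadsheet

grade = grade_software(8)
print(grade, "->", REQUIRED_ACTIVITIES[grade])
```

The grading errors discussed later in this section correspond to the mapping step here: if the failure-consequence rating is judged too low, the software is held to a weaker set of required activities than its actual risk warrants.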
According to documents we reviewed, computer software that controls systems in a nuclear facility, for example, would require more rigorous quality procedures than an administrative payroll system, as any failure in the software controlling a nuclear facility could result in potentially hazardous consequences to workers, the public, and/or the environment. EM is to oversee its contractors’ implementation of quality standards for computer models by performing periodic quality assurance assessments, according to DOE’s quality assurance order. These quality assurance assessments are intended to ensure that computer models meet DOE and accepted industry quality standards. In our review of eight cleanup decisions at Hanford and SRS, we found EM had conducted only three quality assurance assessments that addressed quality standards for the models used in those decisions. For example, for three of the four decisions we reviewed at SRS, DOE officials at SRS could not provide quality assurance assessments that specifically addressed whether the models used in those decision processes met DOE’s quality assurance requirements. DOE officials at SRS provided three general quality assurance assessments, but these quality assurance assessments did not specifically look at the cleanup models. In contrast, the models for a March 2010 performance assessment selecting a cleanup strategy to close underground liquid waste tanks at SRS did receive a quality assurance assessment by a DOE headquarters group established to review performance assessment decisions. In particular, as part of the review, among other things, the DOE group conducted a quality assurance assessment that evaluated the quality of the computer models used in the performance assessment and the degree to which the models complied with DOE requirements and industry standards. 
A DOE quality assurance official at SRS noted that the site relies primarily on its contractors to perform quality assurance assessments of computer models and their associated software. Similarly, in our review of four cleanup decisions at Hanford, we found that EM had performed assessments that addressed quality standards for the models used in those decisions in only two cases. In fact, one quality assurance assessment was only undertaken after a contractor discovered data quality errors in 2005 in a computer model used to support a prior environmental impact statement at Hanford. According to a DOE quality assurance manager at Hanford, his office conducts quality assurance assessments primarily on those computer models and the associated software for which the failure would result in significant safety consequences to workers, the public, and/or the environment. Concerns have been raised by DOE and others that EM does not have complete assurance of the quality of the models. For example: Citing a number of flaws in a model DOE uses to analyze soil and groundwater contamination at Hanford, the Washington state Department of Ecology told DOE in February 2010 that it would no longer accept the use of this model for chemical exposure analysis at Hanford. For example, Ecology cited previous concerns that the model was not robust enough to capture complexities of the movement of contamination through the subsurface soil. We found that DOE had conducted no specific quality assurance reviews on the model and its associated software. EM headquarters officials conducted two technical reviews in 2009 of planning models used for tank waste operations at Hanford and SRS. The review of the Hanford planning model found that the model has limited ability to sufficiently predict the composition of the contaminated waste as it is prepared for the treatment processes. 
The review team cautioned that this limitation raised a significant risk that, when actual waste treatment operations started at the site, the waste might not meet the acceptance requirements for processing by Hanford’s treatment facility. In addition, the review of SRS’s planning model found that, although the data the model provided on tank waste operations were reasonable, the model did not have the ability to optimize operating scenarios, which hampered the site’s long-term planning abilities. A March 2010 independent review commissioned by a Hanford citizens’ group raised concerns about a model used in the preparation of a draft environmental impact statement of alternatives for closing Hanford’s waste tanks. These concerns, based on reviewing the draft statement, included insufficient documentation of the quality assurance processes followed for the model and inadequate quantification of modeling uncertainties. The review concluded that the environmental impact statement was insufficiently precise to be used to make a cleanup decision. Where DOE has conducted quality assurance assessments, it has found that contractors did not always implement quality requirements consistently. Furthermore, in their own internal reviews, contractors have noted problems with the implementation of quality assurance requirements. Problems noted in DOE’s and contractors’ quality assurance assessments include the following. Inadequate documentation: A 2007 software quality review conducted by DOE at Hanford found implementation problems, including inadequate documentation and improper training for personnel in quality procedures. At SRS, two general software quality assurance reviews performed by DOE in 2004 found that while contractors generally met quality requirements, documentation was sometimes lacking or improperly prepared. A similar 2007 DOE review at SRS found a good software quality program overall, but listed a number of deficiencies, including inadequate software plans and procedures. Not following correct procedures: A 2007 DOE review of a Hanford contractor’s software quality assurance program found, among other things, that not all contractor personnel fully understood software quality requirements. The report stated that, although software quality assurance training had been provided, personnel did not follow procedures in managing, maintaining, and overseeing software quality. For example, the report cited an example of a spreadsheet in which data input cells were not properly locked, in violation of procedures. In addition, the report noted that software documentation was not periodically updated, as required, because staff did not fully understand the procedures. Incorrect quality assurance grading: In some cases, contractors did not always correctly determine the level of rigor needed to ensure the quality of computer models and their associated software. For example, a 2007 internal contractor review at Hanford found that 23 of 138 software codes registered in a central repository were incorrectly designated as nonsafety software, when in fact they should have been considered safety software. As a result, the quality assurance procedures appropriate for a given level of risk may not have always been applied. Although EM has recently begun some efforts to promote consistency in the use of models across its various sites, these efforts are still in early stages and, to date, some have had limited involvement of modeling officials at the sites and federal, state, and local stakeholders who are affected by decisions made using the output of computer models. In addition, these efforts are not part of a comprehensive, coordinated effort to improve the management of computer models across EM.
In the absence of such a strategy, EM also does not have overarching guidance promoting consistency in modeling management, development, and use across EM’s sites. EM has begun some efforts to improve the use of computer models across its various sites. For example, EM, in fiscal year 2010, began developing a set of state-of-the-art computer models to support soil and groundwater cleanup across the nuclear weapons complex. According to EM officials and documentation they provided, this initiative, called the Advanced Simulation Capability for Environmental Management, will allow EM to provide more sophisticated analysis of soil and groundwater contamination for cleanup decisions. Although the initiative’s director told us that the goal is to encourage all sites to use these models for all of their soil and groundwater analysis, he noted that there are no plans to make using these models mandatory. Moreover, SRS has created a forum for improving consistency in groundwater computer modeling performed at the site. According to the charter document, the forum, called the Groundwater Modeling Consistency Team, was formed in 2006 following the discovery of inconsistencies in the data used in groundwater computer modeling conducted at Hanford in support of the preparation of an environmental impact statement under NEPA. The group, which is made up of DOE and contractor officials, reviews software codes, model inputs, and model assumptions to promote sitewide consistency in the management of computer models. Although these efforts may help improve EM’s use of computer models, they are largely still in early stages. In addition, according to EM officials, some of these efforts have, to date, had limited involvement of modeling officials at EM’s sites and of federal, state, and local stakeholders who are affected by decisions made using the output of computer models. 
Furthermore, they are not part of a comprehensive, coordinated effort to improve the consistency of computer models and reduce duplication across EM’s various sites. For example, we found that different models are used to perform similar functions not only between EM sites, but also within sites. At SRS, one contractor uses a set of models to perform soil and groundwater analyses when evaluating the potential effectiveness of cleanup alternatives under CERCLA and NEPA, while another contractor uses a different set of models to perform similar analyses for performance assessments under DOE’s radioactive waste management order. Each contractor has its own set of procedures for developing and using each computer model. Officials from both contractors told us that they use different models because state and federal regulators have only approved the use of certain models for specific types of cleanup decisions. Issues with consistency and duplication of effort in the use of computer models have also been noted by others. For example, a February 2010 DOE review noted that five major DOE sites use 28 different models to analyze groundwater and subsurface contamination when preparing performance assessments under DOE’s radioactive waste management order. DOE officials told us that past modeling practices have resulted in conflicting assumptions and data sets, as well as different approaches to uncertainty analyses. In addition, a September 2009 DOE technical review of the Hanford tank waste modeling system raised concerns that two models at Hanford that share data use different assumptions that could lead to inconsistencies between the two. As a result, the Hanford waste treatment system plan, which is based on the output of one of these models, may not reflect the most current information. 
In contrast, other federal agencies and DOE offices have taken steps to improve consistency and reduce duplication as part of a comprehensive, coordinated strategy to manage the use of computer models. For example, EPA organized a Center for Regulatory Environmental Modeling in 2000 as part of a centralized effort to bring consistency to model development, evaluation, and usage across the agency. The Center brings together senior managers, modelers, and scientists from across the agency to address modeling issues. Its tasks include helping the agency (1) establish and implement criteria so that model-based decisions satisfy regulatory requirements; (2) implement best management practices to use models consistently and appropriately; (3) facilitate information exchange among model developers and users so models can be continuously improved; and (4) prepare for the next generation of environmental models. According to a DOE official, EM does not have a central coordination point similar to EPA's. Within DOE, the Office of Nuclear Energy recently established an initiative—the Nuclear Energy Modeling and Simulation Energy Innovation Hub—that provides a centralized forum for nuclear energy modelers. According to the director of the Office of Nuclear Energy's Office of Advanced Modeling and Simulation, the hub will provide a more centrally coordinated effort to bring together modeling and simulation expertise to address issues associated with the next generation of nuclear reactors. Similar comprehensive, coordinated efforts are lacking within EM and, as a result, EM may be losing opportunities to improve the quality of its models, reduce duplication, keep abreast of emerging computer modeling and cleanup technologies, and share lessons learned across EM's sites. The need for specific guidance for ensuring the careful management of computer models used in decision making is not new.
As early as 1976, we reported on the government's use of computer models and found that the lack of guidance contributed to ineffective and inefficient use of computer models. We noted that guidance should define the problem to be solved, specify the assumptions and limitations of the model, and provide methods to test whether the model reasonably describes the physical system it is modeling. More recently, a 2007 National Research Council study of modeling at EPA laid out guidelines to improve environmental regulatory computer modeling. The study noted that adoption of a comprehensive strategy for evaluating and refining EPA's models could help the agency add credibility to decisions based on modeling results. It also noted several key principles to follow for model development, evaluation, and selection. Moreover, the study recommended that peer review be considered an important tool for improving model quality. According to the study, a peer review should entail not only an evaluation of the model and its output, but also a review of the model's origin and its history. The study also made recommendations on quantifying and communicating uncertainty in model results to better convey a model's limitations to stakeholders affected by decisions made using the results of computer models. EPA has taken action to develop specific guidance, issuing a guide in 2009 addressing the management, development, and use of computer modeling used in making environmental regulatory decisions. In this guidance, EPA developed a set of recommended best practices to help modelers effectively use computer models. The guidance defines the role of computer models in the public policy process, discusses appropriate ways of dealing with uncertainty, establishes criteria for peer review, and addresses quality assurance procedures for computer modeling. Even within DOE, another office outside of EM has recognized the need for specific guidance for managing computer models.
Specifically, DOE's Office of Civilian Radioactive Waste Management included in its quality assurance requirements several provisions specific to computer models. These included clearly defining the model's objective, documenting alternative models that could be used and the rationales for not using them, and discussing a model's limitations and uncertainties. In addition, the office specified in its requirements that, among other things, a computer model receive a technical review through a peer review or publication in a professional journal. Although the importance of comprehensive guidelines for managing computer models is well established, according to its officials, EM does not have such overarching guidance. As previously discussed, EM does have a manual accompanying its quality assurance order that describes acceptable methods for specifically ensuring the quality of safety software. However, the manual does not generally address models used in cleanup decisions. EM also has guidance addressing the management of computer models used in conducting performance assessments under its radioactive waste management order. Specifically, a DOE headquarters group that is charged with reviewing decisions made under this order—the Low-Level Waste Disposal Facility Federal Review Group—has developed a manual that contains guidance on, for example, ensuring that input data to computer models are described and are traceable to sources derived from, among other things, field data from the site and referenced literature that is applicable to the site. However, this guidance does not apply to computer models used to analyze the potential effectiveness of cleanup alternatives under CERCLA or NEPA or to computer models used for planning, scheduling, and budgeting purposes.
As a result, computer models developed at various DOE sites do not have consistent criteria to define the role of the model in the decision-making process, consistent ways of dealing with uncertainties and a model’s limitations, and mechanisms to ensure computer model quality, such as quality assurance assessments and peer review. EM’s computer models provide critical information that is needed to make significant decisions about how to clean up the radioactive and hazardous legacy waste across the country. However, EM’s oversight of the quality of these models and its management of the development, evaluation, and use of the models has not always been commensurate with the models’ importance. Because the decisions EM makes must protect human health and the environment for thousands of years into the future, it is critical that the models on which EM bases its decisions are of the highest quality possible. In addition, because these cleanup efforts will take decades and cost billions of dollars, it is also important that models used for planning, scheduling, and budgeting purposes provide the most accurate data possible for EM and Congress to make informed decisions on cleanup activities. EM’s failure to fully oversee its contractors’ implementation of quality assurance procedures has led to a reduced level of confidence that the models reasonably represent the conditions they are meant to simulate. In several cases, we found necessary quality assurance reviews were not conducted. In others, reviews found that quality assurance procedures were inadequately implemented. Because existing quality assurance requirements that are applied to EM’s computer models have not been adequately implemented and, in some cases, are insufficiently understood by its contractors, EM and its contractors do not have an effective mechanism to provide the public and other EM stakeholders with assurance of a model’s quality. 
To its credit, EM is beginning to undertake efforts to improve the consistency of models across the nuclear weapons complex. However, some of these efforts are still in their infancy, and it remains to be seen whether any improvements in EM's management of its models will result. We recognize that every site has its unique conditions and challenges and that a one-size-fits-all approach to modeling would not be appropriate. Nevertheless, there is room for additional consistency in model development and implementation, as well as for a mechanism for sharing lessons learned among DOE's various sites. For a number of years, other federal agencies and offices within DOE have recognized the importance of comprehensive guidance for managing computer models. Without a comprehensive strategy and modeling guidance, EM may miss opportunities to improve the quality of computer models, promote consistency, reduce duplication across DOE sites, and share lessons learned.

To help EM increase confidence in the quality of information provided to the public and its stakeholders resulting from the use of computer modeling, we recommend that the Secretary of Energy take the following three actions:

Clarify specific quality assurance requirements for computer models used in environmental cleanup decisions, including models used to analyze the potential effectiveness of cleanup alternatives, assess the performance of selected cleanup activities, and assist in planning and budgeting cleanup activities.

Ensure that the models are assessed for compliance with these requirements.

Develop a comprehensive strategy and guidance for the management of computer models to promote consistency, reduce duplication, and ensure sharing of lessons learned.

We provided a draft of this report to DOE for its review and comment. In its written comments, DOE agreed with our recommendations and stated that modeling is an important component of management analysis and decision making for the department.
DOE noted that it is committed to continuous improvement in model development and application and commented that our recommendations will strengthen its modeling efforts. DOE stated in its comments that it disagreed with the draft report's assertion that its directives and standards fall short for the development and management of computer models. DOE commented that its quality assurance directives apply directly to the development, coding, and validation of safety and nonsafety computer models used in cleanup decisions and that EM has interpreted and applied these directives and accompanying standards to develop its quality program. We agree with DOE, and our draft report noted, that DOE addresses quality through various departmental policies and industry standards. However, these directives do not provide specific guidance to EM on assuring the quality of the cleanup models themselves, guidance that other agencies and offices within DOE have developed. In particular, DOE's primary quality assurance policy—DOE Order 414.1C—addresses general standards that EM and its contractors must meet to ensure all work at its sites is carried out effectively, but it is vague on the specific steps that must be followed to ensure the quality of models used in cleanup decisions. In addition, as our draft report noted, a manual accompanying this order describes acceptable, nonmandatory methods for specifically ensuring the quality of safety software. However, the manual is less clear on the use of computer software not considered safety software, such as the software used by computer models that support DOE's cleanup decisions. Our recommendation that DOE clarify the specific quality assurance requirements for computer models used in environmental cleanup decisions is intended to address these problems.
DOE’s comments also provided additional information on the department’s oversight of computer models, initiatives it is undertaking to improve its modeling efforts, and the specific steps it plans to take to address our recommendations. DOE also provided technical comments that we incorporated in the report as appropriate. DOE’s written comments are presented in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine how the Department of Energy’s (DOE) Office of Environmental Management (EM) uses computer modeling in cleanup decisions, we focused on cleanup decisions EM has made at its Hanford Site in Washington state and Savannah River Site (SRS) in South Carolina because together these two sites account for more than one-half of EM’s annual cleanup spending and approximately 60 percent of the total estimated cost of approximately $275 billion to $329 billion to clean up the entire nuclear weapons complex. We focused our review on decisions made in two major areas that represent the largest and most significant elements of the cleanup program at these two sites. 
The first is cleanup of radioactive and hazardous waste stored in underground tanks, which DOE has determined poses the most significant environmental safety and health threat in the cleanup program. DOE estimates cleaning up tank waste at the sites will cost between $87 billion and $117 billion, making it the largest cost element of EM’s cleanup program. Second, both sites have significant contamination to soil and groundwater, which DOE estimates will cost more than $12 billion to remediate. For each site, we selected three types of decisions that were representative of major decisions made at these sites between 2002 and 2010—(1) decisions made under environmental statutes, including the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended (CERCLA)—which addresses specific environmental remediation solutions for a cleanup site—and the National Environmental Policy Act, as amended (NEPA)—under which DOE evaluates the impacts to human health and the environment of proposed cleanup strategies and possible alternatives; (2) performance assessments under DOE orders governing radioactive waste management; and (3) cleanup budgeting and planning decisions. We reviewed publicly available information from regulators and interviewed DOE officials and contractor staff to identify the most recent decisions for each of the three types of decisions selected for review at each site. We reviewed these decisions to identify the most recent decision that included the use of computer modeling. We then selected, based on input by EM officials, the main models used to support these decisions at the two sites. We visited both Hanford and SRS and spoke with both EM officials and contractor staff there to better understand the use of models in planning and cleanup decisions and DOE’s oversight of the models. We obtained demonstrations of these models, as well as information on how they were used in decision making. 
We obtained and reviewed the decision documents, as well as modeling studies, notes of meetings between DOE and its regulators to develop models, and other documentation showing how the models were used in decisions. We interviewed officials from DOE headquarters and the two sites, as well as contractor staff, to determine how the models work and how they were used in these decisions. We analyzed this information to determine how the results of computer models were used in making cleanup decisions, the importance of modeling in the selection of a cleanup strategy, and other factors that contributed to the selection of a cleanup strategy. To evaluate how EM determines the quality of the computer models used in cleanup decision making, we obtained and reviewed documentation showing the standards the models were required to meet. We gathered documentation on DOE standards, as well as policies and procedures from contractors overseeing the models. We discussed computer model and software standards with EM officials from EM’s sites, contractors at the sites, and headquarters officials. We also interviewed officials from the Defense Nuclear Facilities Safety Board, the National Research Council, the Environmental Protection Agency, and the Washington state Department of Ecology about existing standards for the use and implementation of computer modeling and its associated software. We analyzed EM policies and contractor procedures to determine what quality assurance standards exist to address the quality of computer models. We also requested from EM and its contractors all assessments that were conducted on computer models used in the decisions we were reviewing, indicating whether quality standards were met. In general, the assessments we reviewed were largely conducted by the contractors, regulators, or external sources, such as consultants. 
These reviews ranged from contractor-performed assessments of the implementation of quality standards for software, to federal and state regulator comments on the modeling output used to develop alternatives in a regulatory package, to an outside consultant-performed review on the appropriateness of modeling for selecting a preferred alternative from an environmental impact statement prepared under NEPA. We analyzed these assessments to understand the level of oversight EM provided to assure model and software quality, as well as the extent to which contractors were implementing quality procedures. To address EM’s overall strategy for managing computer models that are used in cleanup decisions, we interviewed DOE officials from headquarters and from each site. We also interviewed officials from the Environmental Protection Agency, National Research Council, DOE’s Office of Nuclear Energy, and DOE’s Office of Civilian Radioactive Waste Management about the implementation of computer modeling guidance and modeling coordination strategies. We reviewed modeling guidance from these organizations, as well as from the Office of Management and Budget. We focused our review on model quality assurance standards and the use of models in decision making, not on the quality of the models themselves or of their output. We conducted this performance audit from October 2009 to February 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Ryan T. Coles, Assistant Director; Ivelisse Aviles; Mark Braza; Dan Feehan; Nancy Kintner-Meyer; Jonathan Kucskar; Mehrzad Nadji; Kathryn Pedalino; Thomas C. 
Perry; and Benjamin Shouse made key contributions to this report.

Nuclear Waste: Actions Needed to Address Persistent Concerns with Efforts to Close Underground Radioactive Waste Tanks at DOE's Savannah River Site. GAO-10-816. Washington, D.C.: September 14, 2010.

Recovery Act: Most DOE Cleanup Projects Appear to Be Meeting Cost and Schedule Targets, but Assessing Impact of Spending Remains a Challenge. GAO-10-784. Washington, D.C.: July 29, 2010.

Department of Energy: Actions Needed to Develop High-Quality Cost Estimates for Construction and Environmental Cleanup Projects. GAO-10-199. Washington, D.C.: January 14, 2010.

Nuclear Waste: Uncertainties and Questions about Costs and Risks Persist with DOE's Tank Waste Cleanup Strategy at Hanford. GAO-09-913. Washington, D.C.: September 30, 2009.

Department of Energy: Contract and Project Management Concerns at the National Nuclear Security Administration and Office of Environmental Management. GAO-09-406T. Washington, D.C.: March 4, 2009.

Nuclear Waste: DOE Lacks Critical Information Needed to Assess Its Tank Management Strategy at Hanford. GAO-08-793. Washington, D.C.: June 30, 2008.

Hanford Waste Treatment Plant: Department of Energy Needs to Strengthen Controls over Contractor Payments and Project Assets. GAO-07-888. Washington, D.C.: July 20, 2007.

Nuclear Waste: DOE Should Reassess Whether the Bulk Vitrification Demonstration Project at Its Hanford Site Is Still Needed to Treat Radioactive Waste. GAO-07-762. Washington, D.C.: June 12, 2007.

Hanford Waste Treatment Plant: Contractor and DOE Management Problems Have Led to Higher Costs, Construction Delays, and Safety Concerns. GAO-06-602T. Washington, D.C.: April 6, 2006.

Nuclear Waste: Absence of Key Management Reforms on Hanford's Cleanup Project Adds to Challenges of Achieving Cost and Schedule Goals. GAO-04-611. Washington, D.C.: June 9, 2004.

Nuclear Waste: Challenges to Achieving Potential Savings in DOE's High-Level Waste Cleanup Program. GAO-03-593.
Washington, D.C.: June 17, 2003. Nuclear Waste: Department of Energy's Hanford Tank Waste Project—Schedule, Cost, and Management Issues. GAO-RCED-99-13. Washington, D.C.: October 8, 1998.

The Department of Energy's (DOE) Office of Environmental Management (EM) is responsible for one of the world's largest cleanup programs: treatment and disposal of radioactive and hazardous waste created as a by-product of nuclear weapons production and energy research at sites across the country, such as EM's Hanford Site in Washington State and the Savannah River Site (SRS) in South Carolina. Computer models—which represent physical and biogeochemical processes as mathematical formulas—are one tool EM uses in the cleanups. GAO was asked to (1) describe how EM uses computer models in cleanup decisions; (2) evaluate how EM ensures the quality of its computer models; and (3) assess EM's overall strategy for managing its computer models. GAO analyzed the use of selected models in decisions at Hanford and SRS, reviewed numerous quality assurance documents, and interviewed DOE officials as well as contractors and regulators. EM uses computer models to support key cleanup decisions. Because the results of these decisions can cost billions of dollars to implement and take decades to complete, it is crucial that the models are of the highest quality. Computer models provide critical information to EM's cleanup decision-making process, specifically to: (1) Analyze the potential effectiveness of cleanup alternatives. For example, computer models at SRS simulate the movement of contaminants through soil and groundwater and provide information used to predict the effectiveness of various cleanup strategies in reducing radioactive and hazardous material contamination. (2) Assess the likely performance of selected cleanup activities.
After a particular cleanup strategy is selected, EM uses computer modeling to demonstrate that the selected strategy will be designed, constructed, and operated in a manner that protects workers, the public, and the environment. (3) Assist in planning and budgeting cleanups. EM also uses computer models to support lifecycle planning, scheduling, and budgeting for its cleanup activities. For example, a Hanford computer model simulates the retrieval and treatment of radioactive waste held in underground tanks and provides information used to project costs and schedules. EM uses general departmental policies and industry standards for ensuring quality, but they are not specific to computer models used in cleanup decisions. EM has not regularly performed periodic quality assurance assessments, as required by DOE policy, to oversee contractors' development and use of cleanup models and the models' associated software. In its review of eight cleanup decisions at Hanford and SRS that used computer modeling as a critical source of information, GAO found that EM conducted required assessments of the quality of computer models in only three cases. In addition, citing flaws in a model EM uses to analyze soil and groundwater contamination, regulators from Washington state have told EM that they will no longer accept the use of this model for chemical exposure analysis at Hanford. EM does not have an overall strategy for managing its computer models. EM has recently begun some efforts to promote consistency in the use of models. For example, it is developing a set of state-of-the-art computer models to support soil and groundwater cleanup decisions across its sites. However, these efforts are still in early stages and are not part of a comprehensive, coordinated effort.
Furthermore, although other federal agencies and DOE offices have recognized the importance of comprehensive guidance on the appropriate procedures for managing computer models, EM does not have such overarching guidance. As a result, EM may miss opportunities to improve the quality of computer models, reduce duplication between DOE sites, and share lessons learned across the nuclear weapons complex. GAO recommends that DOE (1) clarify specific quality assurance requirements for computer models used in environmental cleanup decision making; (2) ensure that the models are assessed for compliance with these requirements; and (3) develop a comprehensive strategy and guidance for managing its models. DOE agreed with GAO's recommendations.
DOD is not receiving expected returns on its large investment in weapon systems. The total acquisition cost of DOD’s 2007 portfolio of major programs under development or in production has grown by nearly $300 billion over initial estimates. While DOD is committing substantially more investment dollars to develop and procure new weapon systems, our analysis shows that the 2007 portfolio is experiencing greater cost growth and schedule delays than the fiscal years 2000 and 2005 portfolios (see table 1). Total acquisition costs for programs in DOD’s fiscal year 2007 portfolio have increased 26 percent from first estimates—compared to a 6-percent increase for programs in its fiscal year 2000 portfolio. Total RDT&E costs for programs in 2007 have increased by 40 percent from first estimates, compared to 27 percent for programs in 2000. The story is no better when expressed in unit costs. Schedule delays also continue to impact programs. On average, the current portfolio of programs has experienced a 21-month delay in delivering initial operational capability to the warfighter, and 14 percent are more than 4 years late. Continued cost growth results in less funding being available for other DOD priorities and programs, while continued failure to deliver weapon systems on time delays providing critical capabilities to the warfighter. Put simply, cost growth reduces DOD’s buying power. As program costs increase, DOD must request more funding to cover the overruns, make trade-offs with existing programs, delay the start of new programs, or take funds from other accounts. Delays in providing capabilities to the warfighter result in the need to operate costly legacy systems longer than expected, find alternatives to fill capability gaps, or go without the capability. The warfighter’s urgent need for the new weapon system is often cited when the case is first made for developing and producing the system. 
However, DOD has already missed fielding dates for many programs, and many others are behind schedule. Over the past several years, our work has highlighted a number of underlying systemic causes for cost growth and schedule delays, both at the strategic and at the program level. At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems—which together define DOD's overall weapon system investment strategy—are fragmented and broken. At the program level, the military services propose and DOD approves programs without adequate knowledge about requirements and the resources needed to successfully execute the program within cost, schedule, and performance targets. DOD largely continues to define warfighting needs and make investment decisions on a service-by-service basis, and to assess these requirements and their funding implications under separate decision-making processes. While DOD's requirements process provides a framework for reviewing and validating needs, it does not adequately prioritize those needs and is not agile enough to meet changing warfighter demands. Ultimately, the process produces more demand for new programs than available resources can support. This imbalance promotes an unhealthy competition for funds that encourages programs to pursue overly ambitious capabilities, develop unrealistically low cost estimates and optimistic schedules, and suppress bad news. Similarly, DOD's funding process does not produce an accurate picture of the department's future resource needs for individual programs—in large part because it allows programs to go forward with unreliable cost estimates and lengthy development cycles—not a sound basis for allocating resources and ensuring program stability. Invariably, DOD and the Congress end up continually shifting funds to and from programs—undermining well-performing programs to pay for poorly performing ones.
At the program level, the key cause of poor outcomes is the consistent lack of disciplined analysis that would provide an understanding of what it would take to field a weapon system before system development. Our body of work in best practices has found that an executable business case is one that provides demonstrated evidence that (1) the identified needs are real and necessary and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources—including technologies, funding, time, and management capacity. Although DOD has taken steps to revise its acquisition policies and guidance to reflect the benefits of a knowledge-based approach, we have found no evidence of widespread adoption of such an approach in the department. Our most recent assessment of major weapon systems found that the vast majority of programs began development with unexecutable business cases, and did not attain, or plan to achieve, adequate levels of knowledge before reaching design review and production start—the two key junctures in the process following development start (see figure 2). Knowledge gaps are largely the result of a lack of disciplined systems engineering analysis prior to beginning system development. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. Early systems engineering provides knowledge that enables a developer to identify and resolve gaps before product development begins. Because the government often does not perform the proper up-front analysis to determine whether its needs can be met, significant contract cost increases can occur as the scope of the requirements change or become better understood by the government and contractor.
Not only does DOD typically fail to conduct disciplined systems engineering prior to beginning system development, but it has also allowed new requirements to be added well into the acquisition cycle. The acquisition environment encourages launching ambitious product developments that embody more technical unknowns and less knowledge about the performance and production risks they entail. A new weapon system is not likely to be approved unless it promises the best capability and appears affordable within forecasted available funding levels. We have recently reported on the negative impact that poor systems engineering practices have had on several programs, such as the Global Hawk Unmanned Aircraft System, the F-22A, the Expeditionary Fighting Vehicle, and the Joint Air-to-Surface Standoff Missile, among others. With high levels of uncertainty about technologies, design, and requirements, program cost estimates and related funding needs are often understated, effectively setting programs up for failure. We recently compared the service and independent cost estimates for 20 major weapon system programs and found that the independent estimate was higher in nearly every case, but the difference between the estimates was typically not significant. We also found that both estimates were too low in most cases and that the knowledge needed to develop realistic cost estimates was often lacking. For example, program Cost Analysis Requirements Description documents—used to build the program cost estimate—are not typically based on demonstrated knowledge and therefore provide a shaky foundation for estimating costs. Cost estimates have proven to be off by billions of dollars in some of the programs we reviewed. For example, the initial Cost Analysis Improvement Group estimate for the Expeditionary Fighting Vehicle program was about $1.4 billion, compared to a service estimate of about $1.1 billion, but development costs for the system are now expected to be close to $3.6 billion.
Estimates this far off the mark do not provide the necessary foundation for sufficient funding commitments and realistic long-term planning. When DOD consistently allows unsound, unexecutable programs to pass through the requirements, funding, and acquisition processes, accountability suffers. Program managers cannot be held accountable when the programs they are handed already have a low probability of success. In addition, they are not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, and have little authority over staffing. At the same time, program managers frequently change during a program’s development. Limiting the length of development cycles would make it easier to accurately estimate costs, predict future funding needs, effectively allocate resources, and hold decision makers accountable. We have consistently emphasized the need for DOD’s weapon programs to establish shorter development cycles. DOD’s conventional acquisition process often requires as many as 10 or 15 years to get from program start to production. Such lengthy cycle times promote program instability—especially when considering DOD’s tendency to change requirements, funding, and leadership. Constraining cycle times to 5 or 6 years would force programs to conduct more detailed systems engineering analyses and lend itself to fully funding programs to completion, thereby increasing the likelihood that their requirements can be met within established time frames and available resources. An assessment of DOD’s acquisition system commissioned by the Deputy Secretary of Defense in 2006 similarly found that programs should be time-constrained to reduce pressure on investment accounts and increase funding stability for all programs. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way.
This involves (1) maintaining the right mix of programs to invest in by making better decisions as to which programs should be pursued given existing and expected funding and, more importantly, deciding which programs should not be pursued; (2) ensuring that programs that are started can be executed by matching requirements with resources and locking in those requirements; and (3) making it clear that programs will then be executed based on knowledge and holding program managers responsible for that execution. We have made similar recommendations in past GAO reports, but DOD has disagreed with some and not fully implemented others. These changes will not be easy to make. They will require DOD to reexamine not only its acquisition process, but its requirement-setting and funding processes as well. They will also require DOD to change how it views program success and what is necessary to achieve success. This includes changing the environment and incentives that lead DOD and the military services to overpromise on capability and underestimate costs in order to sell new programs and capture the funding needed to start and sustain them. Finally, none of this will be achieved without a true partnership among the department, the military services, the Congress, and the defense industry. All of us must embrace the idea of change and work diligently to implement it. The first, and most important, step toward improving acquisition outcomes is implementing a new DOD-wide investment strategy for weapon systems. We have reported that DOD should develop an overarching strategy and decision-making processes that prioritize programs based on a balanced match between customer needs and available department resources—that is, the dollars, technologies, time, and people needed to achieve these capabilities.
We also recommended that capabilities not designated as a priority should be set out separately as desirable but not funded unless resources were both available and sustainable. This means that the decision makers responsible for weapon system requirements, funding, and acquisition execution must establish an investment strategy in concert. DOD’s Under Secretary of Defense for Acquisition, Technology and Logistics—DOD’s corporate leader for acquisition—should develop this strategy in concert with other senior leaders: for example, combatant commanders, who would provide input on user needs; DOD’s comptroller and science and technology leaders, who would provide input on available resources; and acquisition executives from the military services, who could propose solutions. Finally, once priority decisions are made, Congress will need to enforce discipline through its legislative and oversight mechanisms. Once DOD has prioritized capabilities, it should work vigorously to make sure each new program can be executed before the acquisition begins. More specifically, this means assuring that requirements for specific weapon systems are clearly defined and achievable given available resources and that all alternatives have been considered. System requirements should be agreed to by service acquisition executives as well as combatant commanders. Once programs begin, requirements should not change without assessing their potential disruption to the program and assuring that they can be accommodated within time and funding constraints. In addition, DOD should prove that technologies can work as intended before including them in acquisition programs. More ambitious technology development efforts should be assigned to the science and technology community until they are ready to be added to future generations of the product. DOD should also require the use of independent cost estimates as a basis for budgeting funds.
Our work over the past 10 years has consistently shown that when these basic steps are taken, programs are better positioned to be executed within cost and schedule. To keep programs executable, DOD should demand that all milestone decisions be based on quantifiable data and demonstrated knowledge. These data should cover critical program facets such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Development should not be allowed to proceed until certain knowledge thresholds are met—for example, a high percentage of engineering drawings completed at critical design review. DOD’s current policies encourage these sorts of metrics to be used as a basis for decision making, but they do not demand it. DOD should also place boundaries on the time allowed for system development. To further ensure that programs can be executed, DOD should pursue an evolutionary path toward meeting user needs rather than attempting to satisfy all needs in a single step. This approach has been consistently used by successful commercial companies we have visited over the past decade because it provides program managers with more achievable requirements, which, in turn, facilitate shorter cycle times. With shorter cycle times, the companies we have studied have also been able to ensure that program managers and senior leaders stay with a program throughout its duration. DOD has policies that encourage evolutionary development, but programs often favor pursuing more revolutionary, exotic solutions that will attract funds and support. The department and, more importantly, the military services tend to view success as capturing the funding needed to start and sustain a development program. In order to do this, they must overpromise capability and underestimate cost. In order for DOD to move forward, this view of success must change.
World-class commercial firms define success as developing products within cost estimates and delivering them on time in order to survive in the marketplace. This forces incremental, knowledge-based product development programs that improve capability as new technologies are matured. To strengthen accountability, DOD must also clearly delineate responsibilities among those who have a role in deciding what to buy as well as those who have a role in executing, revising, and terminating programs. Within this context, rewards and incentives must be altered so that success can be viewed as delivering needed capability at the right price and the right time, rather than attracting and retaining support for numerous new and ongoing programs. To enable accountability to be exercised at the program level once a program begins, DOD will need to (1) match program manager tenure with development or the delivery of a product; (2) tailor career paths and performance management systems to incentivize longer tenures; (3) strengthen training and career paths as needed to ensure program managers have the right qualifications to manage the programs they are assigned; (4) empower program managers to execute their programs, including an examination of whether and how much additional authority can be provided over funding, staffing, and approving requirements proposed after the start of a program; and (5) develop and provide automated tools to enhance management and oversight as well as to reduce the time required to prepare status information. DOD should also hold contractors accountable for results. As we have recommended, this means structuring contracts so that incentives actually motivate contractors to achieve desired acquisition outcomes and withholding fees when those goals are not met.
Recognizing the need for more discipline and accountability in the acquisition process, Congress recently enacted legislation that, if followed, could result in a better chance to spend resources wisely. Likewise, DOD has recently begun to develop several initiatives, based in part on congressional direction and GAO recommendations, that, if implemented properly, could also provide a foundation for establishing a well-balanced investment strategy and sound, knowledge-based business cases for individual acquisition programs. Congress has enacted legislation that requires DOD to take certain actions that, if followed, could instill more discipline into the front end of the acquisition process, when key knowledge is gained, and ultimately improve acquisition outcomes. For example, legislation enacted in 2006 and 2008 requires decision makers to certify that specific levels of knowledge have been demonstrated at key decision points early in the acquisition process before programs can receive milestone approval for the technology development phase or the system development phase, respectively. The 2006 legislation also requires programs to track unit cost growth against their original baseline estimates—and not only their most recent estimates—and requires an additional assessment of the program if certain cost growth thresholds are reached. Other key legislation requires DOD to report on the department’s strategies for balancing the allocation of funds and other resources among major defense acquisition programs, and to identify strategies for enhancing the role of program managers in carrying out acquisition programs. DOD has also initiated actions aimed at improving investment decisions and weapon system acquisition outcomes, based in part on congressional direction and GAO recommendations.
Each of the initiatives is designed to enable more informed decisions by key department leaders well ahead of a program’s start, decisions that provide a closer match between each program’s requirements and the department’s resources. For example: DOD is experimenting with a new concept decision review, different acquisition approaches according to expected fielding times, and panels to review weapon system configuration changes that could adversely affect program cost and schedule. DOD is also testing portfolio management approaches in selected capability areas to facilitate more strategic choices about how to allocate resources across programs and also testing the use of capital budgeting as a potential means to stabilize program funding. In September 2007, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics issued a policy memorandum to ensure weapons acquisition programs were able to demonstrate key knowledge elements that could inform future development and budget decisions. This policy directed pending and future programs to include acquisition strategies and funding that provide for contractors to develop technically mature prototypes prior to initiating system development, with the hope of reducing technical risk, validating designs and cost estimates, evaluating manufacturing processes, and refining requirements. DOD also plans to implement new practices that reflect past GAO recommendations intended to provide program managers more incentives, support, and stability. The department acknowledges that any actions taken to improve accountability must be based on a foundation whereby program managers can launch and manage programs toward greater performance, rather than focusing on maintaining support and funding for individual programs. DOD acquisition leaders have told us that any improvements to program managers’ performance hinge on the success of these departmental initiatives. 
In addition, DOD has taken actions to strengthen the link between award and incentive fees and desired program outcomes, which has the potential to increase the accountability of DOD programs for fees paid and of contractors for results achieved. If adopted and implemented properly, these actions could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs and the means for executing those programs within established cost, schedule, and performance goals. DOD understands what it needs to do at the strategic and at the program level to improve acquisition outcomes. The strategic vision of the current Under Secretary of Defense for Acquisition, Technology and Logistics acknowledges the need to create a high-performing, boundary-less organization—one that seeks out new ideas and new ways of doing business and is prepared to question requirements and traditional processes. Past efforts have had similar goals, yet we continue to find all too often that DOD’s investment decisions are too service- and program-centric and that the military services overpromise capabilities and underestimate costs to capture the funding needed to start and sustain development programs. This acquisition environment has been characterized in many different ways. For example, some have described it as a “conspiracy of hope,” in which industry is encouraged to propose unrealistic cost estimates, optimistic performance, and understated technical risks during the proposal process and DOD is encouraged to accept these proposals as the foundation for new programs. Either way, it is clear that DOD’s implied definition of success is to attract funds for new programs and to keep funds for ongoing programs, no matter what the impact. DOD and the military services cannot continue to view success through this prism.
More legislation can be enacted and policies can be written, but until DOD begins making better choices that reflect joint capability needs and match requirements with resources, the acquisition environment will continue to produce poor outcomes. It should not be necessary to take extraordinary steps to ensure needed capabilities are delivered to the warfighter on time and within costs. Executable programs should be the natural outgrowth of a disciplined, knowledge-based process. While DOD’s current policy supports a knowledge-based, evolutionary approach to acquiring new weapons, in practice decisions made on individual programs often sacrifice knowledge and realism in favor of revolutionary solutions. Meaningful and lasting reform will not be achieved until DOD changes the acquisition environment and the incentives that drive the behavior of DOD decision makers, the military services, program managers, and the defense industry. Finally, no real reform can be achieved without a true partnership among all these players and the Congress.

Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have at this time.

For further questions about this statement, please contact Michael J. Sullivan at (202) 512-4841. Individuals making key contributions to this statement include Ron Schwenn, Assistant Director; Kenneth E. Patton; and Alyssa B. Weir.

Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008.
Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008.
Defense Acquisitions: Results of Annual Assessment of DOD Weapon Programs. GAO-08-674T. Washington, D.C.: April 29, 2008.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.
Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs. GAO-07-1134SP. Washington, D.C.: July 2007.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.
Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005.
Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Defense Acquisitions: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since 1990, GAO has designated the Department of Defense's (DOD) management of major weapon system acquisitions a high-risk area. DOD has taken some action to improve acquisition outcomes, but its weapon programs continue to take longer, cost more, and deliver fewer capabilities than originally planned. These persistent problems—coupled with current operational demands—have impelled DOD to work outside of its traditional acquisition process to acquire equipment that meets urgent warfighter needs. Poor outcomes in DOD's weapon system programs reverberate across the entire federal government. Over the next 5 years, DOD expects to invest more than $357 billion in the development and procurement of major defense acquisition programs. Every dollar wasted on acquiring weapon systems is less money available for other priorities.
This testimony describes DOD's current weapon system investment portfolio, the problems that contribute to cost and schedule increases, potential solutions based on past GAO recommendations, and recent legislative initiatives and DOD actions aimed at improving outcomes. It also provides some observations about what is needed for DOD to achieve lasting reform. The testimony is drawn from GAO's body of work on DOD's acquisition, requirements, and funding processes, as well as its most recent annual assessment of selected DOD weapon programs. DOD is not receiving expected returns on its large investment in weapon systems. Since fiscal year 2000, DOD has significantly increased the number of major defense acquisition programs and its overall investment in them. During this same time period, the performance of the DOD portfolio has gotten worse. The total acquisition cost of DOD's 2007 portfolio of major programs under development or in production has grown by nearly $300 billion over initial estimates. Current programs are also experiencing, on average, a 21-month delay in delivering initial capabilities to the warfighter—often forcing DOD to spend additional funds on maintaining legacy systems. Systemic problems both at the strategic and at the program level underlie cost growth and schedule delays. At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems—which together define DOD's overall weapon system investment strategy—are fragmented and broken. At the program level, weapon system programs are initiated without sufficient knowledge about system requirements, technology, and design maturity. Lacking such knowledge, managers rely on assumptions that are consistently too optimistic, exposing programs to significant and unnecessary risks and ultimately to cost growth and schedule delays.
Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves making tough decisions as to which programs should be pursued and, more importantly, not pursued; making sure programs can be executed; locking in requirements before programs are ever started; and making it clear who is responsible for what and holding people accountable when responsibilities are not fulfilled. Recent congressionally mandated changes to the DOD acquisition system, as well as initiatives being pursued by the department, include positive steps that, if implemented properly, could provide a foundation for establishing a well-balanced investment strategy, sound business cases for major weapon system acquisition programs, and a better chance to spend resources wisely. At the same time, DOD must begin making better choices that reflect joint capability needs and match requirements with resources. DOD investment decisions cannot continue to be dictated by the military services, which propose programs that overpromise capabilities and underestimate costs to capture the funding needed to start and sustain development programs. To better ensure warfighter capabilities are delivered when needed and as promised, incentives must encourage a disciplined, knowledge-based approach, and a true partnership with shared goals must be developed among the department, the military services, the Congress, and the defense industry.
This background section discusses (1) NNSA’s methods of accounting for and tracking costs, (2) the legislative requirement for NNSA to develop a plan to improve and integrate financial management, and (3) leading practices for strategic planning. NNSA contractors use different methods to account for costs, according to NNSA officials. In general, federal Cost Accounting Standards govern how federal contractors, including NNSA’s contractors, account for costs. Federal Cost Accounting Standards provide direction for the consistent and equitable distribution of contractors’ costs to help federal agencies more accurately determine the actual costs of their contracts, projects, and programs. In particular, these standards establish requirements for the measurement, assignment, and allocation of costs to government contracts and provide criteria for the classification and allocation of indirect costs. To allocate costs to programs, contractors are to classify costs as either direct or indirect. Direct costs are assigned to the benefitting program or programs. Indirect costs—those costs that cannot be assigned to a particular program, such as costs for administration and site support—are to be accumulated, or grouped, into indirect cost pools. The contractor is to estimate the amount of indirect costs (accumulated into indirect cost pools) that will need to be distributed to each program and adjust the costs to actual costs by the end of the fiscal year. The contractor then is to distribute these costs based on a rate in accordance with the cost allocation model. The final program cost is the sum of the total direct costs plus the indirect costs distributed to the program. In implementing this allocation process, federal Cost Accounting Standards provide contractors with flexibility regarding the extent to which they identify incurred costs directly with a specific program and how they collect similar costs into indirect cost pools and allocate them among programs.
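The allocation mechanics described above (direct costs charged to the benefitting program; indirect costs accumulated into pools and then distributed by a rate) can be sketched in a few lines of Python. This is a hypothetical illustration only: the program names, pool names, dollar amounts, and the choice of direct costs as the allocation base are invented for the example, not actual NNSA or contractor data.

```python
# Hypothetical sketch of direct/indirect cost allocation.
# All figures and names are illustrative, not NNSA data.

direct_costs = {"Program A": 500_000, "Program B": 300_000}

# Indirect costs accumulated into pools (e.g., administration, site support).
indirect_pools = {"administration": 120_000, "site_support": 80_000}

# Distribute each pool in proportion to direct costs — one possible
# allocation base; actual contractor cost allocation models vary.
total_direct = sum(direct_costs.values())

def allocate(pool_total):
    """Split one indirect cost pool across programs by direct-cost share."""
    return {p: pool_total * d / total_direct for p, d in direct_costs.items()}

# Final program cost = direct costs + distributed indirect costs.
program_costs = {}
for program, direct in direct_costs.items():
    indirect_share = sum(allocate(t)[program] for t in indirect_pools.values())
    program_costs[program] = direct + indirect_share

print(program_costs)  # {'Program A': 625000.0, 'Program B': 375000.0}
```

Because each contractor chooses its own pools and allocation bases within the flexibility the standards allow, the same $200,000 of indirect cost could be distributed quite differently under another contractor's model — which is one reason cross-site cost comparisons are difficult.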
Therefore, similar costs may be allocated differently because contractors’ cost allocation models differ. Specifically, cost models may differ in how they (1) classify costs as either direct or indirect, (2) accumulate these costs into indirect cost pools, and (3) distribute indirect costs to benefitting programs. Examples follow:

Classification. Contractors may differ in how they classify costs as direct or indirect. For example, electricity and other utility costs are usually classified as indirect because they are not associated with a single program; however, electricity costs could be charged directly if, for example, a contractor installs a meter to track the electricity consumption in a building used solely by one program.

Accumulation. Contractors may differ in how they accumulate indirect costs into indirect cost pools. The number and type of cost pools used to accumulate indirect costs may vary.

Distribution. Management and Operating (M&O) contractors may differ in how they distribute indirect costs accumulated into indirect cost pools to programs.

Because similar indirect costs can be allocated differently by different contractors and contractors may change the way they allocate indirect costs over time, it is difficult to compare contractor costs among sites. NNSA contractors also use different methods to track costs, according to NNSA officials. Specifically, NNSA contractors use different work breakdown structures (WBS) for tracking costs. A WBS is a method of deconstructing a program’s end product into successive levels of detail, with smaller, more specific elements, until the work is subdivided to a level suitable for management control. Within WBSs, cost elements capture the discrete costs of a particular activity of work, such as labor, material, and fringe benefits. The use of different methods to track costs makes it difficult for NNSA and others to understand or compare costs for comparable activities across programs, contractors, and sites.
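The WBS concept can be illustrated with a minimal sketch (the structure, element names, and amounts below are hypothetical): leaf-level cost elements such as labor, material, and fringe benefits are captured at the lowest level and rolled up through successive WBS levels. Two contractors that subdivide the same work differently would produce different trees, which is what makes activity-level comparison across sites hard.

```python
# Hypothetical WBS modeled as a nested dict; leaves hold cost elements.
# Names and dollar amounts are invented for illustration.
wbs = {
    "1 End Product": {
        "1.1 Design": {"labor": 40_000, "material": 5_000, "fringe": 12_000},
        "1.2 Fabrication": {"labor": 90_000, "material": 60_000, "fringe": 27_000},
    }
}

def rollup(node):
    """Sum leaf cost elements up through each WBS level."""
    if all(isinstance(v, (int, float)) for v in node.values()):
        return sum(node.values())  # leaf: sum the cost elements
    return sum(rollup(child) for child in node.values())

print(rollup(wbs))  # 234000 — total of all cost elements
```

A different contractor might split "1.2 Fabrication" into three sub-elements or book fringe benefits in an indirect pool instead of a WBS leaf; either choice changes what a given WBS element's cost means, even when the total is identical.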
For example, in 2011 we concluded that the cost savings that NNSA anticipated from the consolidation of the M&O contracts for two of its production sites were uncertain, in part because historical cost data were not readily available for NNSA to use in its cost analysis. More specifically, we found that a key step in NNSA’s process for estimating savings—developing a comparative baseline of historical site costs—is a difficult and inexact process because DOE and NNSA contractors use different methods for tracking costs, and DOE’s cost data are of limited use in comparing sites. To obtain more consistent information on costs, some program offices have developed customized contractor cost reporting requirements and designed various systems to collect the cost information needed to manage their programs. For example, NNSA’s Office of Defense Programs began developing a data system in 2007—the Enterprise Portfolio Analysis Tool (EPAT)—to provide a consistent framework for managing the planning, programming, budgeting, and evaluation processes within Defense Programs. EPAT has evolved to incorporate a common WBS to allow managers to compare the budget estimates for analogous activities across the nuclear security enterprise regardless of which contractor or program is conducting them. However, NNSA officials told us that neither EPAT nor other customized, program-specific cost collection systems satisfy the section 3128 requirements for establishing an NNSA-wide approach to collecting cost information. According to NNSA officials, EPAT is not suitable because, among other reasons, it is not designed to reconcile with DOE’s official accounting system. Section 3128 of the National Defense Authorization Act for Fiscal Year 2014 requires NNSA to develop a plan for improving and integrating financial management of the nuclear security enterprise.
The Joint Explanatory Statement accompanying the act states that NNSA is to develop a plan for a common cost structure for activities at different sites with the purpose of comparing how efficiently different sites within the NNSA complex are carrying out similar activities. According to the act, matters to be included in the plan are (1) an assessment of the feasibility of the plan, (2) the estimated costs of carrying out the plan, (3) an assessment of the expected results of the plan, and (4) a timeline for implementation of the plan. In April 2014, to address the requirements of section 3128, NNSA formed a Lean Six Sigma team of 20 federal and contractor staff. In December 2014, the team produced a report that summarized the results of the team’s effort and included a number of recommendations to NNSA. According to the report, the team’s work also addressed separate but related requirements contained in a different section of the National Defense Authorization Act for Fiscal Year 2014. Specifically, section 3112 requires the NNSA Administrator to establish a Director for Cost Estimation and Program Evaluation to serve as the principal advisor for cost estimation and program evaluation activities, including development of a cost data collection and reporting system for designated NNSA programs and projects. Therefore, according to the December 2014 report, the team focused on both the requirements of section 3128 and the development of a cost data collection and reporting system required by section 3112. We have previously reported that, in developing new initiatives, agencies can benefit from following leading practices for strategic planning. Congress enacted the GPRA Modernization Act of 2010 (GPRAMA) to improve the efficiency and accountability of federal programs and, among other things, to update the requirement that federal agencies develop long-term strategic plans that include agencywide goals and strategies for achieving those goals.
The Office of Management and Budget (OMB) has provided guidance in Circular A-11 to agencies on how to prepare these plans in accordance with GPRAMA requirements. We have reported in the past that, taken together, the strategic planning elements established under the Government Performance and Results Act of 1993 (GPRA), as updated by GPRAMA, and associated OMB guidance, along with practices we have identified, provide a framework of leading practices that can be used for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives.

In February 2016, more than 13 months after the statutory reporting deadline, NNSA produced a plan with the stated purpose of integrating and improving the financial management of the nuclear security enterprise. NNSA’s plan includes the four elements required under section 3128—a feasibility assessment, estimated costs, expected results, and an implementation timeline—but contains few details related to each of these elements.

Feasibility assessment. NNSA’s plan includes a section entitled “Feasibility,” which lists concerns regarding the feasibility of implementing the plan. The concerns listed are (1) the availability of resources, (2) the identification and implementation of an information technology solution, (3) the alignment of contractor systems and cost models with a new standardized reporting framework, and (4) that the use of the enterprise-wide approach may come at the expense of specific ad hoc reporting requests. NNSA’s feasibility assessment does not provide any specific information regarding these concerns. In addition, it does not provide information on potential costs or benefits, which will be needed to determine if the planned investment of time and other resources will yield the desired results.

Estimated cost. The plan includes information on the estimated cost of its implementation plan.
It states that total federal and contractor implementation costs are estimated to be between $10 million and $70 million, with the largest variable in the estimate being the cost of the information technology system requirements. NNSA’s cost estimate, however, provides no details regarding how the estimate was developed, beyond stating that it is based on professional judgment and input from NNSA’s contractors. Instead, the plan states that NNSA will provide a more precise estimate as the agency determines total staffing and information requirements.

Expected results. The plan does not explicitly include a discussion of the expected results. However, the plan concludes that the collection of standard performance and cost data will improve both program and financial management through improved cost analysis, cost estimating, and program evaluation. The language in the conclusion, however, provides no details regarding the ways in which cost analysis, cost estimating, and program evaluation will be improved.

Implementation timeline. The plan states that contractors will begin reporting detailed cost data into a common NNSA system in fiscal year 2019 and includes an implementation timeline for meeting this goal (see fig. 1). Elements of the timeline include developing and implementing an enterprise-wide financial management policy, exploring the feasibility of a common WBS for NNSA, and standardizing direct and indirect cost elements. While NNSA’s plan says its timeline is “notional,” the plan provides a specific time frame of 3 to 5 years during which the core elements are expected to be completed. However, the plan does not identify which elements are considered the “core elements” or explain the reasoning behind the implementation time frame of 3 to 5 years. As of December 2016, NNSA has fully implemented one of the elements included in the timeline by creating and filling the position of Program Director of Financial Integration.
Other elements that were scheduled under NNSA’s implementation timeline to begin in fiscal year 2015 and early fiscal year 2016 were not started according to the timeline but are now underway. For example, the timeline indicates that NNSA will begin developing and implementing an enterprise-wide financial management policy during the second half of fiscal year 2015, but this effort was not initiated until October 2016. The elements listed in NNSA’s timeline correspond with the recommendations included in the December 2014 internal NNSA report. The report recommended that NNSA:

- establish a clear and consistent program management policy addressing common program management data reporting requirements for all work performed with NNSA funding;
- establish a standard WBS for all work performed within the nuclear security enterprise;
- establish a clear and consistent policy and methodology for identifying base capabilities and programs of record that is systematically applied;
- report financial data by standardized labor categories, labor hours, functional elements, and cost elements;
- enhance or develop an agency data warehouse and analytical tools;
- establish a knowledge management function; and
- appoint an “Executive Champion” to implement the recommendations and plan.

However, NNSA’s plan does not include additional details regarding each of the elements listed in its timeline or contain many of the details included in its internal agency report. Instead of using the information and recommendations from the December 2014 report as a basis for developing an actionable implementation plan, NNSA summarized portions of the report and issued the summary document as its official plan. Moreover, differences between the internal report and the published plan are not discussed or explained—potentially creating ambiguity as to NNSA’s planned approach. For example, the internal agency report recommends that NNSA establish a standard WBS for all work performed within the nuclear security enterprise.
NNSA’s plan, however, states that NNSA will explore the feasibility of a common WBS—which leaves open the option of not creating a common WBS.

NNSA’s plan does not fully incorporate leading practices, which limits its usefulness as a planning tool and limits the effectiveness of NNSA’s effort to provide meaningful financial information to Congress and other stakeholders. Reliable financial information is important for making programmatic and budgetary decisions and providing appropriate oversight. To improve the consistency of this information, as discussed previously, Congress directed NNSA to develop a financial integration plan. In developing plans for implementing new initiatives, agencies—including NNSA—can benefit from following leading practices for strategic planning. These leading practices include (1) defining the mission and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement and accountability, and (4) involving stakeholders in planning. We highlight these four practices because NNSA’s financial improvement and integration initiative is still being developed, and these practices are particularly relevant to the early stages of developing a strategic plan. NNSA’s plan, however, does not fully incorporate any of these leading practices. Table 1 shows our assessment of the extent to which NNSA used these practices in developing its plan for improving and integrating its financial management.

Mission and goals. NNSA’s plan does not explicitly include a mission statement or strategic goals; as a result, it is difficult to understand fully what NNSA’s plan is intended to do and how it will do it. More specifically, it is unclear if the sole purpose of the plan is to satisfy section 3128 requirements and the information needs of Congress or if it is also intended to satisfy the information needs of NNSA decision makers.
For example, the plan’s executive summary states that NNSA developed the plan to address specific requirements set forth in section 3128 of the National Defense Authorization Act for Fiscal Year 2014; however, information presented in the plan’s conclusions suggests that the plan may also be intended to satisfy the information needs of NNSA decision makers. In addition, while the plan concludes that the collection of standard performance and cost data will improve both program and financial management through improved cost analysis, cost estimating, and program evaluation, it does not explicitly present this as a goal either in the conclusions or earlier in the plan.

Strategies and resources needed to achieve goals. NNSA’s plan does not include strategies to address management challenges or describe the specific resources needed to meet goals. We have previously reported that when developing a strategic plan, it is particularly important for agencies to define strategies that address management challenges that threaten their ability to meet long-term strategic goals and include a description of the resources, actions, time frames, roles, and responsibilities needed to meet established goals. NNSA’s plan includes a list of the challenges NNSA will face during implementation of the plan—including challenges related to the availability of resources and identifying and implementing an information technology solution—and provides a “notional” implementation timeline with milestones for certain significant actions. However, beyond the high-level cost estimate provided, NNSA’s plan does not include a description of the specific resources needed to meet specific elements of the plan or define strategies that address these management challenges.

Leadership involvement and accountability. The CIOs for NNSA and DOE were not involved in developing the NNSA financial integration plan.
An agency’s senior leadership is key to ensuring that strategic planning becomes the basis for day-to-day operations. The NNSA CIO told us that he was aware of section 3128 but was not involved in developing the plan or determining how to identify a system to meet the section 3128 requirements. NNSA officials told us that they did not think it was necessary to get the NNSA or DOE CIOs involved with the team because the agency had yet to identify the requirements for a new information technology system. However, this assertion is inconsistent with information contained in NNSA’s December 2014 Lean Six Sigma report, which states that a sub-team was formed to determine the data system requirements for collecting and reporting costs that would satisfy sections 3128 and 3112 of the National Defense Authorization Act for Fiscal Year 2014.

Stakeholder involvement. Key stakeholders, such as program managers, were not involved in developing the mission, goals, or strategies associated with the plan. We have previously reported that it is important for agencies to involve stakeholders in developing their mission, goals, and strategies to help ensure that the highest priorities are targeted. However, NNSA did not involve key stakeholders in the development of its financial integration plan. The Lean Six Sigma team that NNSA formed to develop the plan was widely represented in terms of geographic location and included representatives from NNSA’s budget, financial management, information technology, and cost-estimating communities, but key stakeholders, such as federal program managers, were not included in the effort. NNSA officials told us that the biggest challenge NNSA will face in implementing the plan will be overcoming cultural resistance to change and the parochial interests of different program offices—particularly for program offices that have developed their own independent technology solutions for collecting the cost data they need to manage.
Yet the involvement of NNSA program management offices was limited to budget and finance staff. According to NNSA officials, program managers were invited to participate in the Lean Six Sigma team, but none volunteered. Moreover, the federal program manager from one of NNSA’s largest programs—the B-61 Life Extension Program—told us that he was not involved in the team and was only vaguely aware of the section 3128 requirement for NNSA to develop a financial integration plan. Given that program managers are a primary user of managerial cost information, obtaining their perspectives and getting their buy-in is important. NNSA also did not solicit input from congressional staff in the development of the plan. NNSA officials told us that the only time they met with congressional staff regarding section 3128 was in May 2015, to brief staff on their progress, but this was after they had finished studying the issue.

Because NNSA’s plan does not fully incorporate leading strategic planning practices, such as those included in table 1, it has not provided a useful road map for guiding NNSA’s effort. According to the NNSA official who is responsible for overseeing the execution of the plan—the Director of Financial Integration—the plan NNSA submitted to Congress was not a comprehensive or actionable plan. In addition, other NNSA officials told us that the plan they submitted to Congress was never intended to provide a road map to guide their efforts. More specifically, they said that they disagree with the premise that the plan submitted to Congress should have been a detailed, operational plan with specific milestones and extensive information about costs, schedule, and risks. Instead, according to these NNSA officials, the purpose of the plan was to identify general principles and a strategic vision for achieving financial integration.
The Director of Financial Integration, who accepted this position in January 2016 shortly before NNSA issued its plan, told us in July and in November 2016 that he was in the process of developing an actionable plan with specific goals, objectives, and milestones and acknowledged that key stakeholders, such as program managers, would need to be involved in the process. However, he did not tell us when the more detailed, actionable plan would be finalized and, on the basis of planning documents he provided us, it is unclear if the new plan will incorporate leading practices. Until an actionable plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that it has established a roadmap to effectively guide and assess the success of this initiative. Effective management and oversight of the contracts, projects, and programs that support NNSA’s mission are dependent upon the availability of reliable enterprise-wide management information and, as required, NNSA has provided Congress with a plan for improving and integrating its financial management. Although NNSA’s plan includes the elements required under section 3128, details are limited, and it appears that this plan will not provide the framework needed to guide NNSA’s efforts and ensure that Congress and other stakeholders have accurate, reliable cost information that can be compared across programs, contractors, and sites. In particular, NNSA’s plan has not provided an effective framework for guiding NNSA’s effort because it does not incorporate leading planning practices, including (1) defining the missions and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement, and (4) involving stakeholders in planning. In addition, differences between NNSA’s internal report and the published plan are not discussed or explained—potentially creating ambiguity as to NNSA’s planned approach.
NNSA officials have acknowledged that the plan they submitted to Congress is not a comprehensive or actionable plan. The Director of Financial Integration has taken steps to develop an actionable plan, but it is unclear when the plan will be finalized or the extent to which it will incorporate leading practices. Until a plan that incorporates leading practices is in place, NNSA cannot be assured that its efforts will result in a cost collection model that satisfies the information needs of Congress or improves program and financial management through improved cost analysis, cost estimating, and program evaluation. Such information would better position NNSA to address longstanding contract and project management challenges. Without proper planning, NNSA could waste valuable resources, time, and effort in its financial management improvement and integration process. To help provide a roadmap to effectively guide NNSA’s effort to integrate and improve its financial management, we recommend that the NNSA Administrator direct the Program Director of Financial Integration to develop a plan for producing cost information that fully incorporates leading practices. We provided a draft of this report to NNSA for its review and comment. NNSA provided written comments, which are reproduced in appendix I, and technical comments that were incorporated as appropriate. In its written comments, NNSA stated that it will update the plan and address the items GAO identified. Although NNSA has agreed to implement our recommendation, in its written comments, NNSA states that given the plan’s “early level of maturity,” GAO’s evaluation of the plan against leading practices resulted in a somewhat misleading conclusion. We disagree. The purpose of a plan is to provide a roadmap to guide the agency’s effort. Regardless of the plan’s maturity, incorporating leading practices for strategic planning can improve its utility.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Diane LoFaro (Assistant Director), Mike LaForge (Assistant Director), Cheryl Harris, Charles Jones, and Mark Keenan made key contributions to this report.

Effective management and oversight of contracts, projects, and programs are dependent upon the availability of reliable enterprise-wide financial management information. Such information is also needed by Congress to carry out its oversight responsibilities and make budgetary decisions. However, meaningful cost analysis of NNSA programs, including comparisons across programs, contractors, and sites, is not possible because NNSA's contractors use different methods of accounting for and tracking costs. The National Defense Authorization Act for Fiscal Year 2014 required NNSA to develop and submit to Congress a plan to improve and integrate its financial management. An explanatory statement accompanying the act included a provision for GAO to review the adequacy of NNSA's plan. This report evaluates the extent to which NNSA's plan (1) addresses the objectives of the act and (2) follows leading practices for planning. GAO reviewed NNSA's plan and compared it with legislative requirements and leading practices for planning and interviewed NNSA officials.
On February 5, 2016, more than 13 months after the statutory reporting deadline, the National Nuclear Security Administration (NNSA) submitted to Congress a plan for improving and integrating its financial management. The plan includes the four elements required by the National Defense Authorization Act for Fiscal Year 2014—a feasibility assessment, estimated costs, expected results, and an implementation timeline—but contains few details related to each of these elements. For example, NNSA's feasibility assessment includes a list of implementation concerns—including general concerns related to the availability of resources and to identifying and implementing an information technology solution—but does not provide any specific information regarding these concerns. In addition, NNSA's plan includes a cost estimate of between $10 million and $70 million but provides no details on how the estimate was developed beyond stating that it is based on professional judgment and input from NNSA's contractors. The plan also includes a “notional” implementation timeline that calls for the plan's core elements to be completed in 3 to 5 years but does not include details on which elements are considered core elements. NNSA's financial integration plan does not fully incorporate leading strategic planning practices, which limits its usefulness as a planning tool as well as the effectiveness of NNSA's effort to provide meaningful financial information to Congress and other stakeholders. As GAO has reported previously, in developing plans for implementing new initiatives, agencies can benefit from following leading practices for strategic planning. These leading practices include (1) defining the missions and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement, and (4) involving stakeholders in planning. However, NNSA's plan does not fully incorporate any of these leading practices. 
For example, beyond the high-level cost estimate provided, NNSA's plan does not include a description of the specific resources needed to meet specific elements of the plan or define strategies that address management challenges, including the implementation concerns identified in the plan's feasibility assessment. In addition, NNSA did not involve key stakeholders in developing its plan. Because NNSA's plan does not incorporate leading strategic planning practices, it has not provided a useful road map for guiding NNSA's effort. NNSA officials told GAO that the plan they submitted to Congress was never intended to provide a road map to guide their efforts. Instead, they said the purpose of the plan was to identify general principles and a strategic vision for achieving financial integration. The NNSA official responsible for overseeing the plan's execution told GAO he has begun to develop a more comprehensive and actionable plan to guide NNSA's effort. However, it is unclear when the new plan will be finalized or the extent to which it will incorporate leading practices. Until a plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that its efforts will result in a cost collection tool that produces reliable enterprise-wide information that satisfies the needs of Congress and program managers. Such information would better position NNSA to address long-standing contract and project management challenges. Without proper planning, NNSA could waste valuable resources, time, and effort on its financial integration effort. To provide a road map to guide NNSA's financial management improvement effort, GAO recommends that the NNSA Administrator direct the Program Director of Financial Integration to develop a plan for producing cost information that fully incorporates leading planning practices. NNSA agreed to update its plan and address the items GAO identified.
In part to improve the availability of information on and management of DOD’s acquisition of services, in fiscal year 2002 Congress enacted section 2330a of title 10 of the U.S. Code, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. The information to be collected includes, among other things, the services purchased, the total dollar amount of the purchase, the form of contracting action used to make the purchase, and the extent of competition provided in making the purchase. The inventory is to include a number of data elements, including:

- the functions and missions performed by the contractor;
- the contracting organization, the component of DOD administering the contract, and the organization whose requirements are being met through contractor performance of the function;
- the funding source for the contract by appropriation and operating agency;
- the fiscal year the activity first appeared on an inventory;
- the number of contractor employees (expressed as full-time equivalents (FTEs)) for direct labor, using direct labor hours and associated cost data collected from contractors;
- a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and
- a summary of the information required by section 2330a(a) of title 10 of the U.S. Code.

As implemented by DOD, components are to compile annual inventories of activities performed on their behalf by contractors and submit them to AT&L, which is to formally submit a consolidated DOD inventory to Congress no later than June 30. Since this provision was implemented DOD-wide, the primary source used by DOD components, with the exception of the Army, to compile their inventories has been FPDS-NG.
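The inventory data elements listed above can be pictured as a simple record layout. The field names and sample values below are illustrative assumptions for the sake of the sketch, not DOD's actual schema.

```python
# Hypothetical record layout for one inventory entry, mirroring the data
# elements described in 10 U.S.C. 2330a. Field names and sample values are
# illustrative only.

from dataclasses import dataclass

@dataclass
class InventoryEntry:
    functions_and_missions: str      # what the contractor performs
    contracting_organization: str    # office that awarded the contract
    administering_component: str     # DOD component administering the contract
    requiring_organization: str      # whose requirements are being met
    funding_appropriation: str       # funding source by appropriation
    operating_agency: str            # funding source by operating agency
    first_inventory_fiscal_year: int # fiscal year first on an inventory
    contractor_ftes: float           # from direct labor hours and cost data
    is_personal_services: bool       # personal services contract determination

entry = InventoryEntry(
    functions_and_missions="logistics support",
    contracting_organization="Army Contracting Command",
    administering_component="Army",
    requiring_organization="Army Materiel Command",
    funding_appropriation="Operation and Maintenance, Army",
    operating_agency="Army Materiel Command",
    first_inventory_fiscal_year=2012,
    contractor_ftes=12.5,
    is_personal_services=False,
)
print(entry.contractor_ftes)
```

Each annual component inventory would then amount to a collection of such records, consolidated by AT&L before submission to Congress.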
The Army developed its Contractor Manpower Reporting Application (CMRA) in 2005 to collect information on labor-hour expenditures by function, funding source, and mission supported on contracted efforts, and has used CMRA as the basis for its inventory. CMRA captures data directly reported by contractors on services performed at the contract line item level, including information on the direct labor dollars, direct labor hours, total invoiced dollars, the functions and mission performed, and the organizational unit on whose behalf the services are being performed. In instances where contractors are providing different services under the same order, or are providing services at multiple locations, contractors can enter additional records in CMRA to capture information associated with each type of service or location. It also allows for the identification of services provided under contracts for goods. Within 30 days after it is submitted to Congress, the inventory is to be made public. Within 90 days of the date on which the inventory is submitted to Congress, the Secretaries of the military departments and heads of the defense agencies responsible for activities in the inventory are to complete a review of the contracts and activities for which they are responsible and ensure that any personal services contracts in the inventory were properly entered into and performed appropriately; that the activities in the inventory do not include inherently governmental functions; that, to the maximum extent practicable, the activities on the list do not include any functions closely associated with inherently governmental functions; and that activities that should be considered for conversion to DOD civilian performance have been identified. In January 2011, Congress amended section 2330a(c) of title 10 of the U.S. Code to specify that P&R, AT&L, and Comptroller are responsible for issuing guidance for compiling the inventory. Section 2330a(c) was also
amended to state that DOD is to use direct labor hours and associated cost data collected from contractors as the basis for the number of contractor FTEs identified in the inventory, though it provided that DOD may use estimates where such data are not available and cannot reasonably be made available in a timely manner. Congress provided further direction on the collection of FTE information for contractor employees in the Department of Defense and Full-Year Continuing Appropriations Act, 2011, by providing not less than $2 million to both the Navy and Air Force to leverage the Army’s CMRA to document the number of full-time contractor employees, or their equivalent, in the inventory. The services and the directors of the defense agencies, in coordination with P&R, were to report to the congressional defense committees within 60 days of enactment of that act on their plans for documenting the number of full-time contractor employees or their equivalent. In December 2011, section 936 of the National Defense Authorization Act for Fiscal Year 2012 amended section 2330a of title 10 of the U.S. Code to clarify the types of contracted services to be inventoried, including contracts for goods to the extent services are a significant component of performance, as identified in a separate line item of a contract.
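Deriving contractor FTEs from direct labor hours, as described above, reduces to dividing total reported hours by a full-time work year. The CMRA-style records and the 2,080-hour work-year divisor below are assumptions for illustration, not DOD's actual figures.

```python
# Sketch of deriving contractor FTEs from contractor-reported labor hours.
# The records and the 2,080-hour work-year divisor are illustrative
# assumptions, not actual DOD or CMRA data.

HOURS_PER_FTE_YEAR = 2_080  # assumed full-time work year (40 hrs x 52 wks)

records = [  # one record per service type or location on a line item
    {"clin": "0001", "direct_labor_hours": 4_160, "direct_labor_dollars": 390_000},
    {"clin": "0001", "direct_labor_hours": 1_040, "direct_labor_dollars": 95_000},
    {"clin": "0002", "direct_labor_hours": 2_080, "direct_labor_dollars": 210_000},
]

total_hours = sum(r["direct_labor_hours"] for r in records)
contractor_ftes = total_hours / HOURS_PER_FTE_YEAR
print(contractor_ftes)  # 7,280 hours -> 3.5 FTEs
```

Where contractor-reported hours are unavailable, the statute permits estimates instead; a common fallback is deriving hours from invoiced labor dollars and assumed labor rates, which is inherently less precise.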
This section also directed the secretary of the military department or head of the defense agency responsible for activities in the inventory to develop a plan, including an enforcement mechanism and approval process, to provide for the use of the inventory to make determinations regarding the most appropriate mix of military, civilian, and contractor personnel to perform its mission; ensure that the inventory is used to inform strategic workforce planning; facilitate the use of the inventory for budgetary purposes; and provide for appropriate consideration of the conversion of certain activities, to include those closely associated with inherently governmental functions, critical functions, and acquisition workforce functions, to performance by government employees. Section 2463 of title 10 of the U.S. Code requires the Secretary of Defense to make use of the inventory of contracted services to identify certain functions performed by contractors, to include functions closely associated with inherently governmental functions, critical functions, and acquisition workforce functions, and to ensure that special consideration is given to converting those functions to civilian performance. Further, the National Defense Authorization Act for Fiscal Year 2010 provided for a new section 115b in title 10 of the U.S. Code that requires DOD to annually submit to the defense committees a strategic workforce plan to shape and improve the civilian workforce. Among other requirements, the plan is to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. P&R is responsible for developing and implementing the strategic plan in consultation with AT&L. The act also added section 235 to title 10 of the U.S.
Code, which requires that the Secretary of Defense include (in the budget justification materials submitted to Congress) information that clearly and separately identifies both the amount requested for the procurement of contract services for each DOD component, installation, or activity and the number of contractor employee full-time equivalents projected and justified for each DOD component, installation, or activity based on the inventory of contracts for services and associated reviews. Collectively, these statutory requirements mandate the use of the inventory and the associated review process to enhance the ability of DOD to identify and track the services provided by contractors, achieve accountability for the contractor sector of its total workforce, help identify functions for possible conversion from contractor performance to DOD civilian performance, support the development of DOD’s annual strategic workforce plan, and project and justify the number of contractor FTEs included in its annual budget justification materials. Figure 1 illustrates the relationship among the related statutory requirements. Over the past year and a half, DOD has taken its first steps to implement a November 2011 plan to collect contractor manpower data from contractors. These steps included directing components to start collecting direct labor hours and associated costs from contractors and initiating efforts to develop and implement a department-wide data collection system based on the Army’s CMRA to collect and store inventory data, including contractor manpower data. AT&L and P&R officials estimate that the new system will be available in fiscal year 2014, with DOD components reporting on most of their contracted services by fiscal year 2016. 
DOD, however, is still working on key decisions related to security, funding, and other technological issues and has not developed an implementation plan with specific time frames or milestones to help ensure DOD remains on track to develop its planned data collection system. For the fiscal year 2011 inventory, DOD components generally used the same compilation processes used in the previous year. As such, with the exception of the Army, which already collects contractor manpower data and other key information using its CMRA data collection system, the remaining components obtained most of their inventory information from FPDS-NG, a system that does not collect contractor FTE information and has other limitations that reduce its utility for compiling a complete and accurate inventory. DOD has taken steps to meet legislative requirements to develop a data collection system that provides management insight on contracted services and collects the required data points for each contracted service, including information on the number of contractor FTEs. In April 2011, Congress passed the Department of Defense and Full-Year Continuing Appropriations Act, 2011, which, among other things, required the secretaries of the military departments and the directors of the defense agencies, in coordination with P&R, to submit plans for documenting the number of contractor FTEs. In response, in November 2011 DOD issued a plan to collect contractor manpower data and document contractor FTEs that provided for short-term and long-term actions intended to meet the requirements of 10 U.S.C. § 2330a. DOD stated that it was committed to assisting components as they implement their plans, especially those currently without reporting processes or infrastructure in place, by leveraging the Army's CMRA system, processes, best practices, and tools to the maximum extent possible.
Part of the long-term plan is to develop a comprehensive instruction guiding components on the development, review, and use of the inventories and for the Office of the Deputy Chief Management Officer, P&R, and other stakeholders to form a working group to develop and implement a common data system to collect and house the information required for the inventory, including contractor manpower data. DOD noted in its plan that it expects the data system to be operational and DOD components to be reporting on most of their service contracts by fiscal year 2016. Over the past year and a half, DOD took a number of actions to implement its November 2011 plan. In February 2012, DOD published a Federal Register notice, as required by the Paperwork Reduction Act, seeking public comment on its proposal to allow DOD components to collect certain key information directly from contractors, including the number of direct labor hours associated with the provision of each service. The Office of Management and Budget approved DOD's request in May 2012. In November 2012, the Under Secretaries for P&R and AT&L issued a joint memorandum that instructed components to ensure all actions to procure contracted services, including contracts for goods with defined requirements for services, include a requirement for the contractor to report all contractor labor hours required for performance of the services provided. The joint memorandum further instructed that data would be reported using an Enterprise-wide Contractor Manpower Reporting Application (eCMRA) and provided that the eCMRA website would be available to receive data to support the fiscal year 2013 inventory. Additionally, standard language, which was developed in a collaborative effort among AT&L, P&R, and the DOD components, is to be included in new statements of work and modifications to existing contracts.
According to AT&L and P&R officials, DOD expects more than 270,000 contracts or orders to be modified across the department, with most contracts containing the language by fiscal year 2016. The Navy and Air Force began implementing the requirement to collect direct labor hours from contractors by modifying or including the reporting requirement in all their current and future service contracts in October and November 2012, respectively. The Army had previously included this requirement in its contracts. AT&L officials have also been working to develop a new provision to implement the reporting requirements in the Defense Federal Acquisition Regulation Supplement. As part of their efforts, they submitted a case to the Defense Acquisition Regulation Council, but as of April 2013 the case was still pending. Further, the Navy and Air Force have each taken steps to develop their own interim systems to collect and store contractor manpower data based on the Army's CMRA system. According to P&R and AT&L officials, the remaining DOD components will all share an interim CMRA-based system to collect and store their contracted services data. The Army and the Air Force will provide support for this shared component system; however, individual components will retain responsibility for ensuring the accuracy of the contracted services information reported into the CMRA system, which will later be used to compile the inventories. In January 2013, P&R, in collaboration with DOD's Deputy Chief Management Officer, initiated efforts to develop and implement the department-wide eCMRA system that will replace the interim CMRA systems to collect and store information about all contracted services, including contractor-reported labor hours and associated costs.
The working group, composed of officials from the Deputy Chief Management Office—whose role is to act as facilitator for the implementation of the system—and representatives from the military departments, has met several times as of April 2013 to discuss features of the new system. P&R and AT&L officials stated that the department remains on track to meet the time frames outlined in DOD's November 2011 plan and indicated that they anticipate having the data collection system operational by fiscal year 2014. According to working group officials, however, the working group is still working on key decisions related to security, funding, and other technological issues and has not developed an implementation plan with specific time frames or milestones to help ensure DOD remains on track to meet its goals. Based on our discussions with several working group members, there is an unresolved issue about whether DOD components should use one department-wide system as planned or continue using the individual interim CMRA systems that have been developed. Some working group officials stated that using the multiple CMRA systems currently available was sufficient and would allow DOD to report accurate inventory data sooner. Conversely, other working group officials stated that a department-wide system would be less expensive to operate and upgrade and would be less of a burden on contractors because they would only have to interface with one DOD system. Working group officials did not provide a time frame for resolving the issue. Doing so in a timely fashion, as well as developing a plan of action with anticipated time frames and necessary resources, as we have previously recommended, would help facilitate the department's stated intent of collecting contractor manpower data. In December 2011, AT&L and P&R issued guidance for the submission of the fiscal year 2011 inventory of contracted services.
The guidance instructed the military departments and DOD components to use all reporting tools at their disposal to compile their inventories. In addition, it noted that the Director, Defense Procurement and Acquisition Policy, would provide each component that has acquisition authority with a data set from FPDS-NG that should be used to cross-check the information that the components had compiled. The December 2011 guidance noted that most components were not currently collecting direct labor hours from contractors; therefore, it identified five methodologies components could use singly or in combination to estimate or calculate the number of contractor FTEs in their inventories. For example, components could collect direct labor hour information from contractors, or calculate the number of contractor FTEs by using a formula provided by P&R, which was based in part on information extrapolated from the manpower data collected by the Army from its contractors. Thirty-one DOD components submitted inventories for fiscal year 2011, collectively reporting an estimated 710,000 contractor FTEs providing services to DOD with obligations totaling about $145 billion (see table 1). A component's inventory submission may encompass contracts awarded on behalf of another component. For example, contracts for the Defense Acquisition University are reported by the Office of the Director, Administration and Management. In comparison, for fiscal year 2010, DOD reported that 23 components submitted inventories and estimated that about 623,000 contractor FTEs provided services with obligations totaling about $121 billion. DOD officials cautioned against comparing the number of contractor FTEs for fiscal year 2010 and fiscal year 2011 because of differences in the methodologies components used to estimate contractor FTEs, changes in the types of services that were to be included in the inventories, and other factors.
For example, for the fiscal year 2010 inventory, DOD estimated contractor FTEs using one methodology for all components other than the Army, while for the fiscal year 2011 inventory, those components used a variety of methodologies to estimate contractor FTEs. Of the 31 components that submitted a fiscal year 2011 inventory of contracted services, only 2 components reported that they collected direct labor hour information from contractors—the Army, which uses CMRA, and the Defense Test Resource Management Center. Of the remaining components, 18—including the Air Force and Navy, which together represent almost half of the contractor FTEs in the inventory—reported that they used information extrapolated from Army manpower data and FPDS-NG to calculate an estimate of the number of contractor FTEs; 6 components reported that they used a variety of methodologies, including information from independent government estimates and contractor technical proposals; and 5 components did not identify the methodology used to estimate the number of contractor FTEs. As we have previously reported, the FPDS-NG system has several limitations that reduce its utility for compiling a complete and accurate inventory: it cannot identify and record more than one type of service purchased for each contracting action entered into the system; it cannot capture any services performed under contracts that are predominantly for supplies; it cannot specifically identify the requiring activity; and it cannot determine the number of contractor FTEs used to perform each service. Over the years, DOD has made a number of changes to address some of the limitations posed by using FPDS-NG, but not all of the limitations have been fully addressed.
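The two main estimation approaches described above can be illustrated with a short sketch. The 2,080-hour work year and the average-loaded-labor-rate fallback are assumptions used for illustration only; the source does not specify the exact P&R formula or the extrapolation factors it derived from Army CMRA data.

```python
# Assumed annual productive hours per full-time equivalent; a common federal
# convention, not necessarily the figure P&R's formula used.
FTE_HOURS_PER_YEAR = 2080

def ftes_from_labor_hours(direct_labor_hours: float) -> float:
    """Preferred method: convert contractor-reported direct labor hours
    (e.g., from CMRA) into full-time equivalents."""
    return direct_labor_hours / FTE_HOURS_PER_YEAR

def ftes_from_obligations(obligated_dollars: float,
                          avg_loaded_hourly_rate: float) -> float:
    """Fallback estimate when labor hours are unavailable: back into hours
    from contract obligations using an average loaded labor rate (here an
    assumed input; P&R's actual factors were extrapolated from Army data)."""
    estimated_hours = obligated_dollars / avg_loaded_hourly_rate
    return estimated_hours / FTE_HOURS_PER_YEAR
```

The contrast makes clear why DOD officials cautioned against year-over-year comparisons: the fallback estimate is only as good as the assumed labor rate, which can vary widely across functions and components.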
According to AT&L and P&R officials, the Army's CMRA system, as well as the CMRA-based systems now being used by the Air Force and Navy, will help the military departments overcome a number of the FPDS-NG limitations. In addition to the limitations posed by using FPDS-NG as a source for compiling the fiscal year 2011 inventories, DOD experienced challenges with correctly identifying all services that were to be reported in the inventory. According to P&R and AT&L officials, the Air Force identified omissions of about $8 billion during its final review, in which it cross-checked its inventory data by comparing the FPDS-NG data set provided by the Director, Defense Procurement and Acquisition Policy, to its financial management system. AT&L and P&R officials noted that the omissions were primarily for services provided to the Air Force pursuant to contract actions conducted by other DOD components, and services provided to other DOD components pursuant to contract actions conducted by the Air Force. AT&L and P&R officials explained that they decided to report the omissions as "other DOD inputs" to avoid any further delays in submitting DOD's fiscal year 2011 inventory to Congress. According to Navy officials, the Navy did not identify errors, but it did not use other systems to cross-check its inventory data. Army officials told us that the Army reported contracted services for which the Army was the requiring organization, but stated that other components may not have reported contracted services performed on their behalf pursuant to contract actions for which the Army was the procuring agency. Consistent with DOD's December 2011 guidance on the inventory review, most components certified that they conducted the inventory review, but provided only limited information on their review methodologies, the results of their reviews, or their use of the inventory to inform annual program reviews and budget processes.
As of April 2013, 29 of the 31 components certified that they had completed a review of their inventory. AT&L and P&R officials stated that the requirement to submit certification letters represented a significant improvement over prior years' reviews, when DOD could not determine whether the required reviews were conducted, and believed that the letters provided useful insights into the components' processes and methodologies for conducting the reviews. Our analysis indicates, however, that none of the components reported on all six elements required in the guidance. For example, about half of the component letters provided limited or no information on the methodology used to perform the reviews. In addition, components provided limited information on their efforts to ensure appropriate government control when contractors were performing functions closely associated with inherently governmental functions. Further, while the Army and Air Force identified instances where contractors were performing inherently governmental functions and unauthorized personal services, they did not report whether they fully resolved these issues. In December 2011, AT&L and P&R issued guidance to components directing them to review at least 50 percent of their inventories and, to the maximum extent possible, give priority to contracts not previously reviewed or those that may present a higher risk of inappropriate performance.
In addition, heads of components were required to provide a letter to P&R by November 25, 2012, certifying completion of the inventory review and, at a minimum, including a discussion of the following six elements: (1) an explanation of the methodology used to conduct the reviews and the criteria for selecting contracts to review; (2) a delineation of the results in accordance with all applicable title 10 provisions and the December 2011 guidance; (3) the identification of any inherently governmental functions or unauthorized personal services contracts, with a plan of action to either divest or realign such functions to government performance; (4) the identification of contracts under which functions closely associated with inherently governmental functions are being performed and an explanation of the steps taken to ensure appropriate government control and oversight of such functions or, if necessary, a plan to either divest or realign such functions to government performance; (5) the identification of contracted services that are exempt from private sector performance in accordance with DOD Instruction 1100.22, which establishes policies and procedures for determining the appropriate manpower mix; that require special consideration under 10 U.S.C. § 2463; or that are being considered, for cost reasons, for realignment to government performance; and (6) the actions being taken or considered with regard to annual program reviews and budget processes to ensure the appropriate reallocation of resources based on the reviews conducted. According to AT&L and P&R officials, the letters were intended to ensure that the components conducted the required review of their inventories and documented the extent to which contractors were found to be performing certain functions, including inherently governmental functions and functions closely associated with inherently governmental functions, and, to the extent necessary, provided for a plan to realign performance of such functions to government performance.
DOD could also modify the statement of work or the manner of its performance to ensure that the work performed is not inherently governmental, or divest or discontinue the work. In cases where contractors are performing activities that are closely associated with inherently governmental functions, DOD is required to ensure appropriate government control and oversight of such functions. As of April 2013, 29 of the 31 components required to review their inventories had submitted a certification letter, while the Air Force submitted an interim letter based on the review of 30 percent of its contracts that it had completed at that time. The Air Force provided us with updated figures based on its review of about 80 percent of its contract actions, which we incorporated in this report. However, the Air Force has yet to submit a formal letter to P&R certifying the results of its review. Our analysis of the 29 component certification letters found that none discussed all six elements required in the guidance. Further, certification letters varied significantly in terms of the information and insights provided on the methodologies components used to review their inventories, the results of the reviews, and the use of the inventory to inform annual program reviews and budget processes, as illustrated in the following examples. Methodology and Selection Criteria: Sixteen of the 29 components provided information on both the criteria and methodology used to conduct their reviews. These components represent about 38 percent of the total contractor FTEs submitted in the inventory.
However, the level of detail provided in the certification letters varied. For example, the Army, which noted in its certification letter that it reviewed more than 50 percent of its contracted functions, provided a detailed explanation of its selection criteria and review methodology. In its inventory submission, the Army explained that it has a two-pronged approach to reviewing the activities in the inventory. First, it uses a pre-award process that includes detailed checklists to help assess whether the proposed contract includes services that are inherently governmental functions or inappropriate personal services, and to identify services that are closely associated with inherently governmental functions. For example, to identify work that is closely associated with inherently governmental functions, the checklists ask whether the contractor will be providing services related to budget preparation, feasibility studies, and acquisition planning, among others. Second, it uses a post-award review, the Panel for Documentation of Contractors, to review information provided by commands to make certain determinations, such as whether a contractor's performance of functions closely associated with inherently governmental functions has evolved into the performance of inherently governmental functions. The panel also evaluates whether sufficient capacity exists to oversee the contracted workforce. This process allowed the Army to identify over 900 contractor FTEs performing inherently governmental functions and over 44,000 contractor FTEs performing functions closely associated with inherently governmental functions. In contrast, the Department of Defense Education Activity indicated that its review was conducted by comparing data from the inventory with information gathered through its contract writing system database. The component provided no additional information on its methodology.
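The pre-award checklist logic described above can be reduced to a minimal sketch. The indicator categories below are only the examples cited in the report; the real Army checklists are far more extensive, and the function and variable names are hypothetical.

```python
# Hypothetical simplification of the Army's pre-award checklist screening.
# The categories come from the examples the report cites as indicators of
# work closely associated with inherently governmental functions.
CLOSELY_ASSOCIATED_INDICATORS = {
    "budget preparation",
    "feasibility studies",
    "acquisition planning",
}

def flag_closely_associated(proposed_services: set) -> set:
    """Return the proposed services that warrant heightened government
    control and oversight before the contract is awarded."""
    return proposed_services & CLOSELY_ASSOCIATED_INDICATORS
```

A screen like this is only the first prong; the report notes the Army pairs it with a post-award review to catch work that evolves toward inherently governmental performance after award.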
Based on the reported methodologies, we could not determine whether several components took into consideration the way an activity is performed or administered as part of their inventory reviews, which was required by the December 2011 guidance. For example, U.S. Special Operations Command indicated in its certification letter that all the contracts in its inventory were reviewed before award by the Special Operations Command Requirements Evaluation Board. The command did not indicate whether reviews were conducted after contracts were awarded. While the Office of Federal Procurement Policy directs agencies to confirm before award that the services to be procured do not include inherently governmental work, it also directs agencies to review on an ongoing basis the functions performed by contractors to ensure that the work being performed is appropriate. It was unclear based on our analysis of the certification letters, however, whether U.S. Special Operations Command, as well as several other components, took into consideration the way a contract is performed or administered as part of their inventory reviews. Inventory Review Results: All 29 components included a discussion of inherently governmental functions and unauthorized personal services in their letters. However, 4 of the 29 components did not discuss whether contractors were performing functions closely associated with inherently governmental functions, and 20 of the 29 components did not discuss contracted services that are exempt from private sector performance. Therefore, we could not determine if these components considered these types of activities when conducting their inventory reviews or whether no instances were found. Two components—the Army and Air Force—identified contractors performing inherently governmental functions or unauthorized personal services. The other 27 components indicated that they did not have contractors performing any of these activities.
Table 2 summarizes the number of contractor FTEs the Army and Air Force identified. The Army, in its certification letter, noted that it planned to use term or temporary employees and/or military special duty personnel while awaiting insourcing approval of functions at risk of inherently governmental performance or otherwise lacking statutory authority. In January 2013, however, the Secretary of the Army froze civilian hiring, terminated temporary employees, and prohibited extensions of term appointments without a specific exception for mission-critical activities. In subsequent discussions with Army officials, we found that the Army, as of April 2013, had not developed a plan to address all instances in which contractors were performing inherently governmental functions or providing unauthorized personal services. Similarly, in follow-up discussions, Air Force officials told us that they are still discussing resolution of the instances identified with their manpower and personnel communities, as well as the affected major commands. Twelve of the 29 components identified contractors performing functions closely associated with inherently governmental functions (see table 3), 13 components noted that they did not have contractors performing these functions, and 4 did not discuss this element in their certification letters. Since DOD's guidance did not specify how components were to report the number of instances identified, components discussed the instances they found in a variety of ways. For example, the Army and the Air Force were able to provide us with the number of contractor FTEs performing functions closely associated with inherently governmental functions, while the Navy identified the number of contracts and the Defense Logistics Agency identified the percentage of contracts that included this type of activity. As a result, it is difficult to determine how many contractors are performing functions closely associated with inherently governmental functions.
Further, our prior work has found that DOD contracts for significant amounts of professional, administrative, and management support services, a significant portion of which were services that closely supported inherently governmental functions. Based on our prior work, it is not clear that DOD components accurately identified the extent to which their contractors are performing such functions during their inventory reviews. GAO, Defense Acquisitions: Further Actions Needed to Address Weaknesses in DOD's Management of Professional and Management Support Contracts, GAO-10-39 (Washington, D.C.: Nov. 20, 2009). As table 3 shows, the components reported contractors performing functions closely associated with inherently governmental functions in a variety of ways:
- The Army identified 44,541 contractor FTEs performing these functions.
- The Navy did not identify the number of contractor FTEs, but noted that 25 contracts contained these functions.
- One agency did not identify the number of contractor FTEs in current contracts, but noted that it has contracts containing these functions.
- One agency did not identify the number of contractor FTEs, but noted that 4.5 percent of its sample of more than 50 percent of contract actions contained these functions.
- One agency did not identify the number of contractor FTEs, but noted that it had contractors performing these functions.
- Three components did not identify the number of contractor FTEs, but reported that 24 of the 950 contracts consolidated across the three components had contractors performing these functions.
- One agency did not identify the number of contractor FTEs, but noted that several contracts contained these functions.
- Some commands did not identify the number of contractor FTEs, but noted that "some requirements" contained these functions.
The 12 components' certification letters varied in the level of detail provided regarding the form of government control and oversight of contractors performing functions closely associated with inherently governmental functions.
For example, the Defense Logistics Agency noted that it limits contractors' exercise of discretion, assigns sufficient government employees to oversee the work, and identifies contractors and their products to ensure they are not being confused with those of government employees. In contrast, the Defense Advanced Research Projects Agency stated that it awards and administers contracts in compliance with all applicable procedures, but did not provide further detail. Finally, 9 of the 29 components discussed contracted services that are exempt from private sector performance. None of these components reported having services exempt from private sector performance. Annual program reviews and budget processes: Fifteen of the 29 components that submitted review certification letters reported that they had used the information from their inventory reviews for annual program reviews or budget processes. For example, the Defense Contract Management Agency noted that it uses a review board to analyze service contracts on a monthly basis, looking at requirements, follow-on contracts, and the exercise of contract options proposed in the near future. In addition, it is currently assigning priorities and targeting reductions and conversions from contractor to government positions. These changes in priorities or workforce realignments would entail a change in where funds are requested in budget justification materials. In another example, the U.S. Special Operations Command noted that it uses a requirements approval system to evaluate requirements, eliminate redundancies, and identify activities to be insourced. In addition, the Army indicated that its inventory and inventory review were used to inform total workforce management reviews, including planned efforts to implement spending reductions for services that are closely associated with inherently governmental functions, and its fiscal year 2014 budget submission.
None of the components, however, provided details on specific budgetary actions they took. DOD issued revised guidance applicable to the components' fiscal year 2012 inventories in February 2013. DOD components are expected to review 80 percent of their inventories and respond to the same six elements as they were required to in fiscal year 2011, but the components will also be required to provide additional information on the funds and the number of contractor FTEs associated with the following functions: inherently governmental functions, functions closely associated with inherently governmental functions, critical functions, unauthorized personal services lacking statutory authority, authorized personal services, and commercial functions. In addition, components are to provide an explanation of the degree to which the functions are part of overseas contingency operations or are reimbursable functions not currently in the component's budget estimate for contracted services. Further, components are to report on the actions taken with respect to the functions described above, including whether the contract where these functions reside is continuing or was modified, or whether the function was insourced or divested. Since fiscal year 2002, Congress has directed DOD to increase visibility into the purchase of services by the department, in part through the establishment of a data collection system that would allow it to identify each activity being performed by contractors and make informed workforce mix and budgetary decisions. With the exception of the Army, DOD's overall progress to date can be characterized as a series of incremental, ad hoc steps, often taken in response to congressional direction.
Over the past 18 months, DOD has reached internal agreement on a way forward to collect contractor manpower data directly from contractors and has taken certain tangible steps toward this goal, such as requiring components to begin modifying more than 270,000 contracts and task orders, and requiring new contracts to include provisions obligating contractors to report direct labor hours, the types of functions being performed, and other information into interim CMRA systems. Nevertheless, it will be at least another year before DOD may have a department-wide eCMRA system in place to collect inventory data, such as manpower data reported directly by contractors, and 2 more years, at the earliest, before all components may be in compliance with inventory reporting requirements. Further, a number of challenges and unresolved issues require continued management attention. For example, while DOD indicates that it remains on track to have a department-wide data collection system in place in fiscal year 2014, the working group DOD established in January 2013 is still working on key decisions related to security, funding, and other technological issues and has not developed an implementation plan with anticipated time frames and necessary resources to help ensure DOD remains on track to meet its goals, as we recommended in 2011. Similarly, DOD’s December 2011 guidance has helped ensure that most components are reviewing their inventories. DOD also believes that the components’ certifications have provided it better insight into the processes used and the results of the reviews. Our review, however, indicates that the certifications often did not address, or provided only limited information on, the six elements called for by DOD’s December 2011 guidance.
Most significantly, the letters were inconsistent in describing the methodology used to identify and review the inventories, the actions taken or planned by the military services to address instances in which contractors were found to be performing inherently governmental functions or unauthorized personal services, and how these and other components were providing adequate government oversight of contractors performing work closely associated with inherently governmental functions. For example, the Army and Air Force identified instances where contractors were performing inherently governmental functions and unauthorized personal services, but did not report whether they fully resolved these issues. Further, based on our review of the certification letters, the extent to which the differences in the approaches used to conduct the reviews contributed to the wide variation in instances identified of contractors performing work closely associated with inherently governmental functions is unclear. For example, the Army identified over 44,000 contractor FTEs performing work closely associated with inherently governmental functions, while 13 components did not identify any instances of contractors performing these functions. The ability to identify and report instances of contractors performing inherently governmental functions, unauthorized personal services, or work closely associated with inherently governmental functions is one of the key benefits the inventory is to provide to DOD, as it allows DOD to ensure contractors are performing appropriate work and to decide on the appropriate course of action should the reviews find otherwise. That value is significantly reduced, however, if decision-makers have no assurance that corrective action was taken.
DOD’s February 2013 guidance governing the fiscal year 2012 inventory review attempts to improve accountability for the funds allocated to certain high-risk functions and to obtain better insight into the resolution of instances where contractors are performing inherently governmental functions or unauthorized personal services. The results, however, hinge on the extent to which the components comply with the guidance. Based on this year’s results, whether the components will do so is not a foregone conclusion. To ensure that the inventory of contracted services reviews provide greater context and value to DOD leadership, we recommend that the Secretary of Defense direct component heads to take the following two actions: comply with DOD’s February 2013 guidance by ensuring that all required inventory review data elements, including a comprehensive description of their inventory review methodology, are addressed in their certification letters; and provide updated information in certification letters on how they resolved the instances of contractors performing inherently governmental functions or unauthorized personal services identified in prior inventory reviews. DOD provided us with written comments on a draft of this report. DOD concurred with one recommendation and partially concurred with the other. DOD’s written response is reprinted in appendix II. DOD also provided technical comments, which were incorporated as appropriate. DOD concurred with our recommendation that, to provide greater context and value to DOD leadership, DOD should direct component heads to comply with its February 2013 guidance and ensure that all required inventory review data elements, including a comprehensive description of their inventory review methodology, are addressed in their certification letters.
DOD did not believe that it was necessary for the Secretary of Defense to provide additional guidance, but rather indicated that AT&L and P&R, which have lead responsibility for the inventory, will disseminate our report to the components with a reminder that each component must specifically address each item listed in the fiscal year 2012 inventory of contracted services guidance. While we appreciate DOD’s actions to address the recommendation, the fact that none of the components fully addressed each element contained in AT&L and P&R’s previous guidance underscores, in our view, the need for more direct involvement by the Secretary to ensure compliance. DOD partially concurred with our recommendation that component heads provide updated information in certification letters on how they resolved the instances of contractors performing inherently governmental functions or unauthorized personal services in prior inventory reviews. DOD stated that while it agreed with the intent to ensure complete information is provided in certification letters regarding how component heads resolved instances of contractors performing inherently governmental or unauthorized personal services, DOD believes that the focus should be on the current and future reviews of the inventory of contracted services, rather than a correction of prior inventory reviews. To do so, DOD stated that AT&L and P&R will ask each component to include in the fiscal year 2012 certification letters any updated information on how they resolved the instances of contractors performing inherently governmental functions or unauthorized personal services in prior inventory reviews. DOD added that the fiscal year 2013 inventory of contracted services guidance will be updated to include this requirement when it is published in February 2014. 
Subsequently, DOD stated that any instances of contractors performing inherently governmental functions or unauthorized personal services recorded in prior inventory reviews that persist will be included and documented in the fiscal year 2012 and future review processes. DOD said it will verify that the certification letters contain a complete and accurate description of all required data elements, including actions taken to resolve outstanding issues related to contractors performing inherently governmental functions and unauthorized personal services prior to closing the respective review process. We agree that such an approach, if successfully implemented, would meet the intent of our recommendation. We are sending copies of this report to the Secretary of Defense and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. Section 803(c) of the National Defense Authorization Act for Fiscal Year 2010 directs GAO to report for 3 years on the inventory of activities performed pursuant to contracts for services that are to be submitted by the Secretary of Defense for fiscal years 2009, 2010, and 2011, respectively. To satisfy the mandate for 2012, we assessed (1) the progress DOD has made in compiling the inventory of contracted services and the status of efforts to collect contractor manpower data, and (2) the extent to which the defense components complied with DOD’s December 2011 guidance for reporting on the review of the fiscal year 2011 inventories. 
In performing our work we obtained pertinent documents and interviewed cognizant officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L); Office of the Under Secretary of Defense for Personnel and Readiness (P&R); the Office of the Under Secretary of Defense (Comptroller); Office of Defense Procurement and Acquisition Policy; Deputy Chief Management Officer; the departments of the Army, Navy, and Air Force; and two DOD components, the Defense Logistics Agency (DLA) and the Defense Information Systems Agency (DISA). To assess the progress DOD has made in compiling the inventory of contracted services and the status of efforts to collect contractor manpower data, we reviewed the December 2011 guidance issued by AT&L and P&R related to the inventory compilation processes. We analyzed 31 DOD components’ fiscal year 2011 inventory submissions and all memorandums accompanying the inventory submissions, to determine the methodologies and processes used when compiling the fiscal year 2011 inventories and calculating or estimating the number of contractor full-time equivalents (FTE). We focused on the Army, Navy, Air Force, DLA, and DISA because they had among the largest service contract obligations and contractor FTEs in the fiscal year 2011 inventory. We include DOD’s estimate of overall obligations and contractor FTEs for fiscal year 2011 in this report. We did not independently assess the accuracy or reliability of the underlying data supporting the components’ inventories of contracted services. However, our previous work identified data limitations with DOD components using data from the Federal Procurement Data System-Next Generation (FPDS-NG) as the basis for their inventories. We discuss these limitations in the report, as appropriate. In addition, we assessed DOD’s progress in developing a common data system to collect and house contractor manpower data for the entire department.
We reviewed guidance issued by AT&L and P&R on modifying new and existing contracts to require reporting of contractor manpower data, and discussed the implementation by the Air Force, Navy, and DOD components of an interim data system. We also interviewed officials from AT&L, P&R, the Office of the Deputy Chief Management Officer, and the military services to obtain the status of efforts in developing and implementing a department-wide data system to collect and house contractor manpower information. To assess the extent to which DOD components followed DOD’s guidance on the review of their fiscal year 2011 inventory, we analyzed 29 inventory certification letters submitted to P&R as of April 2013. We assessed the letters to determine if components reported on the six elements in DOD’s guidance for the inventory review, including the selection criteria and methodologies used to conduct the inventory reviews, a listing of the results of their compliance with applicable Title 10 provisions, workforce issues identified, whether the workforce issues had been resolved, identification of contracted services that are exempt from private sector performance, and actions being taken or considered with regard to annual program reviews and budget processes. We also followed up with appropriate Army and Air Force officials to determine how they resolved workforce issues identified in their fiscal year 2009 inventory reviews. We did not assess whether the reported data or guidance met legislative requirements for the inventory review. In addition, we did not independently assess the reliability and accuracy of the review certification information. We conducted this performance audit from October 2012 to May 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Cheryl Andrew, Margaret A. Best, Laura Greifner, Katheryn S. Hubbell, Julia Kennon, John Krump, LeAnna Parkey, Guisseli Reyes-Turnell, and Wendy P. Smythe made key contributions to this report.

DOD is the government's largest purchaser of contractor-provided services. In fiscal year 2011, DOD reported $199 billion in obligations for service contracts, which include services as varied as medical services and intelligence support. In 2008, Congress required DOD to compile and review an annual inventory of its contracted services, to include the number of contractors providing services to DOD and the functions these contractors were performing. The 2010 National Defense Authorization Act directed GAO to report for 3 years on these inventories. For this third report, GAO assessed (1) the progress DOD has made in compiling the fiscal year 2011 inventory of contracted services and efforts to collect contractor manpower data, and (2) the extent to which defense components complied with DOD's guidance for reporting on their inventory reviews. GAO reviewed relevant laws and guidance, analyzed inventory submissions from 31 components, reviewed component certification letters, and interviewed DOD acquisition and manpower officials. Over the past year and a half, the Department of Defense (DOD) has taken steps to implement its plan to collect contractor manpower data directly from contractors and to develop and implement a department-wide system, based on the Army's existing system, to collect and store these and other inventory data.
DOD officials estimate that the data system will be available in fiscal year 2014, with DOD components reporting on most of their service contracts by fiscal year 2016. DOD, however, is still working on key decisions related to security, funding, and other technological issues and has not developed a plan of action with anticipated time frames and necessary resources to help ensure DOD remains on track to meet its goals. Making timely decisions and developing a plan of action with anticipated time frames and necessary resources, as GAO has previously recommended, would facilitate DOD's stated intent of implementing a DOD-wide system to collect required inventory information. For the fiscal year 2011 inventory, DOD components generally used the same compilation processes used in the previous year. As such, with the exception of the Army, which already has an inventory data collection system, the remaining components relied primarily on the Federal Procurement Data System-Next Generation (FPDS-NG). GAO previously reported that FPDS-NG has several limitations, including the inability to identify more than one type of service in a contract or the number of contractor full-time equivalents (FTE), which limit its utility for purposes of compiling a complete and accurate inventory. Consistent with DOD's December 2011 guidance, 29 of the 31 components had submitted letters certifying that they had conducted an inventory review as of April 2013. DOD officials stated that the requirement to submit certification letters represented a significant improvement over prior years' reviews, when DOD could not determine whether the required reviews were conducted. These officials also stated that the letters provided useful insights into the components' efforts. GAO's analysis, however, indicates that none of the components' certification letters discussed all six elements required by DOD's guidance.
For example, GAO's analysis found that the letters generally provided only limited information on components' review methodologies or the results of their review efforts. In addition, based on the information provided in the certification letters, it is unclear to what extent the differences in the methodologies components used to conduct the reviews contributed to the variation in the identification of contractors performing inherently governmental functions, unauthorized personal services, or work closely associated with inherently governmental functions. For example, the Army, using its review process, identified over 44,000 contractor FTEs performing work closely associated with inherently governmental functions, while the Air Force identified about 1,400 contractor FTEs and 13 components reported they had no contractors performing these functions. Further, the Army and the Air Force did not provide complete information on actions taken to resolve instances where they had identified contractors performing inherently governmental functions as part of their reviews, such as by transferring performance of these functions to DOD personnel or modifying the contract's statement of work. The ability to identify contractors performing these functions is valuable because it allows corrective actions to be taken, but that value is significantly reduced if decision-makers have no assurance as to whether such actions were taken. GAO recommends that the Secretary of Defense direct component heads to discuss in their certification letters all required inventory review elements, as well as how instances of contractors performing inherently governmental functions were resolved. DOD generally concurred with our recommendations but indicated that the Secretary's involvement was not necessary. GAO believes it is, as discussed in the report.
In 2000, significantly fewer managers at CMS—then known as the Health Care Financing Administration—reported using performance information for various management decisions, as compared to their counterparts in the rest of government. Between our 2000 and 2007 surveys, however, CMS showed one of the largest average increases in the percentage of managers who reported using performance information for certain decisions. This increase placed CMS in about the middle of our agency rankings, which were based on an index of 2007 survey results designed to reflect the extent to which managers at each agency reported using performance information. Our analysis of CMS survey results, management interviews, and agency policies, performance reports, and other relevant documents indicated that the adoption of key management practices contributed to this improvement. Our 2007 survey results showed that significantly more CMS managers agreed that their leadership is committed to achieving results than they did in 2000 (see fig. 2). Nearly all of the CMS officials we interviewed credited the commitment of one or more agency leaders—such as the CMS Administrator or the Chief Operating Officer—for their increased use of performance information to achieve results. One way in which leaders can demonstrate their commitment is through frequent communication of established goals and progress made toward those goals. As an example, in an effort to reduce the incidence of pressure ulcers among nursing home residents, a Region IV manager described to us how regional leadership began to routinely share performance information about the pressure-ulcer problem with the many stakeholders involved with patient care, including hospital and nursing-home personnel, patient advocates, emergency medical technicians, and others.
CMS contracts with states to assess the quality of care provided by Medicare and Medicaid-participating facilities, such as nursing homes, and is therefore several steps removed from the delivery of health-care services to patients and the resulting health outcomes. According to CMS Region IV managers we interviewed, this indirect influence had been considered a limiting factor in CMS’ ability to affect outcomes among nursing-home patients. However, these same managers said that leadership commitment to getting stakeholders to the table and sharing performance information with them were critical factors in bringing about a reduction in the incidence of pressure ulcers. In that region, between fiscal years 2006 and 2008, this improvement translated into nearly 2,500 fewer long-stay nursing-home residents with pressure ulcers. Our survey results also indicated that between 2000 and 2007, a significantly greater percentage of CMS managers reported that they were held accountable for program results (see fig. 3). In 2006, as part of a change throughout HHS, the agency adopted a new performance-management system that links organizational and program goals with individual accountability for program results. Top CMS headquarters officials said that the new system had made individual accountability for program results more explicit. They described how agency goals and objectives were embedded in the Administrator’s performance agreement and cascaded down through the management hierarchy, so that each level of management understood their accountability for achieving the broad department and agency-level goals. To illustrate, broad goals for preventive healthcare cascade from HHS through a CMS director responsible for increasing early detection of breast cancer among Medicare beneficiaries, to a CMS Health Insurance Specialist responsible for communications to raise awareness of the importance of mammograms and other preventive measures. 
Our survey results show that between 2000 and 2007, there was a significant decline in the percentage of CMS managers who reported that difficulty developing meaningful measures was a hindrance to using performance information (see fig. 4). According to CMS officials, to ensure that performance information was useful to managers, they limited the number of measures for GPRA reporting purposes to the 31 that represented the agency’s priorities. One of these officials noted that it would be unmanageable to measure and report on every aspect of their programs and processes. They ultimately settled on a set of performance goals that helped managers and staff identify performance gaps and opportunities to improve performance to close the gaps. Our survey results and interviews with several CMS officials indicate that the agency also took steps to develop its staff’s capacity to use performance information, such as investing in improved data systems and offering increased training opportunities on a range of topics related to performance planning and management. Between 2000 and 2007, there was a significant positive increase in responses to all six survey questions related to managers’ access to training over the past three years on the use of performance information for various activities (see fig. 5). According to one official we spoke with, increasing her staff’s skills in conducting analyses of performance information and presenting findings was a gradual process that required training, coaching, and guidance. Just as the adoption of key management practices can facilitate greater use of performance information and a greater focus on results, the absence of these practices can hinder widespread use. Fewer managers at FEMA and Interior reported making extensive use of performance information for decision making compared to managers at other agencies.
Survey results, interviews with senior-level officials and regional and program managers, and a review of policies and other documents related to performance planning and management at both agencies showed that inconsistent use of these practices contributed to this condition. Our 2007 survey results indicated that, compared to the rest of government, a smaller percentage of FEMA managers agreed that their top leadership demonstrated a strong commitment to using performance information to guide decision making (see fig. 6). Our interviews with officials at FEMA were consistent with these survey results, indicating that management commitment was demonstrated inconsistently across the program directorates and regions we reviewed. Leaders and managers we spoke to throughout the management hierarchy were clearly committed to carrying out FEMA’s mission. The level of commitment to using performance information for decision making, however, appeared to vary among those we interviewed. For example, in the Disaster Assistance Directorate, one headquarters official told us that he does not need performance targets to help him determine whether his directorate is accomplishing its mission, relying instead on verbal communications with the leadership and with FEMA’s regions, joint field offices, and members of Congress to identify issues to be addressed and areas that are running well. Another headquarters official within the Disaster Assistance Directorate’s Public Assistance program said he does not receive formal performance reports from regional program managers, nor are any performance reports required of him by his supervisors; rather, he said that he spoke to the regions on an ad hoc basis as performance problems arose. These officials expressed reluctance toward holding their staff accountable for meeting performance goals due to external factors, such as the unpredictability of disasters beyond their control.
Further, they expressed uncertainty as to how they could use performance information in the face of uncontrollable external factors. As noted below, however, other managers in FEMA have found ways to take unpredictable occurrences into account as they monitor their progress in achieving performance goals. FEMA faces other hurdles, including the lack of a performance-management system requiring managers to align agency goals with individual performance objectives, which makes it challenging for managers to hold individuals accountable for achieving results. The agency also lacks adequate information systems for ensuring that performance information can be easily collected, communicated, and analyzed. For example, in order to gather performance information across directorates, one official reported that it was necessary to write programs to generate specific reports for each of the systems and then manually integrate the information, making it difficult to produce repeatable and verifiable reports. Further, according to several officials we interviewed, there was a limited number of staff with the analytic skills necessary to work with performance metrics. As with FEMA, at Interior we observed that leaders and managers at all levels conveyed a strong commitment to accomplishing the agency’s mission. Interior’s survey results were similar to FEMA’s results on items related to managers’ perceptions of their leadership’s commitment to using performance information. Interior’s 2007 results were also lower than those in the rest of government (see fig. 7). According to officials we interviewed, leaders at Interior and NPS did not effectively communicate to their staff how, if at all, they used performance information to identify performance gaps and develop strategies to better achieve results.
Several NPS managers referred to the performance reporting process as “feeding the beast,” because they receive little or no communication from either Interior or NPS headquarters in response to the information they are required to report, leading them to assume that no one with authority reviews or acts on this information. Furthermore, some bureau-level managers at NPS and Reclamation said the performance measures they are required to report on were not always useful for their decision making, either because there were too many or because they were not credible. We have previously reported that to be useful and meaningful to managers and staff across an agency, performance measures should be limited at each organizational level to the vital few that provide critical insight into the agency’s core mission and operations. However, in the seven years since the inception of the former administration’s Program Assessment Rating Tool (PART) initiative, Interior has expanded its performance reporting to include 440 PART program measures, in addition to the approximately 200 strategic performance measures used to track progress against its strategic and annual plans, as required by GPRA. A senior headquarters official at Interior said that the number of measures makes it difficult for senior leaders and managers to focus on priorities and easily identify performance gaps among the different program areas. At NPS alone, managers were required to report on 122 performance measures related to GPRA and PART. Managers at both NPS and Reclamation also described performance information that lacked credibility because the measures either did not accurately define comparable elements or did not take into account different standards across bureaus or units.
For example, several NPS managers noted that one of the measures on which they report, “percent of historic structures in good condition,” does not differentiate between a large, culturally significant structure such as the Washington Monument and a smaller, less significant structure such as a group of headstones. Consequently, a manager could achieve a higher percentage by concentrating on improving the conditions of numerous less significant properties. Poorly integrated performance and management information systems further hindered NPS and Reclamation managers’ efforts to use performance information to inform their decision making. For example, according to some Reclamation managers we interviewed, there is no one centralized database to which a Reclamation executive can go to find out how the bureau is doing on all of Reclamation’s required performance goals. The lack of linkage among the different Reclamation systems required managers to enter the same data multiple times, which some managers said is a burden. Despite the challenges facing FEMA and Interior, we also observed various initiatives and program areas within the agencies where leaders were committed to increasing the use of performance information and were demonstrating that commitment by communicating the importance of using data to identify and solve problems, involving their managers in efforts to develop useful measures, and connecting individual performance with organizational results. Within FEMA, Mitigation Directorate officials we interviewed reported that they had begun to use performance information to plan for and respond to factors outside of their control, a change that they attributed in large part to the former Mitigation Administrator’s commitment to performance and accountability.
For example, storms and other natural events can disrupt the Mitigation Directorate’s production work related to floodplain map modernization, which is a key step in ensuring that flood-prone communities have the most reliable and current flood data available. To plan for possible disruptions, Mitigation Directorate officials said they reviewed performance information on progress toward map modernization goals on a monthly basis with their external stakeholders, including state and local governments and insurance companies, and with FEMA’s regional management, which sent a clear signal that Mitigation’s leadership was paying attention to outcomes. According to these officials, this review helped them to determine in advance if they were at risk of missing performance targets and to identify corrective actions or contingency plans in order to get back on track toward achieving their goals. Moreover, they said, they were able to meet or exceed their performance target of 93 percent of communities adopting new floodplain maps, in part as a result of their frequent communication and review of performance information. Mitigation Directorate officials said that developing measures and holding staff and contractors accountable for their performance was not an easy transformation. They said that one key to this culture change was for the leadership to strike an appropriate balance between holding managers accountable for agency goals and building trust among managers and staff that performance information would be used as an improvement tool, rather than as a punitive mechanism. Finally, Mitigation Directorate officials said that managers and staff became more supportive of their leadership’s efforts to use performance information in their decision making once they began to see that measuring performance could help them to improve results.
At Interior and NPS, officials were aware that managers continue to struggle with the high volume of performance information they are required to collect, and have initiated various strategies designed to improve the usefulness of performance information without adding to the existing data-collection and reporting process. For example, NPS’ Core Operations Analysis is a park-level funding and staffing planning process, recently adopted by several regions, that is intended to improve the efficiency of park operations and ensure that a park’s resource-allocation decisions are linked to its core mission goals. Regional-level managers who engaged in the Core Operations Analysis said it was useful in establishing goals based on the park’s priorities, monitoring progress toward achieving those goals, and holding park superintendents accountable for meeting established goals. Our report contains recommendations to the Secretary of the Department of Homeland Security (DHS) for FEMA and the Secretary of the Interior, designed to build upon the positive practices we identified within these agencies. We recommended that FEMA augment its analytic capacity to collect and analyze performance information and strengthen linkages among agency, program, and individual performance. We also recommended that Interior, NPS, and Reclamation review the usefulness of their performance measures in conjunction with OMB and refine or discontinue performance measures that are not useful for decision making. Finally, to FEMA, Interior, and NPS, we made recommendations intended to improve the visibility of agency leadership’s commitment to using performance information in decision making. Both DHS and Interior generally agreed with these recommendations. As we have noted in the past, the President and Congress both have unique and critical roles to play in demonstrating their commitment to improving federal agency performance results. 
Both OMB and Congress can send strong messages to agencies that results matter by articulating expectations for individual agency performance and following up to ensure that performance goals are achieved. At the same time, they also need to address performance problems in the areas of government that require the concerted efforts of multiple agencies and programs. Increasingly, many of the outcomes we look for—such as prevention of terrorist attacks, reduction in incidence of infectious diseases, or improved response to natural disasters—go beyond the scope of any single agency. In these cases, agencies must work closely together to achieve desired results. The President can send a signal to federal managers that using performance information is critical for achieving results and maximizing the return on federal funds invested by selecting and focusing his attention on achieving certain critical goals, such as creating or retaining jobs through investments under the American Recovery and Reinvestment Act of 2009. As a first step, OMB has begun to issue guidance to agencies on identifying a limited number of high-priority performance goals, with the explicit message that performance planning is a key element of the President’s agenda to build a high-performing government. With this recent guidance, OMB has also put agencies on notice that the executive-branch leadership is paying attention to their performance, by establishing regular reviews of the progress agencies are making to improve results in these high-priority areas. As the primary focal point for overall management in the federal government, OMB can support agency efforts to use performance information by encouraging agencies to invest in training, identifying and disseminating leading practices among agency managers, and assisting agencies in adopting these practices where appropriate. 
As we previously reported, our survey results showed a positive relationship between managers who reported receiving training and development on setting program performance goals and those who reported using performance information when setting or revising performance goals. However, as we testified in July 2008, while our survey found a significant increase in training since 1997, only about half of our survey respondents in 2007 reported receiving any training that would assist in analyzing and making use of performance information. We previously recommended that OMB ensure that agencies are making adequate investments in training on performance planning and measurement, with a particular emphasis on how to use performance information to improve program performance. Although the agency has not yet implemented this recommendation, an official who oversees OMB’s management initiatives said that OMB has recently launched a collaborative Wiki page for federal agencies. According to this official, the Wiki is intended to provide an on-line forum for federal managers to share lessons learned and leading practices for using performance information to drive decision making. In addition to providing support to help improve agency-level performance, OMB is uniquely positioned to facilitate collaborative, governmentwide performance toward crosscutting goals. As noted above, there are numerous performance challenges, ranging from combating terrorism to preventing the spread of infectious diseases, which transcend organizational lines and require the concerted efforts of multiple agencies and programs. We have previously reported that GPRA could provide OMB, agencies, and Congress with a structured framework for addressing crosscutting program efforts. OMB, for example, could use the provision of GPRA that calls for OMB to develop an annual governmentwide performance plan to integrate expected agency-level performance. 
Such a plan could help the executive branch and Congress address critical federal performance and management issues such as conflicting agency missions, jurisdiction issues, and incompatible procedures, data, and processes. As we pointed out in our July 2008 testimony, this provision has not been implemented fully. In addition to the annual performance plan, a governmentwide strategic plan could identify long-term goals and strategies to address issues that cut across federal agencies. To that end, we have also recommended that Congress consider amending GPRA to require the President to develop a governmentwide strategic plan. Such a plan—supported by a set of key national outcome-based indicators of where the nation stands on a range of economic, environmental, safety and security, social, and cultural issues—could offer a cohesive perspective on the long-term goals of the federal government and provide a much-needed basis for fully integrating, rather than merely coordinating, a wide array of federal activities. By routinely incorporating agency performance issues into its deliberations and oversight, Congress can send an unmistakable message to agencies that they are expected to manage for results. As we have noted in our earlier work, however, Congress needs to be actively involved in early conversations about what to measure and how to present this information. We previously reported that the PART process used by the prior administration did not systematically incorporate a congressional perspective and promote a dialogue between Congress and the President. As a result, most congressional committee staff we spoke to did not use the PART results to inform their deliberations. 
Although the Obama Administration intends to adopt a new performance improvement and analysis framework, any new framework should include a mechanism to consult with members of Congress and their staffs about what they consider to be the most important performance issues and program areas warranting review. Engaging Congress early in the process could help target performance improvement efforts toward those areas most likely to be on the agenda of Congress, thereby increasing the likelihood that they will use performance information in their oversight and deliberations. Additionally, as we noted in our July 2008 testimony, Congress could consider whether a more structured oversight mechanism would be helpful in bringing about a more coordinated congressional perspective on governmentwide performance issues. Just as the executive branch needs to better address programs and challenges that span multiple departments and agencies, Congress might find it useful to develop structures and processes that provide a coordinated approach to overseeing agencies where jurisdiction crosses congressional committees. We have previously suggested that one possible approach could involve developing a congressional performance resolution identifying the key oversight and performance goals that Congress wishes to set for its own committees and for the government as a whole. Such a resolution could be developed by modifying the annual congressional budget resolution, which is already organized by budget function. This may involve collecting the input of authorizing and appropriations committees on priority performance issues for programs under their jurisdiction and working with crosscutting committees such as the Senate Committee on Homeland Security and Governmental Affairs, the House Committee on Oversight and Government Reform, and the House Committee on Rules. 
In conclusion, while federal agencies have become better positioned to manage for results, there is still much to be done to shift the focus of federal managers from merely measuring agency performance to actively managing performance to improve results. Our work indicates that widespread adoption of the key management practices we have identified is a critical first step. At the same time, the President and Congress each have unique and critical roles to play in building a high-performing, results-oriented, and collaborative culture across the government. Beyond this, the creation of a long-term governmentwide strategic plan, informed by a set of key national indicators, and an annual governmentwide performance plan could provide important tools for integrating efforts across agencies to achieve results on the challenging issues that increasingly face our nation in the 21st century. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony were Elizabeth Curda (Assistant Director), Jessica Nierenberg, Laura Miller Craig, Kate Hudson Walker, Karin Fangman, Melanie Papasian, A.J. Stephens, and William Trancucci. 
National Aeronautics and Space Administration
Department of Housing and Urban Development
Department of the Treasury (excluding Internal Revenue Service)
Centers for Medicare & Medicaid Services
United States Agency for International Development
Department of Agriculture (excluding Forest Service)
Department of Homeland Security (excluding Federal Emergency Management Agency)
Department of Transportation (excluding Federal Aviation Administration)
Department of Health and Human Services (excluding Centers for Medicare & Medicaid Services)

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since 1997, periodic GAO surveys indicate that overall, federal managers have more performance information available, but have not made greater use of this information for decision making. To understand the barriers and opportunities for more widespread use, GAO was asked to (1) examine key management practices in an agency in which managers' reported use of performance information has improved; (2) look at agencies with relatively low use of performance information and the factors that contribute to this condition; and (3) review the role the President and Congress can play in promoting a results-oriented and collaborative culture in the federal government. This testimony is primarily based on GAO's report, Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information, which is being released today. 
In this report, GAO made recommendations to the Departments of Homeland Security (DHS) and the Interior for improvements to key management practices to promote greater use of performance information at FEMA, the National Park Service, and the Bureau of Reclamation, as well as at Interior. Both DHS and Interior generally agreed with these recommendations. The testimony also draws from GAO's extensive prior work on the use of performance information and results-oriented management. GAO's prior work identified key management practices that can promote the use of performance information for decision making to improve results, including: demonstrating leadership commitment; aligning agency, program, and individual performance goals; improving the usefulness of performance information; building analytic capacity; and communicating performance information frequently and effectively. The experience of the Centers for Medicare & Medicaid Services (CMS) illustrates how strengthening these practices can help an agency increase its use of performance information. According to GAO's most recent 2007 survey of federal managers, the percentage of CMS managers reporting use of performance information for various management decisions increased by nearly 21 percentage points since 2000--one of the largest improvements among the agencies surveyed. CMS officials attributed this positive change to a number of the key practices, such as the agency's leaders communicating their commitment to using performance information to drive decision making. Conversely, the experiences of the Department of the Interior (Interior) and the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security indicated that the absence of such commitment can discourage managers and their staff from using performance information. 
According to GAO's 2007 survey, Interior and FEMA ranked 27 and 28, respectively, out of 29 agencies in their reported use of performance information for various management functions. Based on further survey data analysis, reviews of planning, policy, and performance documents, and management interviews, GAO found that inconsistent application of key practices at FEMA and Interior--such as routine communication of how performance information influences decision making--contributed to their relatively low survey scores. While both FEMA and Interior have taken some promising steps to make their performance information both useful and used, these initiatives have thus far been limited. The President and Congress also have unique and critical roles to play by driving improved federal agency performance. By focusing attention on certain high-level goals and tracking agency performance, the President and the Office of Management and Budget (OMB) can send a message that using performance information is critical for achieving results and maximizing the return on federal funds invested. Through its oversight, Congress can also signal to agencies that results matter by articulating performance expectations for areas of concern and following up to ensure that performance goals are achieved. The President and Congress can also play a role in improving government performance in areas that require the concerted efforts of multiple agencies and programs to address, such as preparing for and responding to a pandemic influenza. A governmentwide strategic plan could support collaborative efforts by identifying long-term goals and the strategies needed to address crosscutting issues. |
DOE’s LGP was originally designed to address a fundamental impediment for investors and lenders that stems from the risks of innovative and advanced energy projects, including technology risk—the risk that the new technology will not perform as expected—and execution risk—the risk that the borrower or project will not perform as expected. Companies can face obstacles in securing enough affordable financing from lenders to survive the gap between developing innovative technologies and commercializing them. Because the risks that lenders must assume to support new technologies can put private financing out of reach, companies may not be able to commercialize innovative technologies without the federal government’s financial support. The LGP was established in Title XVII of the Energy Policy Act of 2005 to encourage early commercial use of new or significantly improved technologies in energy projects. The act—specifically section 1703— originally authorized DOE to guarantee loans for energy projects that (1) use new or significantly improved technologies as compared with commercial technologies already in service in the United States and (2) avoid, reduce, or sequester emissions of air pollutants or man-made greenhouse gases. In February 2009, Congress expanded the scope of the LGP in the American Recovery and Reinvestment Act (Recovery Act) by adding section 1705 to the Energy Policy Act, which extended the program and provided funding to include projects that use commercial energy technology that employs renewable energy systems, electric power transmission systems, or leading-edge biofuels that meet certain criteria. As of March 2014, DOE had made 31 loan guarantees for approximately $15.7 billion under section 1705, which expired on September 30, 2011, and 2 loan guarantees for approximately $6.2 billion under section 1703. 
These guarantees have been for biomass, geothermal, nuclear, solar, and wind generation; energy storage; solar manufacturing; and electricity transmission projects (see app. III). Two borrowers withdrew in 2012 before starting to draw funds from their loans. Additionally, in September and October 2013, DOE deobligated 2 loan guarantees because the projects did not seem likely to meet the loan conditions required to begin drawing on their loans. Three other loan guarantee borrowers have defaulted and filed for bankruptcy—one borrower and its loan guarantee have been restructured, and the guarantee remains active; the other two borrowers are in liquidation proceedings. In addition, DOE has conditional commitments, issued in 2010, for approximately $3.8 billion in section 1703 loan guarantees for two nuclear projects. In December 2013, DOE announced a new solicitation for applications for up to $8 billion in loan guarantees for advanced fossil energy projects. The ATVM loan program was established in December 2007 by the Energy Independence and Security Act (EISA), and the fiscal year 2009 Continuing Resolution appropriated funding for the program. DOE’s five loans for $8.4 billion under this program went to both established automakers and start-up manufacturers. These loans are for the manufacture of fuel-saving enhancements of conventional vehicle technology, plug-in hybrids, and all-electric vehicles. In May 2013, one borrower paid back its loan. Two ATVM borrowers have defaulted on their loans. In 2013, DOE sold the defaulted loan notes in auction proceedings. In 2010, DOE consolidated the previously separate LGP and ATVM programs under the Loan Programs Office. Monitoring for both LGP and ATVM is conducted out of one division: the Portfolio Management Division, with support coming from several other divisions throughout the Loan Programs Office. DOE has not fully developed or consistently adhered to loan monitoring policies for its loan programs. 
In particular, DOE has established policies for most loan monitoring activities, but policies for some of these activities remain incomplete or outdated. Further, in some cases we examined, DOE generally adhered to its loan monitoring policies but, in others, DOE adhered to those policies inconsistently or not at all because the Loan Programs Office was still developing its staffing, management and reporting software, and policies. DOE has established policies for most loan monitoring activities, but policies for some of these activities remain incomplete or outdated. More specifically, DOE has established policies for loan monitoring activities including disbursing funds, monitoring and reporting on credit risk, and managing troubled loans. (For more details about DOE loan monitoring policies and activities, see app. II.) However, loan monitoring policies for evaluating and mitigating program-wide risk remain incomplete or outdated, and several dates DOE set for completing or updating these policies passed during the course of our work. Evaluating and mitigating program-wide risk is generally the responsibility of the Risk Management Division within DOE’s Loan Programs Office. This division was established in February 2012 and has been operating since its inception under incomplete or outdated policies. For example, the policies do not address how the new Risk Management Division fits into the existing organizational structure and thus provide no clear guidance on the division’s organizational roles. DOE officials told us that policy revisions were delayed in part because the Loan Programs Office did not have a Director of Risk Management until November 2012 and that a planned revision was put on hold to await the arrival of a new Executive Director in May 2013. Additionally, 11 of the Risk Management Division’s 16 planned positions remained unstaffed until late 2013, when the division filled 6 of the 11 vacancies. 
As highlighted by an independent White House review of DOE’s loan programs, as well as our discussions with private lenders, a risk management division is essential for mitigating risk. Similarly, Office of Management and Budget (OMB) guidance specifies that credit programs should have robust management and oversight frameworks for monitoring the programs’ progress toward achieving policy goals within acceptable risk thresholds, and taking action where appropriate to increase efficiency and effectiveness. It is difficult to determine whether DOE is adequately managing risk if policies against which to compare its actions are outdated or incomplete. Also, without fully staffing key monitoring positions, the Risk Management Division is limited in its ability to revise and complete policies, as well as perform its other monitoring responsibilities. In some cases we examined, DOE generally adhered to its loan monitoring policies but, in other cases, DOE adhered to its monitoring policies inconsistently or not at all because DOE was still developing the Loan Programs Office’s organizational structure, including staffing, management and reporting software, and implementing procedures for policies. As a consequence, DOE was making loans and disbursing funds from 2009 through 2013 without a fully developed loan monitoring function. DOE generally adhered to its monitoring policies for activities such as disbursing funds and reviewing borrower requests for changes to loan agreement provisions. For example, for the 10 loans in our sample, we found that, in disbursing funds, DOE generally documented its analysis of the financial health of the project and recorded supervisory approvals, as required in its policy. Similarly, we found that in nearly all of the 30 requests for amendments and waivers to loan agreements for the 10 loans in our sample, DOE officials properly recorded their review of the requested changes. 
In some other cases, DOE inconsistently adhered to its monitoring policies. For example, in regard to monitoring and reporting on credit risk, DOE was inconsistent in its preparation of credit reports, which, according to DOE’s policy manuals, provide early warning signs of potential credit problems and can guide project and loan restructuring efforts should the need arise. In total, DOE was missing 24 of 88 periodic credit reports due through May 2013 across the 10 sample loans. Twenty of the missing reports were not completed because DOE did not begin producing periodic credit reports until August 2011. DOE officials told us that such reports were not produced before then because the Portfolio Management Division had not filled the staff positions needed for producing these reports, and its management and reporting software was under development. As a result, DOE disbursed more than $4.7 billion for the 10 loans in our sample before it began producing periodic credit reports as required in its policy manuals. According to DOE officials, although DOE was not producing credit reports during this time, DOE staff were taking other measures to monitor the loans, such as keeping in regular contact with the borrowers. In addition, after DOE began producing credit reports, DOE officials inconsistently recorded credit risk ratings on multiple credit reports. For example, of the 64 reports we reviewed as part of our sample, 11 had one or more credit risk rating fields left blank, and other credit rating fields contained errors. According to DOE officials, the reasons for the blank and incorrect fields included human error and a system design error in its management and reporting software. Further, the interval between when credit reports were completed and when they were reviewed varied widely; for reports completed on a quarterly basis, it ranged from as little as a week to over 3 months. 
DOE officials told us that the reporting and review period inconsistencies were a result of inadequate staffing and incomplete implementing procedures that did not provide clear guidance on reporting dates. Also, some reports were submitted and approved outside of DOE’s management and reporting software, for which the system design was still being worked out, and training was being provided. DOE’s policy manuals specify that one purpose of these credit reports is to serve as an information source for inquiries by government oversight authorities seeking to understand the loans’ structures and decisions. Incomplete or inconsistent credit reporting can make it difficult for these authorities to understand and assess the status of the loans and determine if corrective actions are needed. According to DOE officials, as of February 2014, its staffing levels and its management and reporting software were sufficient to support full and timely credit reporting. After we found inconsistencies in DOE’s credit reports, DOE established a draft implementing procedure in June 2013 to guide the development of future credit reports that clarified reporting dates and preparation periods for new and existing staff. Furthermore, DOE officials stated, in January 2014, that the department has taken steps to address human error in the credit risk rating fields by requiring that the fields representing previous credit ratings be populated automatically. DOE officials also stated that they are addressing the system design issue in the next generation of its management and reporting software, planned for release in late fall 2014. In another example, DOE inconsistently adhered to policies for managing troubled loans. DOE’s policy manuals require that DOE prepare and approve plans for handling troubled loans to borrowers who are in danger of defaulting on their loan repayments. 
Once it becomes clear that a loan is in danger of default, DOE policy calls for the preparation, approval, and implementation of a workout action plan, which identifies potential problems and lays out decisive remedial actions to help minimize potential losses. However, for two troubled loans in our sample, DOE officials told us they had not prepared a formal workout action plan in a single document but instead specified problems and remedial actions in many documents over a period of time. For example, in one case, in 2011, where DOE officials were aware for at least 10 months that the borrower would likely default on its payments, DOE provided us with about 20 such documents, including analyses of collateral, draft memoranda, and slideshow presentations, which showed that DOE had taken or considered some of the options described by its policy. However, these documents did not conform to DOE’s policy for these plans, particularly its policy that DOE prepare a workout plan document and seek formal approval from its management. DOE officials told us that “operational matters had evolved beyond the steps outlined in their policy manuals.” DOE officials noted that they were revising the manuals to better comport with best practices in the finance industry and that DOE has been operating under draft implementing procedures since June 2012. These officials also noted that DOE’s 2009 and 2011 policy manuals were inadequate and were completed without the benefit of experts in the field of workout plans due to limited staffing in the Portfolio Management Division at the time. Officials noted that managers with such expertise are now on staff in the division, but the branch within the division that is tasked with managing troubled loans, including the development and implementation of workout action plans, had not staffed four of five positions as of February 2014. 
DOE officials told us that, given the availability of third-party financial advisors and the limited number of assets that fall within that category, they may not need to fill all of the positions. However, inconsistent adherence to policies and incomplete staffing limit DOE’s assurance that it has been effectively managing troubled loans during a period when there have been five defaults and bankruptcies among DOE loan program borrowers or that it can effectively manage such loans in the future. Further, DOE did not adhere to some existing policies for evaluating and mitigating portfolio-wide risk, in particular policies for evaluating the effectiveness of its loan monitoring. DOE’s 2011 policy manual states that certain functions are critical to management’s ability to assess the adequacy and quality of the agency’s monitoring. The manual further states that failure to maintain these functions is an unsound practice that could expose DOE to loss or criticism. These functions—which are referred to as credit review, compliance, and reporting functions—include internal assessment of documentation, portfolio-wide reporting on risks, and evaluation of the effectiveness of DOE’s loan monitoring. The Loan Programs Office’s Portfolio Management Division has conducted some internal assessments of the quality of DOE documentation and, in May 2013, started some portfolio-wide reporting on the overall risk posed by DOE’s loan obligations. However, DOE officials told us that the division has not evaluated the effectiveness of the agency’s loan monitoring efforts or produced the required reports. DOE officials told us that these responsibilities have been transferred to the Risk Management Division, which, as noted earlier, was operating under incomplete or outdated policies and had staff vacancies. 
Without conducting these evaluations, DOE management cannot assess the adequacy of its monitoring efforts and thus be reasonably assured that it is effectively managing risks associated with its loan programs. DOE’s loan programs began making loans and guarantees in 2009, and by March 2014 DOE had made or guaranteed over $30 billion in loans that required monitoring. In its policy manuals, DOE recognizes the importance of monitoring loans and guarantees to proactively manage their risks and protect the financial interests of the federal government and the taxpayer. OMB guidance specifies that credit programs should have robust management and oversight frameworks. However, DOE has been monitoring its loans since 2009 without the benefit of a fully developed organizational structure because staffing, management and reporting software, and monitoring policies and procedures are still works in progress. The absence of a fully developed organizational structure has resulted in inconsistent adherence to policies during a period of significant program events, including loan disbursements, borrower bankruptcies, and loan repayments involving billions of dollars. Because DOE inconsistently adhered to the policies it had in place, DOE’s assurance that it was completing activities critical to monitoring the loans has been limited. DOE has made progress since 2011 in developing its monitoring functions, but it has repeatedly missed internal deadlines for completing its loan monitoring policies and procedures. In the meantime, DOE has recently announced a new solicitation for up to $8 billion in loan guarantees for advanced fossil energy projects and issued two new loan guarantees for nuclear generation, adding $6.2 billion in loans to be overseen. In addition to a fully developed loan monitoring organization, evaluating the effectiveness of ongoing monitoring efforts is important to ensuring risks are being adequately managed in DOE’s loan programs. 
However, since the first loans were made, DOE has not conducted evaluations of its loan monitoring by performing the credit review, compliance, and reporting functions outlined in its 2011 policy manual. Such evaluations might have detected and addressed the inconsistent adherence to policies that we identified. As DOE's manual states, a failure to maintain a reliable and effective evaluation function is unsound and could expose DOE to loss or criticism. Given the high profile and large sums of money involved in DOE's loan programs—more than $30 billion in loans and guarantees already made and approximately $45 billion in remaining loan and loan guarantee authority—this exposure is significant. To provide greater assurance that DOE is effectively monitoring its loans, we recommend that the Secretary of Energy direct the Executive Director of the Loan Programs Office to take the following four actions: fully develop the office's organizational structure by (1) staffing key monitoring positions, (2) updating management and reporting software, and (3) completing policies and procedures for loan monitoring and risk management; and (4) evaluate the effectiveness of DOE's monitoring by performing the credit review, compliance, and reporting functions outlined in the 2011 policy manual for DOE's loan programs. We provided a draft of this report to DOE for review and comment. In its written comments, DOE generally agreed with our recommendations. DOE also said it disagreed with some statements in the draft report. It was difficult, however, for us to determine which statements DOE disagreed with because the comments focused on highlighting DOE's monitoring efforts in four areas rather than specifying areas of disagreement. DOE's written comments and our detailed responses can be found in appendix V of this report. DOE also provided technical comments that we incorporated, as appropriate. DOE noted several actions it is undertaking in response to our recommendations.
Regarding its organizational structure, DOE stated it would continue to recruit and hire qualified managers and staff for its Portfolio Management and Risk Management Divisions; implement a second generation of its software by the end of the first quarter of 2015, as well as a new information and reporting system by the end of the third quarter of 2014; and continue to prepare and issue portfolio monitoring and risk management procedures and guidelines. However, DOE did not provide information on any plans for updating and completing its overall policy manual for the programs. We believe this action is needed because it would provide guidance on the organizational roles of the new Risk Management Division and address inconsistencies we found between the current manual and current DOE practices, such as those for troubled loans. Regarding evaluation of the effectiveness of DOE’s monitoring, DOE described several efforts for reviewing and monitoring the Loan Programs Office’s portfolio. However, DOE did not indicate that it plans to prepare the required reports to evaluate the effectiveness of its loan monitoring. We are sending copies of this report to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix details the methods we used to examine the Department of Energy’s (DOE) Loan Programs Office. 
The 2007 Revised Continuing Appropriations Resolution mandates that GAO review DOE's execution of the Loan Guarantee Program (LGP) and report its findings to the House and Senate Committees on Appropriations. Because DOE is administering the LGP and Advanced Technology Vehicles Manufacturing (ATVM) loan program through one Loan Programs Office, we included both programs in this review. For this report, we assessed the extent to which DOE has developed and adhered to loan monitoring policies for its loan programs. To understand those policies, we reviewed DOE's policy manuals, including U.S. Department of Energy Loan Programs Office, Credit Policies and Procedures, Title XVII of the Energy Policy Act of 2005 (Washington, D.C.: Oct. 6, 2011). As of January 2014, DOE was drafting a new policy manual to reflect current practices and supersede the separate manuals for the two programs. In addition, we conducted semistructured interviews with DOE staff to ensure that our understanding of these policies and procedures was complete and accurate. To assess the extent to which DOE adhered to its monitoring policies, we acquired and analyzed documentation from a nonprobability sample of 10 of the 36 loans and loan guarantees that had been made by March 2013 and therefore required monitoring. The use of a nonprobability sample means that we are unable to generalize our findings to the loans and loan guarantees not in our sample, but we are able to make observations about DOE's monitoring activities for the diverse set of 10 loans and guarantees. The loans and loan guarantees were chosen to cover projects involving a range of technologies, construction statuses, credit watch list statuses, loan or guarantee amounts, dates of loan finalization, and amounts disbursed. We examined relevant project files, including disbursement records, plans for troubled loans, and credit reports. We requested all disbursement records and plans for managing troubled loans for the 10 sample loans.
DOE began producing credit reports in August 2011, so we examined all credit reports for the 10 sample loans that were produced between August 2011 and May 2013, when we completed our data collection. We compared these files with selected DOE policies to determine where the guidance was followed and where it was not, as well as the level of consistency in monitoring across projects. We did not review all DOE policies; rather, we reviewed only those most directly associated with the 10 activities identified in our summary. In some cases, our review of documentation was limited by the fact that DOE's detailed procedures remained under development. In addition, to provide context, we compared DOE's monitoring policies with those of private lenders. We conducted semistructured interviews with eight experts (four private lenders, three academic experts, and one industry expert) about private lender monitoring policies and compared the information they provided with DOE's policies and the 10 activities we identified. We selected a nonprobability sample of four private lenders financing projects similar to those in the LGP and ATVM loan program, using additional criteria such as the value of loans issued and number of loans issued. The use of a nonprobability sample in this case means that we are unable to generalize the information they provided to the private lenders not in our sample, but we are able to make observations about how DOE's policies compare with those lenders' descriptions of their own and industry-wide practices. Our primary source in identifying lenders was a search of the lenders most active in financing large innovative energy and advanced vehicle projects in the Bloomberg New Energy Finance database. We discussed general policies and practices with these private lenders because they were unable to share specific written policies and procedures.
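The sample-selection approach described above—choosing cases to cover a range of attributes rather than drawing them at random—can be sketched as a simple greedy procedure. The following is an illustrative sketch only, not GAO's actual method; the loan records, attribute names, and selection rule are all hypothetical.

```python
def pick_diverse_sample(loans, attributes, k):
    """Greedily pick k loans, preferring ones that add attribute values
    not yet represented in the sample (a purposive, nonprobability draw)."""
    seen = {attr: set() for attr in attributes}
    sample = []
    remaining = list(loans)
    while remaining and len(sample) < k:
        # Score each candidate by how many new attribute values it contributes;
        # ties go to the earliest candidate in list order.
        best = max(remaining,
                   key=lambda ln: sum(ln[a] not in seen[a] for a in attributes))
        sample.append(best)
        remaining.remove(best)
        for a in attributes:
            seen[a].add(best[a])
    return sample

# Hypothetical loan records for illustration.
loans = [
    {"name": "Loan A", "technology": "solar", "status": "construction"},
    {"name": "Loan B", "technology": "solar", "status": "operating"},
    {"name": "Loan C", "technology": "geothermal", "status": "construction"},
    {"name": "Loan D", "technology": "storage", "status": "defaulted"},
]
sample = pick_diverse_sample(loans, ["technology", "status"], k=3)
print([ln["name"] for ln in sample])  # → ['Loan A', 'Loan D', 'Loan B']
```

A purposive selection like this maximizes the variety of attributes covered, which supports observations about a diverse set of cases but, as the report notes, does not permit generalizing findings to the loans left out of the sample.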
We identified academic experts through a literature search of relevant academic articles on project finance loan monitoring practices and the financing of innovative energy technologies, and we then contacted the most frequently cited academics. In addition, we reviewed a study conducted by KPMG on behalf of the Export-Import Bank of the United States, which compiled leading monitoring practices from institutions across a number of sectors, including 10 private lenders, as well as several export credit and government agencies. We then interviewed the study’s author. We conducted this performance audit from March 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In order for us to compare the Department of Energy’s (DOE) monitoring efforts to those of private lenders, we summarized and categorized DOE’s monitoring into 10 activities based on DOE policy and discussions with DOE officials (see table 1). Taken together, the 10 activities cover the full duration of the loan, from the time loans have been made, through disbursement of funds, construction of the project, and the operation of the project, until final repayment, which can be up to 3 decades in the future. Some of the monitoring activities we identified are to be done for every loan. For example, prior to disbursing loan payments, DOE policies require that staff perform several steps to review the financial health of the project and the borrower, such as checking borrower documentation, performing technical reviews of the health of the project, and obtaining supervisory approval prior to processing payment. 
During construction, DOE policy directs loan monitoring and technical staff to review project financial and technical documents to ensure that the project is progressing toward its construction goals. DOE policy for another activity we identified directs DOE staff to monitor and report on the financial health of the project and prepare reports about the project’s financial information, among other things. Other monitoring activities are applied only if the internal or external risk to the financed project increases. Two of these 10 activities address this possibility: (1) assessing potential actions for loans with increasing risks and (2) managing troubled loans. As part of managing troubled loans, DOE officials are to determine whether it is preferable to make changes in a troubled loan’s structure, such as restructuring the project in a way that might reduce risk, or to seek an outside entity to purchase the loan and take on the risk, among other options. To develop this summary of DOE’s monitoring activities, we examined high-level DOE policy guidance—specifically DOE’s October 2011 policy manual for its Loan Guarantee Program, its 2009 policy manual for its Advanced Technology Vehicles Manufacturing (ATVM) loan program, DOE’s approved implementing procedure documents, and DOE loan monitoring documentation. In addition, we reviewed a draft policy manual intended to unify guidance for both loan programs (more detailed information about how we summarized these activities is available in app. I). Table 1 summarizes the 10 monitoring activities we identified that are described by DOE Loan Programs Office policies and procedures. 
The 10 loans and loan guarantees in our sample were: Arizona Solar One, LLC (aka Abengoa Solar, Inc.; Solana); Genesis Solar, LLC; Granite Reliable Power, LLC; Great Basin Transmission South, LLC (aka SWIP/On Line); High Plains Ranch II, LLC (aka SunPower Corp CA Valley Solar Ranch); Mojave Solar LLC (aka Abengoa Solar Mojave); NGP Blue Mountain I, LLC; OFC 2, LLC (aka Ormat); Stephentown Regulation Services, LLC (aka Beacon Power Corporation); and Tonopah Solar Energy (aka Solar Reserve, LLC). The sampled projects covered a range of technologies, including solar generation and geothermal, and two of the sample loans were listed as defaulted/auctioned (in one case the borrower subsequently went bankrupt; in the other, the loan was subsequently restructured by the purchaser). In order to provide context and better understand the Department of Energy's (DOE) loan monitoring, we compared DOE policies with those of private lenders that finance large energy projects. We conducted semistructured interviews with eight experts about private lenders' monitoring policies and compared the results with DOE's policies and the 10 activities we identified. For more information on our methodology, see appendix I. For the activities in which DOE has established policies, those policies generally align with those of private lenders. More specifically, our discussions with experts and our review of DOE's policies indicate that DOE's general monitoring activities, frequency of monitoring, actions taken when risk appears to be increasing, and organizational structures were all generally similar to those of private lenders. For example, both DOE policies and private lenders described various monitoring activities to oversee borrowers, including periodic reviews of borrower information, independent engineering reviews of projects' progress and expenditures, site visits to projects to monitor construction, and the tracking of borrowers' compliance with loan agreements. In one instance, DOE's monitoring policies appear more rigorous than many private lenders'.
Specifically, both DOE’s and private lenders’ policies call for independent engineers for technical expertise and oversight, but DOE also has engineers on staff who oversee the independent engineers and advise DOE’s loan portfolio managers. While both DOE and private lenders were similar in having separate risk management functions, we could not directly compare DOE’s reporting relationship for its Risk Management Division with that of the private sector because of the incomplete policies and evolving nature of the Risk Management Division, as well as differences between a private lender and a government agency—such as the structure of overall management and the greater diversity of missions assigned to DOE than to a bank. The following are GAO’s comments on the letter from the Department of Energy dated April 18, 2014. 1. DOE states that its disbursement monitoring is fully developed and that DOE has not made disbursements without confirming that all required conditions have been satisfied and obtaining all necessary internal approvals. We found that DOE generally adhered to its policies related to disbursements, documenting analyses, and recording approvals, as we report on page 10. However, as we point out in the report, other aspects of DOE’s loan monitoring were still under development while DOE was disbursing funds. 2. DOE states that its 2011 policy manual requires the agency to prepare loan review reports for each project, at least annually, and that the agency has prepared quarterly, semiannual, and annual reports for all projects since the 2011 manual was issued. The manual in effect prior to 2011 also required periodic credit reports, so we looked for all reports prepared for the 10 loans in our sample. As we report, DOE did not start preparing any periodic credit reports until 2011, and 24 of 88 required periodic credit reports were missing for the 10 loans. Of these missing reports, 4 were due after the 2011 policy manual was issued. 
DOE also states that credit reports are not the only tool in determining borrower health and capacity to repay the DOE-supported loan. We agree that credit reports are not the only tool, but DOE's policy manual indicates their importance. Specifically, it states that reports will enhance the credit monitoring process by providing (1) an early warning signal of potential credit issues, (2) a basis for potential loan restructuring, (3) a basis for reassessing credit risk, and (4) an information source for inquiries. 3. DOE states that it prepared comprehensive analyses and presentations that served as workout plans for the two loans mentioned by GAO and that consolidation of these analyses and presentations into a single document would not have changed the overall effectiveness of the workout plans. We did not evaluate whether this approach was more or less effective than the stated approach in DOE's policies, which call for a formal workout action plan that must be approved by management. As we note in our report, DOE did not follow its policies in that area. We also note DOE's statements that it is revising the policy manual to better comport with best practices in this field and that it has been operating under draft implementing procedures since June 2012. The mismatch between DOE's written policies and its actual practices highlights the importance of our recommendation for DOE to complete its policies and procedures. 4. DOE describes several actions it has taken and has under way for reviewing and monitoring the Loan Programs Office's portfolio. We note in the report that DOE has conducted some internal assessments and begun portfolio-wide reporting on risks, but that DOE was not adhering to its policies for evaluating the effectiveness of its loan monitoring.
Specifically, DOE’s 2011 policy manual requires “a formal Credit Review and Compliance report” to be issued quarterly, but DOE officials told us that none had been produced, again underscoring the importance of our recommendation in this area. In addition to the individual named above, Karla Springer, Assistant Director; Marcia Carlsen; Lee Carroll; Cindy Gilbert; Ryan Gottschall; Armetha Liles; Eric Miller; Cynthia Norris; Madhav Panwar; Lindsay Read; Barbara Timmerman; Jarrod West; and Steve Westley made key contributions to this report. Federal Support for Renewable and Advanced Energy Technologies. GAO-13-514T. Washington, D.C.: April 16, 2013. Department of Energy: Status of Loan Programs. GAO-13-331R. Washington, D.C.: March 15, 2013. DOE Loan Guarantees: Further Actions Are Needed to Improve Tracking and Review of Applications. GAO-12-157. Washington, D.C.: March 12, 2012. Department of Energy: Advanced Technology Vehicle Loan Program Implementation Is Under Way, but Enhanced Technical Oversight and Performance Measures Are Needed. GAO-11-145. Washington, D.C.: February 28, 2011. Department of Energy: Further Actions Are Needed to Improve DOE’s Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 12, 2010. Department of Energy: New Loan Guarantee Program Should Complete Activities Necessary for Effective and Accountable Program Management. GAO-08-750. Washington, D.C.: July 7, 2008. Department of Energy: Observations on Actions to Implement the New Loan Guarantee Program for Innovative Technologies. GAO-07-798T. Washington, D.C.: April 24, 2007. The Department of Energy: Key Steps Needed to Help Ensure the Success of the New Loan Guarantee Program for Innovative Technologies by Better Managing Its Financial Risk. GAO-07-339R. Washington, D.C.: February 28, 2007. 
DOE's Loan Programs Office administers the Loan Guarantee Program (LGP) for certain renewable or innovative energy projects and the Advanced Technology Vehicles Manufacturing (ATVM) loan program for projects to produce more fuel-efficient vehicles and components. As of March 2014, the programs had made more than $30 billion in loans and guarantees: approximately $21.9 billion for 33 loan guarantees under the LGP and $8.4 billion for 5 loans under the ATVM loan program. Both programs can expose the government and taxpayers to substantial financial risks should borrowers default. GAO assessed the extent to which DOE has developed and adhered to loan monitoring policies for its loan programs for 2009 to 2013. GAO analyzed relevant regulations and guidance; prior audits; DOE policies; and DOE data, documents, and monitoring reports for a nonprobability sample of 10 loans and guarantees. Findings from the sample are not generalizable, but the sample covered a range of technologies and loan statuses. GAO also interviewed DOE officials. The Department of Energy (DOE) has not fully developed or consistently adhered to loan monitoring policies for its loan programs. In particular, DOE has established policies for most loan monitoring activities, but policies for evaluating and mitigating program-wide risk remain incomplete and outdated. These activities are generally the responsibility of the Risk Management Division in DOE's Loan Programs Office. This division, established in February 2012, has been operating since its inception under incomplete or outdated policies. DOE has missed several internal deadlines for updating its loan monitoring policies. DOE officials told GAO that updated policies were delayed in part because the Loan Programs Office did not have a Director of Risk Management until November 2012. Additionally, the Risk Management Division left 11 of its 16 planned positions unstaffed until late 2013, when it filled 6 of the 11 vacancies.
Under federal guidance, credit programs should have robust management and oversight frameworks for monitoring the programs' progress toward achieving policy goals within acceptable risk thresholds and for taking action where appropriate to increase efficiency and effectiveness. It is difficult to determine whether DOE is adequately managing risk if policies are outdated or incomplete and key monitoring positions are not fully staffed. In some cases GAO examined, DOE generally adhered to the loan monitoring policies that it had in place. For example, DOE generally adhered to its policies for authorizing disbursement of funds to borrowers. But in other cases, DOE adhered to the policies inconsistently or not at all because the Loan Programs Office had staff vacancies and was still developing management and reporting software and procedures for implementing policies. For example: DOE inconsistently adhered to its policies for monitoring and reporting on credit risk, particularly for preparing credit reports—periodic reviews of project progress and factors that may affect the borrower's ability to meet the terms of the loan. DOE did not prepare dozens of credit reports, mostly in 2011, because, according to officials, it had not filled positions or fully developed the software needed for producing these reports. DOE inconsistently adhered to its policies for managing troubled loans, which require that it prepare and approve plans for handling loans to borrowers in danger of defaulting on their loan repayments. For two troubled loans, officials said DOE did not prepare a formal plan, as called for in its policy, in part because implementing procedures were incomplete. DOE did not adhere to its policy requiring it to evaluate the effectiveness of its loan monitoring because of continuing staff vacancies.
Without conducting these evaluations, DOE management cannot assess the adequacy of its monitoring efforts and thus be reasonably assured that it is effectively managing risks associated with its loan programs. As a result, DOE was making loans and disbursing funds from 2009 through 2013 without a fully developed loan monitoring function. During this time, inconsistent adherence to policies limited assurance that DOE was completing activities important to monitoring the loans and protecting the government's interest. GAO recommends that DOE (1) staff key positions, (2) update management and reporting software, (3) complete policies for loan monitoring, and (4) evaluate the effectiveness of its loan monitoring. DOE generally agreed with the recommendations.
Over the past two decades, several efforts have been launched to improve federal government accountability and results, such as the strategic plans and annual performance reports required under the Government Performance and Results Act of 1993 (GPRA). The act was designed to provide executive and congressional decision makers with objective information on the relative effectiveness and efficiency of federal programs and spending. In 2002, the Office of Management and Budget (OMB) introduced the Program Assessment Rating Tool (PART) as a key element of the budget and performance integration initiative under President George W. Bush’s governmentwide Management Agenda. PART is a standard set of questions meant to serve as a diagnostic tool, drawing on available program performance and evaluation information to form conclusions about program benefits and recommend adjustments that may improve results. The success of these efforts has been constrained by lack of access to credible evidence on program results. We previously reported that the PART review process has stimulated agencies to increase their evaluation capacity and available information on program results. After 4 years of PART reviews, however, OMB rated 17 percent of 1,015 programs “results not demonstrated”—that is, did not have acceptable performance goals or performance data. Many federal programs, while tending to have limited evaluation resources, require program evaluation studies, rather than performance measures, in order to distinguish a program’s effects from those of other influences on outcomes. Program evaluations are systematic studies that assess how well a program is working, and they are individually tailored to address the client’s research question. Process (or implementation) evaluations assess the extent to which a program is operating as intended. 
Outcome evaluations assess the extent to which a program is achieving its outcome- oriented objectives but may also examine program processes to understand how outcomes are produced. When external factors such as economic or environmental conditions are known to influence a program’s outcomes, an impact evaluation may be used in an attempt to measure a program’s net effect by comparing outcomes with an estimate of what would have occurred in the absence of the program intervention. A number of methodologies are available to estimate program impact, including experimental and nonexperimental designs. Concern about the quality of social program evaluation has led to calls for greater use of randomized experiments—a method used more widely in evaluations of medical than social science interventions. Randomized controlled trials (or randomized experiments) compare the outcomes for groups that were randomly assigned either to the treatment or to a nonparticipating control group before the intervention, in an effort to control for any systematic difference between the groups that could account for a difference in their outcomes. A difference in these groups’ outcomes is believed to represent the program’s impact. While random assignment is considered a highly rigorous approach in assessing program effectiveness, it is not the only rigorous research design available and is not always feasible. The Coalition for Evidence-Based Policy is a private, nonprofit organization that was sponsored by the Council for Excellence in Government from 2001 until the Council closed in 2009. The Coalition aims to improve the effectiveness of social programs by encouraging federal agencies to fund rigorous studies—particularly randomized controlled trials—to identify effective interventions and to provide strong incentives and assistance for federal funding recipients to adopt such interventions. 
Coalition staff have advised OMB and federal agencies on how to identify rigorous evaluations of program effectiveness, and they manage a Web site called “Social Programs That Work” that provides examples of evidence-based programs to “provide policymakers and practitioners with clear, actionable information on what works, as demonstrated in scientifically-valid studies. . . .” In 2008, the Coalition launched a similar but more formal effort, the Top Tier Evidence initiative, to identify only interventions that have been shown in “well-designed and implemented randomized controlled trials, preferably conducted in typical community settings, to produce sizeable, sustained benefits to participants and/or society.” At the same time, it introduced an advisory panel of evaluation researchers and former government officials to make the final determination. The Coalition has promoted the adoption of this criterion in legislation to direct federal funds toward strategies supported by rigorous evidence. By identifying interventions meeting this criterion, the Top Tier Evidence initiative aims to assist agencies, grantees, and others in implementing such provisions effectively. Because of the flexibility provided to recipients of many federal grants, achieving these federal programs’ goals relies heavily on agencies’ ability to influence their state and local program partners’ choice of activities. In the past decade, several public and private efforts have been patterned after the evidence-based practice model in medicine to summarize available effectiveness research on social interventions to help managers and policymakers identify and adopt effective practices. The Department of Education, HHS, and Department of Justice support six initiatives similar to the Coalition’s to identify effective social interventions. 
These initiatives conduct systematic searches for and review the quality of evaluations of intervention effectiveness in a given field and have been operating for several years. We examined the processes used by these six ongoing federally supported efforts to identify effective interventions in order to provide insight into the choices of procedures and criteria that other independent organizations made in attempting to achieve a similar outcome to that of the Top Tier initiative: identifying interventions with rigorous evidence of effectiveness. The Top Tier initiative, however, aims to identify not all effective interventions but only those supported by the most definitive evidence of effectiveness. The process that each of these initiatives (including Top Tier) uses to identify effective interventions is summarized in appendix I. In 1997, the Agency for Healthcare Research and Quality (AHRQ) established the Evidence-based Practice Centers (EPC) (there are currently 14) to provide evidence on the relative benefits and risks of a wide variety of health care interventions to inform health care decisions. EPCs perform comprehensive reviews and synthesize scientific evidence to compare health treatments, including pharmaceuticals, devices, and other types of interventions. The reviews, with a priority on topics that impose high costs on the Medicare, Medicaid, or State Children's Health Insurance (SCHIP) programs, provide evidence about effectiveness and harms and point out gaps in research. The reviews are intended to help clinicians and patients choose the best tests and treatments and to help policy makers make informed decisions about health care services and quality improvement. HHS established the Guide to Community Preventive Services (the Community Guide) in 1996 to provide evidence-based recommendations and findings about public health interventions and policies to improve health and promote safety.
With the support of the Centers for Disease Control and Prevention (CDC), the Community Guide synthesizes the scientific literature to identify the effectiveness, economic efficiency, and feasibility of program and policy interventions to promote community health and prevent disease. The Task Force on Community Preventive Services, an independent, nonfederal, volunteer body of public health and prevention experts, guides the selection of review topics and uses the evidence gathered to develop recommendations to change risk behaviors, address environmental and ecosystem challenges, and reduce disease, injury, and impairment. Intended users include public health professionals, legislators and policy makers, community-based organizations, health care service providers, researchers, employers, and others who purchase health care services. CDC established the HIV/AIDS Prevention Research Synthesis (PRS) in 1996 to review and summarize HIV behavioral prevention research literature. PRS conducts systematic reviews to identify evidence-based HIV behavioral interventions with proven efficacy in preventing the acquisition or transmission of HIV infection (reducing HIV-related risk behaviors, sexually transmitted diseases, HIV incidence, or promoting protective behaviors). These reviews are intended to translate scientific research into practice by providing a compendium of evidence-based interventions to HIV prevention planners and providers and state and local health departments for help with selecting interventions best suited to the needs of the community. The Office of Juvenile Justice and Delinquency Prevention established the Model Programs Guide (MPG) in 2000 to identify effective programs to prevent and reduce juvenile delinquency and related risk factors such as substance abuse. 
MPG conducts reviews to identify effective intervention and prevention programs on the following topics: delinquency; violence; youth gang involvement; alcohol, tobacco, and drug use; academic difficulties; family functioning; trauma exposure or sexual activity and exploitation; and accompanying mental health issues. MPG produces a database of intervention and prevention programs intended for juvenile justice practitioners, program administrators, and researchers.

The Substance Abuse and Mental Health Services Administration (SAMHSA) established the National Registry of Evidence-based Programs and Practices (NREPP) in 1997 to provide the public with information about the scientific basis and practicality of interventions that prevent or treat mental health and substance abuse disorders. NREPP reviews interventions to identify those that promote mental health and prevent or treat mental illness, substance use, or co-occurring disorders among individuals, communities, or populations. NREPP produces a database of interventions that can help practitioners and community-based organizations identify and select interventions that may address their particular needs and match their specific capacities and resources.

The Institute of Education Sciences established the What Works Clearinghouse (WWC) in 2002 to provide educators, policymakers, researchers, and the public with a central source of scientific evidence on what improves student outcomes. WWC reviews research on the effectiveness of replicable educational interventions (programs, products, practices, and policies) to improve student achievement in areas such as mathematics, reading, early childhood education, English language, and dropout prevention. The WWC Web site reports information on the effectiveness of interventions through a searchable database and summary reports on the scientific evidence.
The Coalition provides a clear public description on its Web site of the first two phases of its process—search and selection to identify candidate interventions. It primarily searches other evidence-based practice Web sites and solicits nominations from experts and the public. Staff post their selection criteria and a list of the interventions and studies reviewed on their Web site. However, its public materials have not been as transparent about the criteria and process used in the last two phases of its process—review and synthesis of study results to determine whether an intervention met the Top Tier criteria. Although the Coalition provides brief examples of the panel’s reasoning in making Top Tier selections, it has not fully reported the panel’s discussion of how to define sizable and sustained effects in the absence of detailed guidance or the variation in members’ overall assessments of the interventions.

Through its Web site and e-mailed announcements, the Coalition has clearly described how it identified interventions by searching the strongest evidence category of 15 federal, state, and private Web sites profiling evidence-based practices and by soliciting nominations from federal agencies, researchers, and the general public. Its Web site posting clearly indicated the initiative’s search and selection criteria: (1) early childhood interventions (for ages 0–6) in the first phase of the initiative and interventions for children and youths (ages 7–18) in the second phase (starting in February 2009) and (2) interventions showing positive results in well-designed and implemented randomized experiments. Coalition staff then searched electronic databases and consulted with researchers to identify any additional randomized studies of the interventions selected for review.
The July 2008 announcement of the initiative included its August 2007 “Checklist for Reviewing a Randomized Controlled Trial of a Social Program or Project, to Assess Whether It Produced Valid Evidence.” The Checklist describes the defining features of a well-designed and implemented randomized experiment: equivalence of treatment and control groups throughout the study, valid measurement and analysis, and full reporting of outcomes. It also defines a strong body of evidence as consisting of two or more randomized experiments or one large multisite study. In the initial phase (July 2008 through February 2009), Coalition staff screened studies of 46 early childhood interventions for design or implementation flaws and provided the advisory panel with brief summaries of the interventions and their results and reasons why they screened out candidates they believed clearly did not meet the Top Tier standard. Reasons for exclusion included small sample sizes, high sample attrition (both during and after the intervention), follow-up periods of less than 1 year, questionable outcome measures (for example, teachers’ reports of their students’ behavior), and positive effects that faded in later follow-up. Staff also excluded interventions that lacked confirmation of effects in a well-implemented randomized study. Coalition staff recommended three candidate interventions from their screening review; advisory panel members added two more for consideration after reviewing the staff summaries (neither of which was accepted as top tier by the full panel). While the Top Tier Initiative explains each of its screening decisions to program developers privately, on its Web site it simply posts a list of the interventions and studies reviewed, along with full descriptions of interventions accepted as top tier and a brief discussion of a few examples of the panel’s reasoning. 
The Top Tier initiative’s public materials are less transparent about the process and criteria used to determine whether an intervention met the Top Tier standard than about candidate selection. One panel member, the lead reviewer, explicitly rates the quality of the evidence on each candidate intervention using the Checklist and rating form. Coalition staff members also use the Checklist to review the available evidence and prepare detailed study reviews that identify any significant limitations. The full advisory panel then discusses the available evidence on the recommended candidates and holds a secret ballot on whether an intervention meets the Top Tier standard, drawing on the published research articles, the staff review, and the lead reviewer’s quality rating and Top Tier recommendation. The advisory panel discussions did not generally dispute the lead reviewer’s study quality ratings (on quality of overall design, group equivalence, outcome measures, and analysis reporting) but, instead, focused on whether the body of evidence met the Top Tier standard (for sizable, sustained effects on important outcomes in typical community settings). The Checklist also includes two criteria or issues that were not explicit in the initial statement of the Top Tier standard—whether the body of evidence showed evidence of effects in more than one site (replication) and provided no strong countervailing evidence. Because neither the Checklist nor the rating form provides definitions of how large a sizable effect should be, how long a sustained effect should last, or what constituted an important outcome, the panel had to rely on its professional judgment in making these assessments. Although a sizable effect was usually defined as one passing tests of statistical significance at the 0.05 level, panel members raised questions about whether particular effects were sufficiently large to have practical importance. 
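The distinction the panel wrestled with, statistical significance versus practical importance, can be illustrated with a short calculation. The sketch below is illustrative only (it is not part of the Checklist or the panel's procedure) and uses a normal approximation: with very large samples, even a trivially small standardized effect passes a 0.05 significance test, while a moderate effect in a small study does not.

```python
from statistics import NormalDist
from math import sqrt

def two_sample_p_value(d: float, n_per_group: int) -> float:
    """Two-sided p-value for a standardized mean difference d between
    two equal-sized groups (normal approximation, outcome sd = 1)."""
    se = sqrt(2 / n_per_group)  # standard error of the difference
    z = d / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A trivially small effect (d = 0.05) is "significant" with huge samples...
print(two_sample_p_value(0.05, 10_000) < 0.05)  # True
# ...while a moderate effect (d = 0.40) in a small study is not.
print(two_sample_p_value(0.40, 20) < 0.05)      # False
```

This is why the panel's questions about whether a statistically significant effect was sizable enough to matter are a separate judgment from the significance test itself.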
The panel often turned to members with subject matter expertise for advice on these matters. One member cautioned against relying too heavily on the reported results of statistical tests, because some studies, by conducting a very large number of comparisons, appeared to violate the assumptions of those tests and, thus, probably identified some differences between experimental groups as statistically significant simply by chance. The Checklist originally indicated a preference for data on long-term outcomes obtained a year after the intervention ended, preferably longer, noting that “longer-term effects . . . are of greatest policy and practical importance.” Panel members disagreed over whether effects measured no later than the end of the second grade—at the end of the intervention—were sufficiently sustained and important to qualify as top tier, especially in the context of other studies that tracked outcomes to age 15 or older. One panel member questioned whether it was realistic to expect the effects of early childhood programs to persist through high school, especially for low-cost interventions; others noted that the study design did not meet the standard because it did not collect data a year after the intervention ended. In the end, a majority (but not all) of the panel accepted this intervention as top tier because the study found that effects persisted over all 3 program years, and they agreed to revise the language in the Checklist accordingly.

Panel members disagreed on what constituted an important outcome. Two noted a pattern of effects in one study on cognitive and academic tests across ages 3, 5, 8, and 18. Another member did not consider cognitive tests an important enough outcome and pointed out that the effects diminished over time and did not lead to effects on other school-related behavioral outcomes such as special education placement or school dropout.
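The member's caution about large numbers of comparisons reflects simple arithmetic: with independent tests each run at the 0.05 level, the chance of at least one spurious "significant" finding grows quickly with the number of comparisons. A minimal, purely illustrative sketch:

```python
# Probability of at least one false positive among k independent
# significance tests at the 0.05 level, when no real differences exist.
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests: P(at least one spurious finding) = {p_any:.2f}")
# 1 -> 0.05, 5 -> 0.23, 20 -> 0.64, 100 -> 0.99
```

A study reporting dozens of outcome comparisons without correction is therefore likely to show some "significant" effects by chance alone.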
Another member thought it was unreasonable to expect programs for very young children (ages 1–3) to show an effect on a child at age 18, given all their other experiences in the intervening years. A concern related to judging importance was whether and how to incorporate the cost of the intervention into the intervention assessment. On one hand, there was no mention of cost in the Checklist or intervention rating form. On the other hand, panel members frequently raised the issue when considering whether they were comfortable recommending the intervention to others. One aspect of this was proportionality: they might accept an outcome of less policy importance if the intervention was relatively inexpensive but would not if it was expensive. Additionally, one panel member feared that an expensive intervention that required a lot of training and monitoring to produce results might be too difficult to successfully replicate in more ordinary settings. In the February 2009 meeting, it was decided that program cost should not be a criterion for Top Tier status but should be considered and reported with the recommendation, if deemed relevant. The panel discussed whether a large multisite experiment should qualify as evidence meeting the replication standard. One classroom-based intervention was tested by randomly assigning 41 schools nationwide. Because the unit of analysis was the school, results at individual schools were not analyzed or reported separately but were aggregated to form one experimental–control group comparison per outcome measure. Some panel members considered this study a single randomized experiment; others accepted it as serving the purpose of a replication, because effects were observed over a large number of different settings. In this case, limitations in the original study report added to their uncertainty. 
Some panel members stated that if they had learned that positive effects had been found in several schools rather than in only a few odd cases, they would have been more comfortable ruling this multisite experiment a replication. Because detailed guidance was lacking, panel members, relying on individual judgment, arrived at split decisions (4–3 and 3–5) on two of the first four early childhood interventions reviewed, and only one intervention received a unanimous vote. Panel members expressed concern that because some criteria were not specifically defined, they had to use their professional judgment yet found that they interpreted the terms somewhat differently. This problem may have been aggravated by the fact that, as one member noted, they had not had a “perfect winner” that met all the top tier criteria. Indeed, a couple of members expressed their desire for a second category, like “promising,” to allow them to communicate their belief in an intervention’s high quality, despite the fact that its evidence did not meet all their criteria. In a discussion of their narrow (4–3) vote at their next meeting (February 2009), members suggested that they take more time to discuss their decisions, set a requirement for a two-thirds majority agreement, or ask for votes from members who did not attend the meeting. The latter suggestion was countered with concern that absent members would not be aware of the discussion, and the issue was deferred to see whether these differences might be resolved with time and discussion of other interventions. Disagreement over Top Tier status was less of a problem with later reviews, held in February and July 2009, when none of the votes on Top Tier status were split decisions and three of seven votes were unanimous. The Coalition reports that it plans to supplement guidance over time by accumulating case decisions rather than developing more detailed guidance on what constitutes sizable and sustained effects.
The December 2008 and May 2009 public releases of the results of the Top Tier Evidence review of early childhood interventions provided brief discussion of examples of the panel’s reasoning for accepting or not accepting specific interventions. In May 2009, the Coalition also published a revised version of the Checklist that removed the preference for outcomes measured a year after the intervention ended, replacing it with a less specific reference: “over a long enough period to determine whether the intervention’s effects lasted at least a year, hopefully longer.” At the February 2009 meeting, Coalition staff stated that they had received a suggestion from external parties to consider introducing a second category of “promising” interventions that did not meet the top tier standard. Panel members agreed to discuss the idea further but noted the need to provide clear criteria for this category as well. For example, they said it was important to distinguish interventions that lacked good quality evaluations (and thus had unknown effectiveness) from those that simply lacked replication of sizable effects in a second randomized study. It was noted that broadening the criteria to include studies (and interventions) that the staff had previously screened out may require additional staff effort and, thus, resources beyond those of the current project. The Top Tier initiative’s criteria for assessing evaluation quality conform to general social science research standards, but other features of the overall process differ from common practice for drawing conclusions about intervention effectiveness from a body of research. The initiative’s choice of a broad topic fails to focus the review on how to achieve a specific outcome. Its narrow evidence criteria yield few recommendations and limited information on what works to inform policy and practice decisions. 
The Top Tier and all six of the agency-supported review initiatives we examined assess evaluation quality on standard dimensions to determine whether a study provides credible evidence on effectiveness. These dimensions include the quality of research design and execution, the equivalence of treatment and comparison groups (as appropriate), the adequacy of samples, the validity and reliability of outcome measures, and the appropriateness of statistical analyses and reporting. Some initiatives included additional criteria or gave greater emphasis to some issues than others. The six agency-supported initiatives also employed several features to ensure the reliability of their quality assessments.

In general, assessing the quality of an impact evaluation’s study design and execution involves considering how well the selected comparison protects against the risk of bias in estimating the intervention’s impact. For random assignment designs, this primarily consists of examining whether the assignment process was truly random, the experimental groups were equivalent before the intervention, and the groups remained separate and otherwise equivalent throughout the study. For other designs, the reviewer must examine the assignment process even more closely to detect whether a potential source of bias (such as higher motivation among volunteers) may have been introduced that could account for any differences observed in outcomes between the treatment and comparison groups. In addition to confirming the equivalence of the experimental groups at baseline, several review initiatives examine the extent of crossover or “contamination” between experimental groups throughout the study because this could blur the study’s view of the intervention’s true effects. All seven review initiatives we examined assess whether a study’s sample size was large enough to detect effects of a meaningful size.
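The initiatives' sample-size assessment is, in essence, a statistical power calculation. The sketch below uses a standard normal-approximation formula (not any initiative's actual procedure) to estimate the group size needed for a two-sample comparison of means at 80 percent power and a two-sided 0.05 significance level:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate per-group sample size needed for a two-sample
    comparison of means to detect a given standardized effect."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = nd.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # 63: a "medium" effect needs ~63 subjects per group
print(n_per_group(0.2))  # 393: a "small" effect needs about six times as many
```

A study much smaller than these benchmarks can easily fail to detect a real effect of meaningful size, which is why the review initiatives treat small samples as a quality concern.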
They also assess whether any sample attrition (or loss) over the course of the study was severe enough to question how well the remaining members represented the original sample or whether differential attrition may have created significant new differences between the experimental groups. Most review forms ask whether tests for statistical significance of group differences accounted for key study design features (for example, random assignment of groups rather than individuals), as well as for any deviations from initial group assignment (intention-to-treat analysis). The rating forms vary in structure and detail across the initiatives. For example, “appropriateness of statistical analyses” can be found under the category “reporting of the intervention’s effects” on one form and in a category by itself on another form. In the Model Programs Guide rating form, “internal validity”—or the degree to which observed changes can be attributed to the intervention—is assessed through how well both the research design and the measurement of program activities and outcomes controlled for nine specific threats to validity. The EPC rating form notes whether study participants were blind to the experimental groups they belonged to (standard practice in studies of medical treatments but less common in studies of social interventions), while the PRS form does not directly address study blinding in assessing the extent of bias in forming study groups. The major difference in rating study quality between the Top Tier initiative and the six other initiatives is a product of the top tier standard as set out in certain legislative provisions: the other initiatives accept well-designed, well-conducted quasi-experimental studies as credible evidence.
Most of the federally supported initiatives recognize well-conducted randomized experiments as providing the most credible evidence of effectiveness by assigning them their highest rating for quality of research design, but three do not require them for interventions to receive their highest evidence rating: EPC, the Community Guide, and the National Registry of Evidence-based Programs and Practices (NREPP). The Coalition has, since its inception, promoted randomized experiments as the highest-quality, unbiased method for assessing an intervention’s true impact. Federal officials provided a number of reasons for including well-conducted quasi-experimental studies: (1) random assignment is not feasible for many of the interventions they studied, (2) study credibility is determined not by a particular research design but by its execution, (3) evidence from carefully controlled experimental settings may not reflect the benefits and harms observed in everyday practice, and (4) too few high-quality, relevant random assignment studies were available. The Top Tier initiative states a preference for studies that test interventions in typical community settings over those run under ideal conditions but does not explicitly assess the quality (or fidelity) of program implementation. The requirement that results be shown in two or more randomized studies is an effort to demonstrate the applicability of intervention effects to other settings. However, four other review initiatives do explicitly assess intervention fidelity—the Community Guide, MPG, NREPP, and PRS—through either describing in detail the intervention’s components or measuring participants’ level of exposure. Poor implementation fidelity can weaken a study’s ability to detect an intervention’s potential effect and thus lessen confidence in the study as a true test of the intervention model.
EPC and the Community Guide assess how well a study’s selection of population and setting matched those in which it is likely to be applied; any notable differences in conditions would undermine the relevance or generalizability of study results to what can be expected in future applications. All seven initiatives have experienced researchers with methodological and subject matter expertise rate the studies and use written guidance or codebooks to help ensure ratings consistency. Codebooks varied but most were more detailed than the Top Tier Checklist. Most of the initiatives also provided training to ensure consistency of ratings across reviewers. In each initiative, two or more reviewers rate the studies independently and then reach consensus on their ratings in consultation with other experts (such as consultants to or supervisors of the review). After the Top Tier initiative’s staff screening review, staff and one advisory panel member independently review the quality of experimental evidence available on an intervention before the panel as a group discusses and votes on whether it meets the top tier standard. However, because the panel members did not independently rate study quality or the body of evidence, it is unknown how much of the variation in their overall assessments of the interventions reflected differences in their application of the criteria making up the Top Tier standard.

The Top Tier initiative’s topic selection, emphasis on long-term effects, and narrow evidence criteria combine to provide limited information on the effectiveness of approaches for achieving specific outcomes. It is standard practice in research and evaluation syntheses to pose a clearly defined research question—such as, Which interventions have been found effective in achieving specific outcomes of interest for a specific population?—and then assemble and summarize the credible, relevant studies available to answer that question.
A well-specified research question clarifies the objective of the research and guides the selection of eligibility criteria for including studies in a systematic evidence review. In addition, some critics of systematic reviews in health care recommend using the intervention’s theoretical framework or logic model to guide analyses toward answering questions about how and why an intervention works when it does. Evaluators often construct a logic model—a diagram showing the links between key intervention components and desired results—to explain the strategy or logic by which it is expected to achieve its goals. The Top Tier initiative’s approach focuses on critically appraising and summarizing the evidence without having first formulated a precise, unambiguous research question and the chain of logic underlying the interventions’ hypothesized effects on the outcomes of interest. Neither of the Top Tier initiative’s topic selections—interventions for children ages 0–6 or youths ages 7–18—identifies either a particular type of intervention, such as preschool or parent education, or a desired outcome, such as healthy cognitive and social development or prevention of substance abuse, that can frame and focus a review as in the other effectiveness reviews. The other initiatives have a clear purpose and focus: learning what has been effective in achieving a specific outcome or set of outcomes (for example, reducing youth involvement in criminal activity). Moreover, recognizing that an intervention might be successful on one outcome but not another, EPC, NREPP, and WWC rate the effectiveness of an intervention by each outcome. Even EPC, whose scope is the broadest of the initiatives we reviewed, focuses individual reviews by selecting a specific health care topic through a formal process of soliciting and reviewing nominations from key stakeholders, program partners, and the public.
Their criteria for selecting review topics include disease burden for the general population or a priority population (such as children), controversy or uncertainty over the topic, costs associated with the condition, potential impact for improving health outcomes or reducing costs, relevance to federal health care programs, and availability of evidence and reasonably well-defined patient populations, interventions, and outcome measures. The Top Tier initiative’s emphasis on identifying interventions with long-term effects—up to 15 years later for some early childhood interventions—also leads away from focusing on how to achieve a specific outcome and could lead to capitalizing on chance results. A search for interventions with “sustained effects on important life outcomes,” regardless of the content area, means assembling results on whatever outcomes—special education placement, high school graduation, teenage pregnancy, employment, or criminal arrest—the studies happen to have measured. This is of concern because it is often not clear why some long-term outcomes were studied for some interventions and not others. Moreover, focusing on the achievement of long-term outcomes, without regard to the achievement of logically related short-term outcomes, raises questions about the meaning and reliability of those purported long-term program effects. For example, without a logic model or hypothesis linking preschool activities to improving children’s self-control or some other intermediate outcome, it is unclear why one would expect to see effects on their delinquent behavior as adolescents. Indeed, one advisory panel member raised questions about the mechanism behind long-term effects measured on involvement in crime when effects on more conventional (for example, academic) outcomes disappeared after a few years. Later, he suggested that the panel should consider only outcomes the researcher identified as primary.
Coalition staff said that reporting chance results is unlikely because the Top Tier criteria require the replication of results in multiple (or multi-site) studies, and they report any nonreplicated findings as needing confirmation in another study. Unlike efforts to synthesize evaluation results in some systematic evidence reviews, the Top Tier initiative examines evidence on each intervention independently, without reference to similar interventions or, alternatively, to different interventions aimed at the same goal. Indeed, of the initiatives we reviewed, only EPC and the Community Guide directly compare the results of several similar interventions to gain insight into the conditions under which an approach may be successful. (WWC topic reports display effectiveness ratings by outcome for all interventions they reviewed in a given content area, such as early reading, but do not directly compare their approaches.) These two initiatives explicitly aim to build knowledge about what works in an area by developing logic models in advance to structure their evaluation review by defining the specific populations and outcome measures of interest. A third, MPG, considers the availability of a logic model and the quality of an intervention’s research base in rating the quality of its evidence. Where appropriate evidence is available, EPCs conduct comparative effectiveness studies that directly compare the effectiveness, appropriateness, and safety of alternative approaches (such as drugs or medical procedures) to achieving the same health outcome. Officials at the other initiatives explained that they did not compare or combine results from different interventions because they did not find them similar enough to treat as replications of the same approach. 
However, most initiatives post the results of their reviews on their Web sites by key characteristics of the intervention (for example, activities or setting), outcomes measured, and population, so that viewers can search for particular types of interventions or compare their results. The Top Tier initiative’s narrow primary criterion for study design quality—randomized experiments only—diverges from the other initiatives and limits the types of interventions it considers. In addition, the exclusivity of its top tier standard also diverges from the more common approach of rating the credibility of study findings along a continuum and resulted in the panel’s recommending only 6 of 63 interventions reviewed for ages 0–18 as providing “sizable, sustained effects on important life outcomes.” Thus, although practitioners are not its primary audience, the Top Tier initiative provides them with limited guidance on what works. Two basic dimensions are assessed in effectiveness reviews: (1) the credibility of the evidence on program impact provided by an individual study or body of evidence, based on research quality and risk of bias in the individual studies, and (2) the size and consistency of effects observed in those studies. The six other evidence reviews report the credibility of the evidence on the interventions’ effectiveness in terms of their level of confidence in the findings—either with a numerical score (0 to 4, NREPP) or on a scale (high, moderate, low, or insufficient, EPC). Scales permit an initiative to communicate intermediate levels of confidence in an intervention’s results and to distinguish approaches with “promising” evidence from those with clearly inadequate evidence.
Federal officials from initiatives using this more inclusive approach indicated that they believed that it provides more useful information and a broader range of choices for practitioners and policy makers who must decide which intervention is most appropriate and feasible for their local setting and available resources. To provide additional guidance to practitioners looking for an intervention to adopt, NREPP explicitly rates the interventions’ readiness for dissemination by assessing the quality and availability of implementation materials, resources for training and ongoing support, and the quality assurance procedures the program developer provides. Some initiatives, like Top Tier, provide a single rating of the effectiveness of an intervention by combining ratings of the credibility and size (and consistency, if available) of intervention effects. However, combining scores creates ambiguity in an intermediate strength of evidence rating—it could mean that reviewers found strong evidence of modest effects or weak evidence of strong effects. Other initiatives report on the credibility of results and the effect sizes separately. For example, WWC reports three summary ratings for an intervention’s result on each outcome measured: an improvement index, providing a measure of the size of the intervention’s effect; a rating of effectiveness, summarizing both study quality and the size and consistency of effects; and an extent of evidence rating, reflecting the number and size of effectiveness studies reviewed. Thus, the viewer can scan and compare ratings on all three indexes in a list of interventions rank-ordered by the improvement index before examining more detailed information about each intervention and its evidence of effectiveness. 
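The improvement index translates an effect size into percentile points: the expected change in percentile rank for an average comparison-group member if given the intervention. Assuming normally distributed outcomes, the conversion can be sketched as follows (the function name is ours, for illustration):

```python
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    """Expected change, in percentile points, in the outcome ranking
    of an average comparison-group member given the intervention,
    assuming normally distributed outcomes."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

print(round(improvement_index(0.25), 1))   # 9.9: a gain of ~10 percentile points
print(round(improvement_index(-0.10), 1))  # -4.0: a small negative effect
```

Expressing effects in percentile points gives practitioners a more intuitive sense of magnitude than a standardized effect size alone.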
In our review of the literature on program evaluation methods, we found general agreement that well-conducted randomized experiments are best suited for assessing intervention effectiveness where multiple causal influences lead to uncertainty about what has caused observed results, but also that they are often difficult to carry out. Randomized experiments are considered best suited for interventions in which exposure to the intervention can be controlled and the treatment and control groups’ experiences remain separate, intact, and distinct throughout the study. The evaluation methods literature also describes a variety of issues to consider in planning an evaluation of a program or of an intervention’s effectiveness, including the expected use of the evaluation, the nature and implementation of program activities, and the resources available for the evaluation. Selecting a methodology comes only after a determination that an effectiveness evaluation is warranted. It then requires balancing the need for sufficient rigor to draw firm conclusions with practical considerations of resources and the cooperation and protection of participants. Several other research designs are generally considered good alternatives to randomized experiments, especially when accompanied by specific features that help strengthen conclusions by ruling out plausible alternative explanations. In reviewing the literature on evaluation research methods, we found that randomized experiments are considered appropriate for assessing intervention effectiveness only after an intervention has met minimal requirements for an effectiveness evaluation—that the intervention is important, clearly defined, and well-implemented and the evaluation itself is adequately resourced.
Conducting an impact evaluation of a social intervention often requires the expenditure of significant resources to both collect and analyze data on program results and estimate what would have happened in the absence of the program. Thus, impact evaluations need not be conducted for all interventions but should be reserved for cases in which the effort and cost appear warranted. An impact evaluation is more likely to be of interest when the intervention addresses an important problem, there is interest in adopting the intervention elsewhere, and preliminary evidence suggests its effects may be positive, if uncertain. Of course, if the intervention’s effectiveness were known, then there would be no need for an evaluation. And if the intervention were known or believed to be ineffective or harmful, then it would seem wasteful as well as perhaps unethical to subject people to such a test. In addition to federal regulations concerning the protection of human research subjects, the ethical principles of relevant professional organizations require evaluators to try to avoid subjecting study participants to unreasonable risk, harm, or burden. This includes obtaining their fully informed consent. An impact evaluation is more likely to provide useful information about what works when the intervention consists of clearly defined activities and goals and has been well implemented. Having clarity about the nature of intended activities and evidence that critical intervention components were delivered to the intended targets helps strengthen confidence that those activities caused the observed results; it also improves the ability to replicate the results in another study. Confirming that the intervention was carried out as designed helps rule out a common explanation for why programs do not achieve their goals; when done before collecting expensive outcome data, it can also avoid wasting resources.
Obtaining agreement with stakeholders on which outcomes to consider in defining success also helps ensure that the evaluation’s results will be credible and useful to its intended audience. While not required, having a well-articulated logic model can help ensure shared expectations among stakeholders and define measures of a program’s progress toward its ultimate goals. Regardless of the evaluation approach, an impact evaluation may not be worth the effort unless the study is adequately staffed and funded to ensure the study is carried out rigorously. If, for example, an intervention’s desired outcome consists of participants’ actions back on the job after receiving training, then it is critical that all reasonable efforts are made to ensure that high-quality data on those actions are collected from as many participants as possible. Significant amounts of missing data raise the possibility that the persons reached are different from those who were not reached (perhaps more cooperative) and thus weaken confidence that the observed results reflect the true effect of the intervention. Similarly, it is important to invest in valid and reliable measures of desired outcomes to avoid introducing error and imprecision that could blur the view of the intervention’s effect. We found in our review of the literature on evaluation research methods that randomized experiments are considered best suited for assessing intervention effectiveness where multiple causal influences lead to uncertainty about program effects and it is possible, ethical, and practical to conduct and maintain random assignment to minimize the effect of those influences. As noted earlier, when factors other than the intervention are expected to influence change in the desired outcome, the evaluator cannot be certain how much of any observed change reflects the effect of the intervention, as opposed to what would have occurred anyway without it.
In contrast, controlled experiments are usually not needed to assess the effects of simple, comparatively self-contained processes like processing income tax returns. The volume and accuracy of tax returns processed simply reflect the characteristics of the returns filed and the agency’s application of its rules and procedures. Thus, any change in the accuracy of processed returns is likely to result from change in the characteristics of either the returns or the agency’s processes. An evaluation assessing the impact of job training on participants’ employment and earnings, however, would need to control for other major influences on those outcomes—features of the local job market and the applicant pool. In this case, randomly assigning job training applicants (within a local job market) to either participate in the program (forming the treatment group) or not participate (forming the control group) helps ensure that the treatment and control groups will be equally affected. Random assignment is, of course, suited only to interventions in which the evaluator or program manager can control whether a person, group, or other entity is enrolled in or exposed to the intervention. Control over program exposure rules out the possibility that the process by which experimental groups are formed (especially, self-selection) may reflect preexisting differences between them that might also affect the outcome variable and, thus, obscure the treatment effect. For example, tobacco smokers who volunteer for a program to quit smoking are likely to be more highly motivated than tobacco smokers who do not volunteer. Thus, smoking cessation programs should randomly assign volunteers to receive services and compare them to other volunteers who do not receive services to avoid confounding the effects of the services with the effects of volunteers’ greater motivation.
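The logic of randomizing volunteers and then comparing group means can be sketched in a few lines. All values below are simulated and illustrative: volunteers are assigned by chance, and the service is given a built-in 15-percentage-point effect on quitting.

```python
import random
from statistics import mean

def randomly_assign(participants, seed=12345):
    """Shuffle the volunteer pool and split it in half so that chance,
    not self-selection, determines who receives services."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]          # (treatment, control)

def quit_probability(served):
    """Hypothetical chance of quitting smoking, with and without services."""
    return 0.20 + (0.15 if served else 0.0)

volunteers = list(range(100))                # hypothetical volunteer IDs
treatment, control = randomly_assign(volunteers)

# Simulate follow-up outcomes and estimate the effect as the simple
# difference in group means; randomization makes this comparison fair.
rng = random.Random(0)
outcomes = {p: rng.random() < quit_probability(True) for p in treatment}
outcomes.update({p: rng.random() < quit_probability(False) for p in control})
effect = (mean(outcomes[p] for p in treatment)
          - mean(outcomes[p] for p in control))
```

Because both groups are drawn by chance from the same volunteer pool, motivation and other unmeasured traits are balanced in expectation, so the difference in means estimates the effect of the services alone.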
Random assignment is well suited for programs that are not universally available to the entire eligible population, so that some people will be denied access to the intervention in any case. This addresses one concern about whether a control group experiment is ethical. In fact, in many field settings, assignment by lottery has often been considered the most equitable way to assign individuals to participate in programs with limits on enrollment. Randomized experiments are especially well suited to demonstration programs for which a new approach is tested in a limited way before committing to apply it more broadly. Another ethical concern is that the control group should not be harmed by withholding needed services, but this can be averted by providing the control group with whatever services are considered standard practice. In this case, however, the evaluation will no longer be testing whether a new approach is effective at all; it will test whether it is more effective than standard practice. Random assignment is also best suited for interventions in which the treatment and control groups’ experiences remain separate, intact, and distinct throughout the life of the study so that any differences in outcomes can be confidently attributed to the intervention. It is important that control group participants not access comparable treatment in the community on their own (referred to as contamination). Their doing so could blur the distinction between the two groups’ experiences. It is also preferred that control group and treatment group members not communicate, because knowing that they are being treated differently might influence their perceptions of their experience and, thus, their behavior. Sometimes people selected for an experimental treatment are motivated by the extra attention they receive; sometimes those not selected are motivated to work harder to compete with their peers. 
Thus, random assignment works best when participants have no strong beliefs about the advantage of the intervention being tested and information about their experimental status is not publicly known. For example, in comparing alternative reading curriculums in kindergarten classrooms, an evaluator needs to ensure that the teachers are equally well trained and do not have preexisting conceptions about the “better” curriculum. Sometimes this is best achieved by assigning whole schools—rather than individuals or classes—to the treatment and control groups, but this can become very expensive, since appropriate statistical analyses now require about as many schools to participate in a study as the number of classes participating in the simpler design. Interventions are well suited for random assignment if the desired outcomes occur often enough to be observed with a reasonable sample size or study length. Studies of infrequent but not rare outcomes—for example, those occurring about 5 percent of the time—may require moderately large samples (several hundred) to allow the detection of a difference between the experimental and control groups. Because of the practical difficulties of maintaining intact experimental groups over time, randomized experiments are also best suited for assessing outcomes that occur within 1 to 2 years after the intervention, depending on the circumstances. Although an intervention’s key desired outcome may be a social, health, or environmental benefit that takes 10 or more years to fully develop, it may be prohibitively costly to follow a large enough proportion of both experimental groups over that time to ensure reliable results. Evaluators may then rely on intermediate outcomes, such as high-school graduation, as an adequate outcome measure rather than accepting the costs of directly measuring long-term effects on adult employment and earnings. 
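The sample sizes needed for infrequent outcomes can be illustrated with the standard normal-approximation formula for comparing two proportions. The sketch below is a conventional textbook calculation, not a figure from this report; the 5 and 10 percent rates are illustrative.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between
    two proportions (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = NormalDist().inv_cdf(power)           # value for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a doubling of a 5 percent outcome rate to 10 percent requires
# several hundred participants in each experimental group.
print(n_per_group(0.05, 0.10))   # 435
```

This is why studies of outcomes occurring about 5 percent of the time typically need samples of several hundred per group, as the text notes.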
Random assignment is not appropriate for a range of programs in which one cannot meet the requirements that make this strategy effective. They include entitlement programs or policies that apply to everyone, interventions that involve exposure to negative events, or interventions for which the evaluator cannot be sure about the nature of differences between the treatment and control groups’ experiences. For a few types of programs, random assignment to the intervention is not possible. One is when all eligible individuals are exposed to the intervention and legal restrictions do not permit excluding some people in order to form a comparison group. This includes entitlement programs such as veterans’ benefits, Social Security, and Medicare, as well as programs operating under laws and regulations that explicitly prohibit (or require) a particular practice. A second type of intervention for which random assignment is precluded is broadcast media communication where the individual—rather than the researcher—controls his or her exposure (consciously or not). This is true of radio, television, billboard, and Internet programming, in which the individual chooses whether and how long to hear or view a message or communication. To evaluate the effect of advertising or public service announcements in broadcast media, the evaluator is often limited to simply measuring the audience’s exposure to it. However, sometimes it is possible to randomly assign advertisements to distinct local media markets and then compare their effects to other similar but distinct local markets. A third type of program for which random assignment is generally not possible is comprehensive social reforms consisting of collective, coordinated actions by various parties in a community—whether school, organization, or neighborhood. In these highly interactive initiatives, it can be difficult to distinguish the activities and changes from the settings in which they take place. 
For example, some community development partnerships rely on increasing citizen involvement or changing the relationships between public and private organizations in order to foster conditions that are expected to improve services. Although one might randomly assign communities to receive community development support or not, the evaluator does not control who becomes involved or what activities take place, so it is difficult to trace the process that led to any observed effects. Random assignment is often not accepted for testing interventions that prevent or mitigate harm because it is considered unethical to impose negative events or elevated risks of harm to test a remedy’s effectiveness. Thus, one must wait for a hurricane or flood, for example, to learn if efforts to strengthen buildings prevented serious damage. Whether the evaluator is able to randomly apply different approaches to strengthening buildings may depend on whether the approaches appear to be equally likely to be successful in advance of a test. In some cases, the possibility that the intervention may fail may be considered an unacceptable risk. When evaluating alternative treatments for criminal offenders, local law enforcement officers may be unwilling to assign the offenders they consider to be the most dangerous to the less restrictive treatments. As implied by the previous discussion of when random assignment is well suited, it may simply not be practical in a variety of circumstances. It may not be possible to convince program staff to form control groups by simple random assignment if it would deny services to some of the neediest individuals while providing service to some of the less needy. For example, individual tutoring in reading would usually be provided only to students with the lowest reading scores. 
In other cases, the desired outcome may be so rare or take so long to develop that the required sample sizes or prospective tracking of cases over time would be prohibitively expensive. Finally, the evaluation literature cautions that as social interventions become more complex, representing a diverse set of local applications of a broad policy rather than a common set of activities, randomized experiments may become less informative. When how much of the intervention is actually delivered, or how it is expected to work, is influenced by characteristics of the population or setting, one cannot be sure about the nature of the difference between the treatment and control group experiences or which factors influenced their outcomes. Diversity in the nature of the intervention can occur at the individual level, as when counselors draw on their experience to select the approach they believe is most appropriate for each patient. Or it can occur at a group level, as when grantees of federal flexible grant programs focus on different subpopulations as they address the needs of their local communities. In these cases, aggregating results over substantial variability in what the intervention entails may end up providing little guidance on what, exactly, works. In our review of the literature on evaluation research methods, we identified several alternative methods for assessing intervention effectiveness when random assignment is not considered appropriate— quasi-experimental comparison group studies, statistical analyses of observational data, and in-depth case studies. Although experts differed in their opinion of how useful case studies are for estimating program impacts, several other research designs are generally considered good alternatives to randomized experiments, especially when accompanied by specific features that help strengthen conclusions by ruling out plausible alternative explanations. 
Quasi-experimental comparison group designs resemble randomized experiments in comparing the outcomes for treatment and control groups, except that individuals are not assigned to those groups randomly. Instead, unserved members of the targeted population are selected to serve as a control group that resembles the treatment group as much as possible on variables related to the desired outcome. This evaluation design is used with partial coverage programs for which random assignment is not possible, ethical, or practical. It is most successful in providing credible estimates of program effectiveness when the groups are formed in parallel ways and not based on self-selection—for example, by having been turned away from an oversubscribed service or living in a similar neighborhood where the intervention is not available. This approach requires statistical analyses to establish groups’ equivalence at baseline. Regression discontinuity analysis compares outcomes for a treatment and control group that are formed by having scores above or below a cut-point on a quantitative selection variable rather than through random assignment. When experimental groups are formed strictly on a cut-point and group outcomes are analyzed for individuals close to the cut-point, the groups are left otherwise comparable except for the intervention. This technique is used where those considered most “deserving” are assigned to treatment, in order to address ethical concerns about denying services to those in need—for example, when additional tutoring is provided only to children with the lowest reading scores. The technique requires a quantitative assignment variable that users believe is a credible selection criterion, careful control over assignment to ensure that a strict cut-point is achieved, large sample sizes, and sophisticated statistical analysis. 
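The tutoring example can be sketched as a minimal regression discontinuity analysis. The data below are simulated with a built-in 10-point effect for students below the cutoff; a real analysis would use local regression near the cut-point and formal inference.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def rd_estimate(scores, outcomes, cutoff):
    """Regression discontinuity: fit a separate line on each side of the
    cutoff and take the gap between the two predictions at the cutoff
    as the estimated treatment effect."""
    below = [(s, o) for s, o in zip(scores, outcomes) if s < cutoff]
    above = [(s, o) for s, o in zip(scores, outcomes) if s >= cutoff]
    a1, b1 = fit_line([s for s, _ in below], [o for _, o in below])  # treated
    a0, b0 = fit_line([s for s, _ in above], [o for _, o in above])  # untreated
    return (a1 + b1 * cutoff) - (a0 + b0 * cutoff)

# Simulated data: students scoring below 50 receive tutoring, which adds
# 10 points (by construction) to their follow-up test scores.
rng = random.Random(0)
scores = [rng.uniform(0, 100) for _ in range(2000)]
outcomes = [s + (10 if s < 50 else 0) + rng.gauss(0, 5) for s in scores]
effect = rd_estimate(scores, outcomes, 50)   # close to the built-in 10
```

Because assignment depends strictly on the score, students just above and just below the cut-point are otherwise comparable, so the jump between the fitted lines at the cutoff isolates the treatment effect.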
Interrupted time-series analysis compares trends in repeated measures of an outcome for a group before and after an intervention or policy is introduced, to learn if the desired change in outcome has occurred. Long data series are used to smooth out the effects of random fluctuations over time. Statistical modeling of simultaneous changes in important external factors helps control for their influence on the outcome and, thus, helps isolate the impact of the intervention. This approach is used for full-coverage programs in which it may not be possible to form or find an untreated comparison group, such as for change in state laws defining alcohol impairment of motor vehicle drivers (“blood alcohol concentration” laws). But because the technique relies on the availability of comparable information about the past—before a policy changed—it may be limited to use near the time of the policy change. The need for lengthy data series means it is typically used where the evaluator has access to long-term, detailed government statistical series or institutional records. Observational or cross-sectional studies first measure the target population’s level of exposure to the intervention, rather than controlling its exposure, and then compare the outcomes of individuals receiving different levels of the intervention. Statistical analysis is used to control for other plausible influences. Level of exposure to the intervention can be measured by whether one was enrolled or how often one participated or heard the program message. This approach is used with full-coverage programs, for which it is impossible to directly form treatment and control groups; nonuniform programs, in which individuals receive different levels of exposure (such as to broadcast media); and interventions in which outcomes are observed too infrequently to make a prospective study practical.
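The projection logic of an interrupted time series can be sketched briefly. The monthly series below is simulated and noise-free for clarity: crash counts decline steadily, then a stricter law lowers the level by a further 8 per month; the values and effect size are illustrative.

```python
def its_effect(series, break_point):
    """Interrupted time series: fit the pre-intervention trend by least
    squares, project it into the post period, and average the gaps
    between observed and projected values."""
    pre = series[:break_point]
    xs = range(break_point)
    n = break_point
    mx = sum(xs) / n
    my = sum(pre) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, pre))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    gaps = [series[t] - (a + b * t) for t in range(break_point, len(series))]
    return sum(gaps) / len(gaps)

# Simulated monthly crash counts: a steady decline of 0.5 per month, then
# a stricter blood alcohol law lowers the level by a further 8 per month.
series = ([100 - 0.5 * t for t in range(36)]
          + [100 - 0.5 * t - 8 for t in range(36, 72)])
print(its_effect(series, 36))   # -8.0
```

Projecting the pre-existing trend forward controls for the decline that would have occurred anyway, so only the additional drop is attributed to the policy. Real applications also model concurrent external factors, as the text describes.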
For example, an individual’s annual risk of being in a car crash is so low that it would be impractical to randomly assign (and monitor) thousands of individuals to use (or not use) their seat belts in order to assess belts’ effectiveness in preventing injuries during car crashes. Because there is no evaluator control over assignment to the intervention, this approach requires sophisticated statistical analyses to limit the influence of any concurrent events or preexisting differences that may be associated with why people had different exposure to the intervention. Case studies have been recommended for assessing the effectiveness of complex interventions in limited circumstances when other designs are not available. In program evaluation, in-depth case studies are typically used to provide descriptive information on how an intervention operates and produces outcomes and, thus, may help generate hypotheses about program effects. Case studies may also be used to test a theory of change, as when the evaluator specifies in advance the expected processes and outcomes, based on the program theory or logic model, and then collects detailed observations carefully designed to confirm or refute that model. This approach has been recommended for assessing comprehensive reforms that are so deeply integrated with the context (for example, the community) that no truly adequate comparison case can be found. To support credible conclusions about program effects, the evaluator must make specific, refutable predictions of program effects and introduce controls for, or provide strong arguments against, other plausible explanations for observed effects. However, because a single case study most likely cannot provide credible information on what would have happened in the absence of the program, our experts noted that the evaluator cannot use this design to reliably estimate the magnitude of a program’s effect. 
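Outcomes as rare as crash injury are often studied with a case-control design, comparing exposure (here, belt use) between people who experienced the outcome and people who did not, summarized with an odds ratio. The counts below are hypothetical; a real analysis would also adjust statistically for confounders such as crash severity and seating position.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: the odds of exposure among cases
    divided by the odds of exposure among controls. Values below 1
    suggest the exposure is protective."""
    return ((exposed_cases / unexposed_cases)
            / (exposed_controls / unexposed_controls))

# Hypothetical counts: injured occupants (cases) vs. uninjured (controls),
# by seat belt use.
belted_injured, unbelted_injured = 30, 70
belted_uninjured, unbelted_uninjured = 170, 130
print(round(odds_ratio(belted_injured, unbelted_injured,
                       belted_uninjured, unbelted_uninjured), 2))   # 0.33
```

For rare outcomes, the odds ratio approximates the relative risk, which is why this design can substitute for an impractically large prospective experiment.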
Reviewing the literature and consulting with evaluation experts, we identified additional measurement and design features that can help strengthen conclusions about an intervention’s impact from both randomized and nonrandomized designs. In general, they involve collecting additional data and targeting comparisons to help rule out plausible alternative explanations of the observed results. Since all evaluation methods have limitations, our confidence in concluding that an intervention is effective is strengthened when the conclusion is supported by multiple forms of evidence. Although collecting baseline data is an integral component of the statistical approaches to assessing effectiveness discussed above, both experiments and quasi-experiments would benefit from including pretest measures on program outcomes as well as other key variables. First, by chance, random assignment may not produce groups that are equivalent on several important variables known to correlate with program outcomes, so their baseline equivalence should always be checked. Second, in the absence of random assignment, ensuring the equivalence of the treatment and control groups on measures related to the desired outcome is critical. The effects of potential self-selection bias or other preexisting differences between the treatment and control groups can be minimized through selection modeling or “propensity score analysis.” Essentially, one first develops a statistical model of the baseline differences between the individuals in the treatment and comparison groups on a number of important variables and then adjusts the observed outcomes for the initial differences between the groups to identify the net effect of the intervention. Extending data collection either before or after the intervention can help rule out the influence of unrelated historical trends on the outcomes of interest. 
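A full propensity score analysis models each person's probability of receiving treatment from many baseline covariates, typically with logistic regression. The simplified sketch below conveys the core idea with a single hypothetical confounder (motivation): comparing outcomes within strata of similar individuals removes most of the self-selection bias that a naive comparison absorbs. All data are simulated, with a true effect of 5 points built in.

```python
import random
from statistics import mean

def stratified_effect(records, n_strata=5):
    """Propensity-style adjustment by stratification: sort on the
    confounder, compare treated and untreated outcomes within each
    stratum, and average the differences weighted by stratum size.
    Each record is (confounder, treated_flag, outcome)."""
    ordered = sorted(records)
    size = len(ordered) // n_strata
    diffs, weights = [], []
    for i in range(n_strata):
        hi = (i + 1) * size if i < n_strata - 1 else len(ordered)
        stratum = ordered[i * size:hi]
        treated = [o for _, flag, o in stratum if flag]
        untreated = [o for _, flag, o in stratum if not flag]
        if treated and untreated:
            diffs.append(mean(treated) - mean(untreated))
            weights.append(len(stratum))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

# Simulated self-selection: more-motivated people are more likely to
# enroll, and motivation also raises the outcome on its own.
rng = random.Random(1)
records = []
for _ in range(5000):
    motivation = rng.uniform(0, 10)
    enrolled = rng.random() < motivation / 10
    outcome = 2 * motivation + (5 if enrolled else 0) + rng.gauss(0, 1)
    records.append((motivation, enrolled, outcome))

naive = (mean(o for _, t, o in records if t)
         - mean(o for _, t, o in records if not t))
adjusted = stratified_effect(records)   # much closer to the true effect of 5
```

The naive difference roughly doubles the true effect because it also captures the motivation gap between enrollees and non-enrollees; the within-stratum comparison largely nets that gap out, which is the purpose of selection modeling.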
This is in principle similar to interrupted time-series analysis, yielding more observations to allow analysis of trends in outcomes over time in relation to the timing of program activities. For example, one could examine whether the outcome measure began to change before the intervention could plausibly have affected it, in which case the change was probably influenced by some other factor. Another way to attempt to rule out plausible alternative explanations for observed results is to measure additional outcomes that are or are not expected to be influenced by the treatment, based on program theory. If one can predict a relatively unique pattern of expected outcomes for the intervention, in contrast to an alternative explanation, and if the study confirms that pattern, then the alternative explanation becomes less plausible. In comparison group studies, the nature of the effect one detects is defined by the nature of the differences between the experiences of the treatment and control groups. For example, if the comparison group receives no assistance at all in gaining employment, then the evaluation can detect the full effect of all the employment assistance (including child care) the treatment group receives. But if the comparison group also receives child care, then the evaluation can detect only the effect, or value added, of employment assistance above and beyond the effect of child care. Thus, one can carefully design comparisons to target specific questions or hypotheses about what is responsible for the observed results and control for specific threats to validity. For example, in evaluating the effects of providing new parents of infants with health consultation and parent training at home, the evaluator might compare them to another group of parents receiving only routine health check-ups to control for the level of attention the first group received and test the value added by the parent training. 
Sometimes the evaluator can capitalize on natural variations in exposure to the intervention and analyze the patterns of effects to learn more about what is producing change. For example, little or no change in outcomes for dropouts—participants who left the program—might reflect either the dropouts’ lower levels of motivation compared to other participants or their reduced exposure to the intervention. But if differences in outcomes are associated with different levels of exposure for administrative reasons (such as scheduling difficulties at one site), then those differences may be more likely to result from the intervention itself. As reflected in all the review initiatives we identified for this report, conclusions drawn from findings across multiple studies are generally considered more convincing than those based on a single study. The two basic reasons for this are that (1) each study is just one example of many potential experiences with an intervention, which may or may not represent that broader experience, and (2) each study employs one particular set of methods to measure an intervention’s effect, which may be more or less likely than other methods to detect an effect. Thus, an analysis that carefully considers the results of diverse studies of an intervention is more likely to accurately identify when and for whom an intervention is effective. A recurring theme in the evaluation literature is the tradeoffs made in constructing studies to rigorously identify program impact by reducing the influence of external factors. Studies of interventions tested in carefully controlled settings, a homogeneous group of volunteer participants, and a comparison group that receives no services at all may not accurately portray the results that can be expected in more typical operations.
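One common way reviewers combine findings across studies is inverse-variance weighting, in which more precise studies count for more. A minimal fixed-effect sketch with hypothetical study results (real syntheses also test for heterogeneity across studies):

```python
from math import sqrt

def pooled_effect(studies):
    """Fixed-effect meta-analysis: weight each study's effect estimate
    by the inverse of its variance and pool. Each study is given as
    (effect_estimate, standard_error)."""
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same intervention: (effect, SE).
studies = [(0.30, 0.10), (0.10, 0.05), (0.25, 0.20)]
effect, se = pooled_effect(studies)
# The pooled estimate sits closest to the most precise study (SE 0.05),
# and the pooled SE is smaller than any single study's SE.
```

Pooling in this way reflects the report's point that a result replicated across studies, settings, and populations is more convincing than any single estimate.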
To obtain a comprehensive, realistic picture of intervention effectiveness, reviewing the results of several studies conducted in different settings and populations, or large multisite studies, may help ensure that the results observed are likely to be found, or replicated, elsewhere. This is particularly important when the characteristics of settings, such as different state laws, are expected to influence the effectiveness of a policy or practice applied nationally. For example, states set limits on how much income a family may have while receiving financial assistance, and these limits—which vary considerably from state to state—strongly influence the proportion of a state’s assistance recipients who are currently employed. Thus, any federal policy regarding the employment of recipients is likely to affect one state’s caseload quite differently from that of another. Because every research method has inherent limitations, it is often advantageous to combine multiple measures or two or more designs in a study or group of studies to obtain a more comprehensive picture of an intervention. In addition to choosing whether to measure intermediate or long-term outcomes, evaluators may choose to collect, for example, student self-reports of violent behavior, teacher ratings of student disruptive behavior, or records of school disciplinary actions or referrals to the criminal justice system, which might yield different results. While randomized experiments are considered best suited for assessing intervention impact, blended study designs can provide supplemental information on other important considerations of policy makers. For example, an in-depth case study of an intervention could be added to develop a deeper understanding of its costs and implementation requirements or to track participants’ experiences to better understand the intervention’s logic model.
Alternatively, a cross-sectional survey of an intervention’s participants and activities can help in assessing the extent of its reach to important subpopulations. The Coalition provides a valuable service in encouraging government adoption of interventions with evidence of effectiveness and in drawing attention to the importance of evaluation quality in assessing that evidence. Reliably assessing the credibility of evaluation results requires expertise in research design and measurement, and reliability can be improved by providing reviewers with detailed guidance and training. The Top Tier initiative provides another useful model in that it engages experienced evaluation experts to make these quality assessments. Requiring evidence from randomized experiments as the sole proof of an intervention’s effectiveness is likely to exclude many potentially effective and worthwhile practices for which random assignment is not practical. The broad range of studies assessed by the six federally supported initiatives we examined demonstrates that other research designs can provide rigorous evidence of effectiveness if designed well and implemented with a thorough understanding of their vulnerability to potential sources of bias. Assessing the importance of an intervention’s outcomes entails drawing a judgment from subject matter expertise—the evaluator must understand the nature of the intervention, its expected effects, and the context in which it operates. Defining the outcome measures of interest in advance, in consultation with program stakeholders and other interested audiences, may help ensure the credibility and usefulness of a review’s results. Deciding to adopt an intervention involves additional considerations—cost, ease of use, suitability to the local community, and available resources. Thus, practitioners will probably want information on these factors and on effectiveness when choosing an approach.
A comprehensive understanding of which practices or interventions are most effective for achieving specific outcomes requires a synthesis of credible evaluations that compares the costs and benefits of alternative practices across populations and settings. The ability to identify effective interventions would benefit from (1) better designed and implemented evaluations, (2) more detailed reporting on both the interventions and their evaluations, and (3) more evaluations that directly compare alternative interventions. The Coalition for Evidence-Based Policy provided written comments on a draft of this report, reprinted in appendix II. The Coalition stated it was pleased with the report’s key findings on the transparency of its process and its adherence to rigorous standards in assessing research quality. While acknowledging the complementary value of well-conducted nonrandomized studies as part of a research agenda, the Coalition believes the report somewhat overstates the confidence one can place in such studies alone. The Coalition and the Departments of Education and Health and Human Services provided technical comments that were incorporated as appropriate throughout the text. The Department of Justice had no comments. We are sending copies of this report to the Secretaries of Education, Justice, and Health and Human Services; the Director of the Office of Management and Budget; and appropriate congressional committees. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report, please contact me at (202) 512-2700 or [email protected]. Contacts for our offices of Congressional Relations and Public Affairs are on the last page. Key contributors are listed in appendix III.

Nancy Kingsbury, Ph.D.
Managing Director
Applied Research and Methods
[Excerpt from an appendix table comparing the initiatives' evidence standards; rows 1 and 3 and some cells were not recovered:]

- Selects randomized and quasi-experimental studies and observational studies (e.g., cohort, case control). The body of evidence on each outcome is scored on four domains: risk of bias, consistency, directness, and precision of effects. Strength of evidence for each outcome is classified as high, moderate, ...
- 2. Guide to Community Preventive Services at the Centers for Disease Control and Prevention. Selects randomized and quasi-experimental studies and observational studies (e.g., time series, case control). Assesses validity and reliability of outcome measures, data analysis and reporting, and assessment of harm.
- 4. Model Programs Guide at the Office of Juvenile Justice and Delinquency Prevention. Selects randomized and quasi-experimental studies with one or more positive outcomes and documentation of program implementation (fidelity). Summary research quality ratings (0–4) are provided for statistically significant outcomes; interventions themselves are not rated.

In addition to the person named above, Stephanie Shipman, Assistant Director, and Valerie Caracelli made significant contributions to this report.
Recent congressional initiatives seek to focus funds for certain federal social programs on interventions for which randomized experiments show sizable, sustained benefits to participants or society. The private, nonprofit Coalition for Evidence-Based Policy undertook the Top Tier Evidence initiative to help federal programs identify interventions that meet this standard.

The Government Accountability Office (GAO) was asked to examine (1) the validity and transparency of the Coalition's process, (2) how its process compared to that of six federally supported efforts to identify effective interventions, (3) the types of interventions best suited for assessment with randomized experiments, and (4) alternative rigorous methods used to assess effectiveness. GAO reviewed documents, observed the Coalition's advisory panel deliberate on interventions meeting its top tier standard, and reviewed other documents describing the processes the federally supported efforts had used. GAO reviewed the literature on evaluation methods and consulted experts on the use of randomized experiments.

The Coalition generally agreed with the findings. The Departments of Education and Health and Human Services provided technical comments on a draft of this report. The Department of Justice provided no comments.

The Coalition's Top Tier Evidence initiative criteria for assessing evaluation quality conform to general social science research standards, but other features of its overall process differ from common practice for drawing conclusions about intervention effectiveness. The Top Tier initiative clearly describes how it identifies candidate interventions but is not as transparent about how it determines whether an intervention meets the top tier criteria. In the absence of detailed guidance, the panel defined sizable and sustained effects through case discussion. Over time, it increasingly obtained agreement on whether an intervention met the top tier criteria.
The major difference in rating study quality between the Top Tier and the six other initiatives examined is a product of the Top Tier standard as set out in certain legislative provisions: the other efforts accept well-designed, well-conducted, nonrandomized studies as credible evidence. The Top Tier initiative's choice of broad topics (such as early childhood interventions), emphasis on long-term effects, and use of narrow evidence criteria combine to provide limited information on what is effective in achieving specific outcomes. The panel recommended only 6 of 63 interventions reviewed as providing "sizeable, sustained effects on important outcomes." The other initiatives acknowledge a continuum of evidence credibility by reporting an intervention's effectiveness on a scale of high to low confidence. The program evaluation literature generally agrees that well-conducted randomized experiments are best suited for assessing effectiveness when multiple causal influences create uncertainty about what caused results. However, they are often difficult, and sometimes impossible, to carry out. An evaluation must be able to control exposure to the intervention and ensure that treatment and control groups' experiences remain separate and distinct throughout the study. Several rigorous alternatives to randomized experiments are considered appropriate for other situations: quasi-experimental comparison group studies, statistical analyses of observational data, and--in some circumstances--in-depth case studies. The credibility of their estimates of program effects relies on how well the studies' designs rule out competing causal explanations. Collecting additional data and targeting comparisons can help rule out other explanations. 
GAO concludes that (1) requiring evidence from randomized studies as sole proof of effectiveness will likely exclude many potentially effective and worthwhile practices; (2) reliable assessments of evaluation results require research expertise but can be improved with detailed protocols and training; (3) deciding to adopt an intervention involves other considerations in addition to effectiveness, such as cost and suitability to the local community; and (4) improved evaluation quality would also help identify effective interventions.
Students with limited English proficiency are a diverse and complex group. They speak many languages and have a tremendous range of educational needs, ranging from refugees with little formal schooling to students who are literate in their native languages. Accurately assessing the academic knowledge of these students in English is challenging: if a student responds incorrectly to a test item, it may not be clear whether the student did not know the answer or misunderstood the question because of language barriers. Several approaches are available to allow students to demonstrate their academic knowledge while they are becoming proficient in English, although each poses challenges. First, a state can offer assessments in a student's native language. However, vocabulary in English is not necessarily equivalent in difficulty to the vocabulary in another language, so a test translated from English may not have the same level of difficulty as the English version. If a state chooses to develop a completely different test in another language instead of translating the English version, the assessment should measure the same standards and reflect the same level of difficulty as the English version of the test to ensure its validity. Second, states can offer accommodations, such as providing extra time to take a test, allowing the use of a bilingual dictionary, reading test directions aloud in a student's native language, or administering the test in a less distracting environment. Accommodations alter the way a regular assessment is administered, with the goal of minimizing the language impediments faced by students with limited English proficiency; they are intended to level the playing field without providing an unfair advantage to these students. Finally, states can use alternate assessments that measure the same things as the regular assessment while minimizing the language burden placed on the student.
For example, an alternate assessment can be a traditional standardized test that uses simplified English or relies more on pictures and diagrams. It can also be a portfolio of a student’s class work that demonstrates academic knowledge. In either case, studies would be needed to demonstrate that the alternate assessment is equivalent to the regular assessment. Title I of NCLBA seeks to ensure that all children have a fair and equal opportunity to obtain a high-quality education and become proficient in academic subjects. It requires states to administer tests in language arts and mathematics to all students in certain grades and to use these tests as the primary means of determining the annual performance of states, districts, and schools. These assessments must be aligned with the state’s academic standards—that is, they must measure how well a student has demonstrated his or her knowledge of the academic content represented in these standards. States are to show that increasing percentages of students are reaching the proficient level on these state tests over time. NCLBA also requires that students with limited English proficiency receive reasonable accommodations and be assessed, to the extent practicable, in the language and form most likely to yield accurate data on their academic knowledge. Somewhat similar versions of these provisions, such as reporting testing results for different student groups, had been included in legislation enacted in 1994. One new NCLBA requirement was for states to annually assess the English language proficiency of students identified as having limited English proficiency. Table 1 summarizes some key Title I provisions from NCLBA. Accurately assessing the academic knowledge of students with limited English proficiency has become more critical because NCLBA designated specific groups of students for particular focus. 
These four groups are students who (1) are economically disadvantaged, (2) represent major racial and ethnic groups, (3) have disabilities, and (4) are limited in English proficiency. These groups are not mutually exclusive, so the results for a student who is economically disadvantaged, is Hispanic, and has limited English proficiency could be counted in all three groups. States and school districts are required to measure the progress of all students in meeting academic proficiency goals, as well as to measure separately the progress of these designated groups. To be deemed to have made adequate yearly progress, generally each district and school must show that each of these groups met the state proficiency goal (that is, the percentage of students who have achieved the proficient level on the state's assessments) and that at least 95 percent of students in each designated group participated in these assessments.

Although NCLBA placed many new requirements on states, states have broad discretion in many key areas. States establish their academic content standards and then develop their own tests to measure the academic content students are taught in school. States also set their own standards for what constitutes proficiency on these assessments. In addition, states set their own annual progress goals for the percentage of students achieving proficiency, using guidelines outlined in NCLBA.

Title III of NCLBA focuses specifically on students with limited English proficiency, with the purpose of ensuring that these students attain English proficiency and meet the same academic content standards all students are expected to meet. This title established new requirements intended to hold states and districts accountable for student progress in attaining English proficiency. It requires states to establish goals to demonstrate, among other things, annual increases in (1) students making progress in learning English and (2) students attaining English proficiency.
Specifically, states must establish English language proficiency standards that are aligned with a state's academic content standards. The purpose of these alignment requirements is to ensure that students are acquiring the academic language they will need to successfully participate in the classroom. Education also requires that a state's English language proficiency assessment be aligned to its English language proficiency standards. While NCLBA requires states to administer academic assessments only to students in specific grades, it requires states to administer an annual English language proficiency assessment to all students with limited English proficiency, from kindergarten to grade 12. See table 2 for a summary of key Title III provisions. Language arts standards define the academic skills a student is expected to master, while English language proficiency standards define progressive levels of competence in the acquisition of English necessary to participate successfully in the classroom. Examples of standards for English language proficiency and language arts are provided in table 3.

Under NCLBA, states, districts, and schools have two sets of responsibilities for students with limited English proficiency. As shown in figure 1, they are responsible for ensuring that these students make progress in learning English under Title III and that they become proficient in language arts and mathematics under Title I. Beginning with the 2004-2005 school year, Education is required to annually review whether states have made adequate yearly progress (as defined by the state) for each of the student groups and have met their objectives for increasing the number or percentage of students who become proficient in English.

NCLBA's emphasis on validity and reliability reflects the fact that these concepts are among the most important in test development. Validity refers to whether the test measures what it is intended to measure.
Reliability refers to whether a test yields consistent results across time and location and among different sections of the test. A test cannot be considered valid if it is unreliable. The Standards for Educational and Psychological Testing provide universally accepted guidance for the development and evaluation of high-quality, psychometrically sound assessments. They outline specific standards to be considered when assessing individuals with limited English proficiency, including (1) determining when language differences produce threats to the validity and reliability of test results, (2) providing information on how to use and interpret results when tests are used with linguistically diverse individuals, and (3) collecting the same evidence to support claims of validity for each linguistic subgroup as was collected for the population as a whole.

Test development begins with determining the purpose of the test and the content to be measured by the test. NCLBA outlines several purposes of statewide assessments, including determining the yearly performance of schools and districts, interpreting individual student academic needs, and tracking the achievement of several groups of students. NCLBA requires that the content of statewide assessments reflect state standards in language arts and mathematics, but the specific skills measured can vary from state to state. For example, a language arts assessment could measure a student's knowledge of vocabulary or ability to write a persuasive essay. Variations in purpose and content affect test design, as well as the analyses necessary to determine validity and reliability. After determining the purpose and content of the test, developers create test specifications, which delineate the format of the questions and responses, as well as the scoring procedures.
Specifications may also indicate additional information, such as the intended difficulty of questions, the student population that will take the test, and the procedures for administering the test. These specifications subsequently guide the development of individual test questions. The quality of the questions is usually ascertained through review by knowledgeable educators and statistical analyses based on a field test of a sample of students—ideally the sample is representative of the overall target student population so the results will reflect how the questions will function when the test is administered to the population. These reviews typically evaluate a question’s quality, clarity, lack of ambiguity, and sometimes its sensitivity to gender or cultural issues; they are intended to ensure that differences in student performance are related to differences in student knowledge rather than other factors, such as unnecessarily complex language. Once the quality has been established, developers assemble questions into a test that meets the requirements of the test specifications. Developers often review tests after development to ensure that they continue to produce accurate results. Education has responsibility for general oversight of Titles I and III of NCLBA. The department’s Office of Elementary and Secondary Education oversees states’ implementation of Title I requirements with respect to academic assessments and making adequate progress toward achieving academic proficiency for all students by 2014. Education’s Office of English Language Acquisition, Language Enhancement and Academic Achievement for Limited English Proficient Students oversees states’ Title III responsibilities, which include administering annual English language proficiency assessments to students with limited English proficiency and demonstrating student progress in attaining English language proficiency. 
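As a concrete illustration of the field-test statistics and reliability concepts described above, the sketch below computes item difficulty (the proportion of students answering correctly), item discrimination (the correlation between an item and the rest of the test), and one common internal-consistency index, Cronbach's alpha. This is a minimal, hypothetical example, not any state's actual procedure: the score matrix, function names, and thresholds are invented, and real test developers use considerably more elaborate analyses.

```python
import math

# Hypothetical field-test data: rows = students, columns = test items,
# 1 = correct, 0 = incorrect. All figures are illustrative.

def _var(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def _corr(x, y):
    """Pearson correlation of two equal-length score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = math.sqrt(_var(x) * _var(y)) * len(x)
    return cov / denom if denom else 0.0

def item_analysis(scores):
    """Per-item difficulty and discrimination statistics."""
    n, k = len(scores), len(scores[0])
    totals = [sum(row) for row in scores]
    stats = []
    for i in range(k):
        item = [row[i] for row in scores]
        difficulty = sum(item) / n                       # proportion correct
        rest = [totals[s] - item[s] for s in range(n)]   # score on other items
        stats.append({"difficulty": difficulty,
                      "discrimination": _corr(item, rest)})
    return stats

def cronbach_alpha(scores):
    """Internal-consistency reliability of the whole test."""
    k = len(scores[0])
    item_vars = sum(_var([row[i] for row in scores]) for i in range(k))
    total_var = _var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
stats = item_analysis(scores)     # e.g. stats[0]["difficulty"] == 0.6
alpha = cronbach_alpha(scores)    # roughly 0.70 for this tiny sample
```

In practice, reviewers would flag items whose difficulty or discrimination falls outside target ranges, which is one way differences in performance unrelated to student knowledge can be detected.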
In school year 2003-2004, the percentage of students with limited English proficiency reported by states as scoring proficient on a state’s language arts and mathematics tests was lower than the state’s annual progress goals (established for all students) in nearly two-thirds of the 48 states for which we obtained data. Further, data from state mathematics tests showed that these students were generally achieving lower rates of academic proficiency than the total student population. However, factors other than student academic performance can influence whether a state meets its progress goals, such as which students a state includes in the limited English proficient group and how a state establishes its annual progress goals. Officials in our study states reported using several common approaches, including providing teacher training specific to the needs of limited English proficient students and using data to guide instruction and identify areas for improvement. In nearly two-thirds of the 48 states for which we obtained data, state data showed that the percentage of students with limited English proficiency scoring proficient on language arts and mathematics tests was below the annual progress goal set by the state for school year 2003-2004. Students with limited English proficiency met academic progress goals in language arts and mathematics in 17 states. In 31 states, state data indicated that these students missed the goals either for language arts or for both language arts and mathematics (see fig. 2). In 21 states, the percentage of proficient students in this group was below both the mathematics and the language arts proficiency goals. See appendix II for information on how adequate yearly progress measures are calculated. We also obtained additional data from 18 states to determine whether districts were meeting annual progress goals for students with limited English proficiency in school year 2003-2004. 
In 14 of the 18 states, however, we found that less than 40 percent of the districts in each state reported separate results for this group of students (see fig. 3). Districts only have to report progress results for a student group if a minimum number of students are included in the group. In Nebraska, for example, only 4 percent of districts reported progress goals for students with limited English proficiency. Except for Florida, Hawaii, and Nevada, less than half of the districts in each state reported separate results for this group of students. Even when districts do not have to report on students with limited English proficiency, however, the test scores for these students are included in the state’s overall progress measures. For those districts that reported results for students with limited English proficiency, district-level data showed that most districts in 13 of the 18 states met their mathematics progress goals for these students. For example, 67 percent of reporting districts in Nebraska and 99 percent of reporting districts in Texas met the state’s goals. In 4 states, less than half of the districts reporting results for these students met the state mathematics progress goals. Specifically, 26 percent of Alaska districts, 33 percent of Nevada districts, 48 percent of Oregon districts, and 48 percent of Florida districts met these goals. (See app. III for results from each of the 18 states.) In addition to looking at whether students with limited English proficiency met annual progress goals at the state and district level, we also examined achievement levels on state assessments for this group of students compared with the total student population (which also includes students with limited English proficiency). 
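The decision rules described in this section (separate reporting only above a minimum group size, at least 95 percent participation, and the state's proficiency goal) can be sketched in a few lines. This is an illustrative simplification rather than any state's actual procedure; the function name, the 30-student minimum, and the sample figures are all hypothetical.

```python
def group_makes_ayp(n_students, n_tested, n_proficient,
                    proficiency_goal, min_n=30):
    """Return (reportable, made_ayp) for one student group.

    proficiency_goal is the state's annual goal as a fraction (e.g. 0.45).
    Groups smaller than min_n are not reported separately, although their
    scores still count in the state's overall progress measures.
    """
    if n_students < min_n:
        return False, None          # too small to report separately
    participation = n_tested / n_students
    proficient_rate = n_proficient / n_tested
    made_ayp = participation >= 0.95 and proficient_rate >= proficiency_goal
    return True, made_ayp

# A group of 120 students, 118 tested, 60 proficient, against a 45% goal:
# participation is about 0.98 and the proficient rate about 0.51, so the
# group is reportable and makes adequate yearly progress.
reportable, made = group_makes_ayp(120, 118, 60, 0.45)
```

Because states set different minimum group sizes, the share of districts that report separate results for students with limited English proficiency can vary widely, as the figures above show.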
Looking at mathematics results reported by 49 states to Education, for example, in all but one state, we found that a lower percentage of students with limited English proficiency at the elementary school level achieved proficient scores, compared to the total student population in school year 2003-2004 (see app. IV for the results reported by the 49 states). Twenty-seven states reported that the total student population outperformed students with limited English proficiency by 20 percentage points or more. The differences among groups in the percentage of students achieving proficient scores varied across states. South Dakota, for example, reported a large achievement gap, with 37 percent of limited English proficient students scoring at the proficient level, compared to 78 percent for the entire student population. The gap was less pronounced in Texas, where 75 percent of students with limited English proficiency achieved proficient scores on the mathematics assessment, while 85 percent of the total student population did. In Louisiana, these students performed about the same as the total student population, with 58 percent of limited English proficient students scoring at the proficient level on the elementary mathematics assessment, compared to 57 percent of the total student population. We also found that, in general, a lower percentage of students with limited English proficiency achieved proficient test scores than other selected student groups (see table 4). All of the 49 states reported that these students achieved lower rates of proficiency than white students. The performance of limited English proficient students relative to the other student groups varied. In 37 states, for example, economically disadvantaged students outperformed students with limited English proficiency, while students with disabilities outperformed these students in 14 states. In 12 states, all the selected student groups outperformed students with limited English proficiency. 
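The percentage-point gaps quoted above can be computed directly from the reported proficiency rates. The sketch below uses only the three states' figures cited in the text; it is a simple illustration, not part of the underlying analysis.

```python
# Percentage-point gap between the total student population and students
# with limited English proficiency (LEP) scoring proficient on the
# elementary mathematics assessment, school year 2003-2004, using the
# figures quoted in the text.
results = {                        # state: (total population %, LEP %)
    "South Dakota": (78, 37),
    "Texas": (85, 75),
    "Louisiana": (57, 58),
}
gaps = {state: total - lep for state, (total, lep) in results.items()}
# South Dakota's gap is 41 points; Louisiana's is -1 (LEP slightly higher).
```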
Factors beyond student performance can influence the number of states, districts, and schools meeting progress goals for students with limited English proficiency. One factor that can affect a state or district’s ability to meet progress goals for this student group is the criteria states use to determine which students are counted as limited English proficient. Some states define limited English proficiency so that students may be more likely to stay in the group for a longer time, giving them more of an opportunity to develop the language skills necessary to demonstrate their academic knowledge on state academic assessments administered in English. On the basis of our review of state accountability plans, we found that some states removed students from the group after they have achieved proficiency on the state’s English language proficiency assessment, while other states continued to include these students until they met additional academic requirements, such as achieving proficient scores on the state’s language arts assessment. A number of states measured adequate yearly progress for students with limited English proficiency by including test scores for students for a set period of time after they were considered proficient in English, following Education’s policy announcement in February 2004 allowing such an approach. How rigorously a state defines the proficient level of academic achievement can also influence the ability of states, districts, and schools to meet their progress goals. States with less rigorous definitions of proficiency are more likely to meet their progress goals for students with limited English proficiency or any other student group than states with more stringent definitions. Comparing the performance of students from different states on a nationally administered assessment suggests that states differ in how rigorously they define proficiency. 
For example, eighth-grade students in Colorado and Missouri achieved somewhat similar scores on the National Assessment of Educational Progress in mathematics in 2003. Specifically, 34 percent of Colorado students scored proficient or above on this national assessment compared to 28 percent of Missouri students. On their own state assessments in 2003, however, 67 percent of Colorado students scored proficient or above, compared to just 21 percent in Missouri. These results may reflect, among other things, a difference in the level of rigor in the tests administered by these states. However, they may also be due in part to differences in what the national test measures versus what each of the state tests measures. The likelihood of a state, district, or school meeting its annual progress goals also depends, in part, on the proficiency levels of its students when NCLBA was enacted, as well as how the state sets its annual goals. States vary significantly in the percentage of students scoring at the proficient level on their academic assessments, so that states with lower proficiency levels must, on average, establish larger annual increases in proficiency levels to meet the 2014 goal. Some states planned for large increases every 2 to 3 years, while others set smaller annual increases. States that established smaller annual increases in their initial proficiency goals may be more likely to meet their progress goals at this time, compared with states that set larger annual increases. The use of statistical procedures, such as confidence intervals, can also affect whether a state, district, or school meets its progress goals. Education officials said that states use such procedures to improve the reliability of determinations about the performance of districts. According to some researchers, such methods may improve the validity of results because they help to account for the effect of small group sizes and year-to-year changes in student populations.
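One common formulation of such a confidence-interval adjustment builds an interval around the state's goal and credits a group whose observed proficiency rate is not significantly below it. The sketch below uses a 95 percent normal-approximation interval for a proportion; actual state procedures differ, and the function name and figures are illustrative only.

```python
import math

def meets_goal_with_ci(n_tested, n_proficient, goal, z=1.96):
    """True if the group's observed proficiency rate falls within a
    normal-approximation confidence interval around the state goal.

    goal is the annual progress goal as a fraction (e.g. 0.45);
    z = 1.96 corresponds to a 95 percent interval.
    """
    p_hat = n_proficient / n_tested
    margin = z * math.sqrt(goal * (1 - goal) / n_tested)
    return p_hat >= goal - margin

# A small group of 40 students, 16 proficient (40%), against a 45% goal:
# the margin is about 15 points, so the group meets the goal even though
# its point estimate falls short.
meets = meets_goal_with_ci(40, 16, 0.45)
```

Note that the same 40 percent rate in a group of 400 students would fail, because the margin shrinks with group size; this is how the procedure accounts for the effect of small groups.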
Most states currently use some type of confidence interval to determine if a state or district has met its progress goals, according to the Center on Education Policy. A confidence interval establishes a range of proficiency levels around a state’s annual progress goal. If the percentage of students with limited English proficiency scoring proficient on a state’s academic assessments falls within that range, that group has made the annual progress goal. To help students with limited English proficiency progress academically, state and district officials in our 5 study states reported using somewhat similar strategies, many of which are also considered good practices for all students. Among the key factors cited by state and district officials for their success in working with this group were strong professional development focused on effective teaching strategies for students with limited English proficiency; school or district leadership that focuses on the needs of these students, such as providing sufficient resources to meet those needs and establishing high academic standards for these students; “data-driven” decisions, such as using data strategically to identify students who are doing well and those who need more help, to identify effective instructional approaches, or to provide effective professional development; and efforts to work with parents to support the academic progress of their children. These approaches are similar to those used by “blue ribbon” schools—schools identified by Education as working successfully with all students to achieve strong academic outcomes. The qualities shared by these blue ribbon schools include professional development related to classroom instruction, strong school leadership and a vision that emphasizes high academic expectations and academic success for all students, using data to target instructional approaches, and parental involvement.
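The confidence-interval determination described above can be illustrated with a short sketch. This is a hypothetical simplification, assuming a normal-approximation interval around the goal; actual state formulas, confidence levels, and rounding rules vary, and the function name and figures below are invented for illustration only.

```python
import math

def meets_goal(n_proficient, n_tested, goal, z=1.96):
    """Illustrative adequate-yearly-progress check using a confidence
    interval around the annual goal (hypothetical simplification).

    The interval widens for smaller groups, so a small student group
    whose observed proficiency rate falls somewhat short of the goal
    can still be credited with meeting it.
    """
    observed = n_proficient / n_tested
    # Normal-approximation standard error, evaluated at the goal proportion.
    se = math.sqrt(goal * (1 - goal) / n_tested)
    lower_bound = goal - z * se
    return observed >= lower_bound
```

For example, a group of 40 students with 21 proficient (52.5 percent) could be credited with meeting a 60 percent goal because the small group size widens the interval, while a group of 400 students with the same 52.5 percent rate would not, illustrating how such procedures account for the effect of small group sizes.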
While many blue ribbon schools have a high percentage of disadvantaged students, including those with limited English proficiency, their common approaches help them achieve student outcomes that place them among the top 10 percent of all schools in the state or that demonstrate dramatic improvement. Officials in all 5 of our study states stressed the importance of providing teachers with the training they need to work effectively with students with limited English proficiency. For example, state officials in North Carolina told us that they are developing a statewide professional development program to train mainstream teachers to present academic content material so that it is more understandable to students with limited English proficiency and to incorporate language development while teaching subjects such as mathematics and science. In one rural North Carolina school district where students with limited English proficiency have only recently become a large presence, district officials commented that this kind of professional development has helped teachers become more comfortable with these students and given them useful strategies to work more effectively with them. In 4 of our study states, officials emphasized the need for strong school or district leadership that focuses on the needs of students with limited English proficiency. For example, officials in a California school district with a high percentage of students with limited English proficiency told us that these students are a district priority and that significant resources are devoted to programs for them. The district administration has instilled the attitude that students with limited English proficiency can meet high expectations and are the responsibility of all teachers. 
To help maintain the focus on these students, the district has created an English language development progress profile to help teachers track the progress of each student in acquiring English and meeting the state’s English language development standards. In addition, officials in 4 of our study states attributed their success in working with students with limited English proficiency to using data strategically, for example, to identify effective practices and guide instruction. At one California school we visited, officials reviewed test scores to identify areas needing improvement for different classes and different student groups and to identify effective practices. In addition, they reviewed test data for each student to identify areas of weakness. If test data showed that a student was having trouble with vocabulary, the teacher would work in class to build the student’s vocabulary. Similarly, officials in a New York school reported that they followed student test scores over 3 years to set goals for different student groups and identify areas in need of improvement. Officials in 3 states we visited also cited the importance of involving parents of students with limited English proficiency in their children’s education. In Nebraska, for example, a technical assistance agency implemented a family literacy program to help parents and their children improve their English, and also to involve parents in their children’s education. The program showed parents how they can help children with their homework and the importance of reading to their children in their native language to develop their basic language skills. At a New York middle school, officials told us that they use a parent coordinator to establish better communication with families, learn about issues at home that can affect the student’s academic performance, and help families obtain support services, if needed. 
For academic assessments in language arts and mathematics, officials in the 5 states we studied reported that they have taken some steps, such as reviewing test items to eliminate unnecessarily complex language, to address the specific challenges associated with assessing students with limited English proficiency. However, Education recently reviewed the assessment documentation of 38 states and noted some concerns related to using these assessments for students with limited English proficiency. Our group of experts also indicated that states are generally not taking the appropriate set of comprehensive steps to create assessments that produce valid and reliable results for students with limited English proficiency. To increase the validity and reliability of assessment results for this population, most states offered accommodations, such as providing extra time to complete the assessment and offering native language assessments. However, offering accommodations may or may not improve the validity of test results, as research on the appropriate use of accommodations for these students is lacking. In addition, native language assessments are not appropriate for all students with limited English proficiency and are difficult and expensive to develop. Officials in the 5 states we studied reported taking some steps to address the specific challenges associated with assessing students with limited English proficiency in language arts and mathematics. Officials in 4 of these states reported following generally accepted test development procedures when developing their academic assessments, while a Nebraska official reported that the state expects districts to follow such procedures when developing their tests. 
Test development involves a structured process with specific steps; however, additional steps and special attention to language issues are required when developing a test that includes students with limited English proficiency to ensure that the results are valid and reliable for these students. As the Standards for Educational and Psychological Testing notes, for example, the test instructions or the response format may need to be modified to ensure that the test provides valid information about the skills of students with limited English proficiency. Officials in 2 states and at several testing companies mentioned that they have been focusing more on the needs of these students in recent years. Officials in California, New York, North Carolina, and Texas told us that they try to implement the principles of universal design, which support making assessments accessible to the widest possible range of students. This is done by ensuring, among other things, that instructions, forms, and questions are clear and not more linguistically complex than necessary. In addition, officials in all 5 states we studied told us they included students with limited English proficiency in the field testing of assessments. North Carolina officials reported that they oversample for students with limited English proficiency to ensure that these students are adequately represented in the field tests. Another step officials in some states reported taking is assembling panels or committees to review test items for bias and testing data for bias related to a student’s English proficient status. For example, Texas and North Carolina officials reported creating review committees to ensure that test items are accessible to students with limited English proficiency. 
Specifically, when developing mathematics items, these states try to make the language as clear as possible to ensure that the item is measuring primarily mathematical concepts and to minimize the extent to which it is measuring language proficiency. A mathematics word problem involving subtraction, for example, might refer to fish rather than barracuda. Officials in 4 of our study states told us they used a statistical approach to evaluate test items for bias against specific student groups, and 3 of these reported using it to detect bias related to students with limited English proficiency. However, this type of analysis can be used only when a relatively large number of students in the specific group are taking the test. Members of our expert group recommended the use of this technique for states with a large enough number of students with limited English proficiency; however, one member noted that this technique may not be appropriate if a state’s population of students with limited English proficiency is diverse but is treated as homogeneous in the analyses. Some of our study states also reported including experts on limited English proficiency or English as a second language (ESL) issues in the development and review of test items, although only 1 reported involving them in all aspects of test development. In North Carolina, for example, officials told us that ESL teachers and supervisors are involved in reviewing all aspects of the test development process, including item writing, field testing, and operational testing. Some state officials also told us that they included education staff involved with students with limited English proficiency in the development of assessments. Education’s recent NCLBA peer reviews of 38 states found that 25 did not provide sufficient evidence on the validity or reliability of results for students with limited English proficiency, although states have been required to include these students in their assessments since 1994.
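The statistical item-bias screening described above is commonly known as differential item functioning (DIF) analysis. The report does not name the specific statistic states used, so the Mantel-Haenszel common odds ratio sketched below is an assumed example: it compares how often two groups of students at the same overall score level answer an item correctly, and a ratio far from 1.0 flags the item for committee review.

```python
from collections import defaultdict

def mantel_haenszel_odds_ratio(records):
    """Mantel-Haenszel common odds ratio for one test item (illustrative
    sketch of a DIF screen; the statistic is assumed, not taken from the
    report, and operational analyses add significance tests and
    effect-size classifications).

    records: iterable of (score_stratum, group, correct), where group is
    "reference" or "focal" (e.g., English proficient vs. limited English
    proficient) and correct is True/False. Students are stratified by
    total score so groups are compared at the same ability level.
    """
    strata = defaultdict(lambda: {"rc": 0, "rw": 0, "fc": 0, "fw": 0})
    for stratum, group, correct in records:
        key = ("r" if group == "reference" else "f") + ("c" if correct else "w")
        strata[stratum][key] += 1

    numerator = denominator = 0.0
    for cell in strata.values():
        n = sum(cell.values())
        if n == 0:
            continue
        # Cross-products of correct/wrong counts, weighted by stratum size.
        numerator += cell["rc"] * cell["fw"] / n
        denominator += cell["rw"] * cell["fc"] / n
    return numerator / denominator if denominator else float("inf")
```

A ratio near 1.0 suggests the item behaves similarly for both groups at the same score level; as the quoted expert cautioned, treating a diverse limited English proficient population as a single focal group can mask differences among its subgroups.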
For example, peer reviewers found that Alabama’s documentation did not include sufficient evidence on the selection process for committee members to review test items for bias, noting that no evidence was provided on whether representatives for students with limited English proficiency were included. In Idaho, peer reviewers commented that the state did not report reliability data for student groups, including students with limited English proficiency. See table 5 for further examples. Our group of experts indicated that states are generally not taking the appropriate set of comprehensive steps to create assessments that produce valid and reliable results for students with limited English proficiency and identified essential steps that should be taken. The group noted that no state has implemented an assessment program for students with limited English proficiency that is consistent with the Standards for Educational and Psychological Testing and other technical standards. Specifically, the group said that students with limited English proficiency are not defined consistently within and across states, which is a crucial first step to ensuring the reliability of test results. A reliable test should produce consistent results, so that students achieve similar scores if tested repeatedly. If the language proficiency levels of students with limited English proficiency are classified inconsistently, an assessment may produce results that appear inconsistent because of the variable classifications rather than actual differences in skill levels. One expert noted, for example, that some studies have shown that a student’s language proficiency plays a small role in determining whether a student is classified as limited English proficient. Inconsistency in defining these students may be due to variation in how school districts apply state definitions. 
For example, according to a 2005 study on students with limited English proficiency in California, state board of education guidelines suggest that districts consider a student’s performance on the state’s English language proficiency assessment and on the state’s language arts test, a teacher evaluation of the student’s academic performance, and parental recommendations when determining whether a student should continue to be considered limited English proficient. However, the study noted that districts interpreted and applied these factors differently. Further, it appears that many state assessment programs do not conduct separate analyses for different groups of limited English proficient students. Our group of experts indicated that the reliability of a test may be different for heterogeneous groups of students with limited English proficiency, such as students who are literate in their native language and those who are not. Our group of experts also noted that states are not always explicit about whether an assessment is attempting to measure content skills alone (such as mathematics) or those skills as expressed in English. According to the group, a fundamental issue affecting the validity of a test is the definition of what is being measured. Members of the group emphasized that approaches to ensure valid test results should vary based on which of these is being measured. For example, North Carolina officials stated that the state did not offer native language assessments because the state has explicitly chosen to measure student knowledge in English. The expert group emphasized that determining the validity and reliability of academic assessments for students with limited English proficiency is complicated and requires a comprehensive collection of evidence rather than a single analysis or review.
As one expert noted, “you can’t just do one thing and assume things are valid.” In addition, the appropriate combination of analyses will vary from state to state, depending on the characteristics of the student population and the type of assessment. For example, because reliability of test results can vary based on a student’s English proficiency status or a student’s native language, states with more diverse groups of limited English proficient students may need to conduct additional analyses to ensure sufficient reliability. The group indicated that states are not universally using all the appropriate analyses to evaluate the validity and reliability of test results for students with limited English proficiency. Instead, our experts noted that states vary in terms of the particular techniques they use for this purpose, and in the extent to which they collect valid background data. Members indicated that some states may need assistance to conduct appropriate analyses that will offer useful information about the validity of their academic assessments for these students. Finally, our group of experts indicated that reducing language complexity is essential to developing valid assessments for these students, but expressed concern that some states and test developers do not have a strong understanding of universal design principles or how to use them to develop assessments from the beginning to eliminate language that is not relevant to measuring a student’s knowledge of, for example, mathematics. Members believed that some states may need more information on how to implement these principles to develop assessments that produce valid results for students with limited English proficiency. The majority of states offered some accommodations to try to increase the validity and reliability of assessment results for students with limited English proficiency. 
These accommodations are intended to permit students with limited English proficiency to demonstrate their academic knowledge, despite their limited language ability. Our review of state Web sites found available documentation on accommodations for 42 states. The number of accommodations offered varied considerably among states. One state, for example, offered students with limited English proficiency the use of a bilingual dictionary and a side-by-side English-Spanish version of its grade 10 mathematics test. Another state listed over 40 acceptable accommodations, including clarifying test directions in English or the student’s native language, offering extra time, and providing responses (written or oral) in the student’s native language. Our review found that the most common accommodations offered by these states were allowing the use of a bilingual dictionary and reading test items aloud in English (see table 6). In addition, they offered other accommodations intended to create a less distracting environment for students, such as administering the assessment to the student in a small group or individually. Some states also gave students with limited English proficiency extra time to complete a test to account for their slower reading speed and information processing time in English. The 5 states we studied varied in how they established and offered accommodations to students. For example, Texas officials reported working with the state’s limited English proficiency focus group to develop a list of allowable accommodations, which may be offered on a test when they are routinely used by students in their classrooms. In addition, each school district has a committee to select particular accommodations based on the needs of individual students. California officials told us the state provides guidance to districts on the appropriate use of accommodations. However, they said that districts might not provide approved accommodations because of high administrator turnover.
According to our expert group and our review of the relevant literature, research is lacking on what specific accommodations are appropriate for students with limited English proficiency, as well as their effectiveness in improving the validity of assessment results. A 2004 review of state policies found that few studies focus on accommodations intended to address the linguistic needs of students with limited English proficiency or on how accommodations affect the performance of students with limited English proficiency. In contrast, significantly more research has been conducted on accommodations for students with disabilities, much of it funded by Education. Because of this research disparity, our group of experts reported that some states offer accommodations to students with limited English proficiency based on those they offer to students with disabilities, without determining their appropriateness for individual students. Our experts noted the importance of considering individual student characteristics to ensure that an accommodation appropriately addresses the needs of the student. Other researchers have raised similar issues about the use of accommodations by states. Education’s peer reviews of state academic assessments identified issues related to accommodations for students with limited English proficiency in all 38 states reviewed. For example, the reviewers noted that South Dakota does not clearly indicate whether students with limited English proficiency were provided accommodations that they do not regularly use in the classroom. If an accommodation is not used regularly in the classroom, it may not improve the validity of test results because the student may not be comfortable with a new procedure. 
In addition, they noted that South Dakota does not appear to be monitoring the use of accommodations and suggested that the state study accommodations to ensure that they are instructionally appropriate and that they improve the validity and reliability of the results. In Texas, the reviewers noted that the state needs to provide information regarding the quality and consistency of accommodations for students with limited English proficiency—specifically whether the state routinely monitors the use of accommodations for these students. In North Carolina, they noted a lack of evidence that the state has performed research on accommodations. Although conducting such research could provide useful information on the validity of accommodated tests, having each state individually study accommodations could be financially burdensome for them. While research on accommodations for this population would be useful, it does not have to be conducted directly by states to be applicable to a state’s student population. Further, such research could involve short-term studies, rather than large-scale, longitudinal efforts. In our survey, 16 states reported that they offered statewide native language assessments in language arts or mathematics in some grades for certain students with limited English proficiency in the 2004-2005 school year. For example, New York translated its statewide mathematics assessments into Spanish, Chinese, Russian, Korean, and Haitian-Creole. In addition, 3 states were developing or planning to develop a native language assessment, and several states allowed school districts to translate state assessments or offer their own native language assessments. Our group of experts told us that this type of assessment is difficult and costly to develop. An assessment provided in a student’s native language is intended to remove language barriers students face in demonstrating their content knowledge and thereby improve the validity of test results.
Of the 16 states that offered statewide native language assessments, 4 were able to provide complete data on the number of students taking native language assessments. These data indicated that relatively few students took these assessments. Our group of experts and some state officials also described the challenges of developing native language assessments that produce valid results. Members of our expert group and other experts told us that native language assessments are generally an effective accommodation only for students in specific circumstances, such as students who are instructed in their native language or are literate in their native language. In addition, our experts emphasized that developing valid native language assessments is challenging, time-consuming, and expensive. Development of a valid native language assessment involves more than a simple translation of the original test; in most situations, a process of test development and validation similar to that of the nontranslated test is recommended to ensure the validity of the test. In addition, the administration of native language assessments may not be practicable, for example, when only a small percentage of students with limited English proficiency in the state speak a particular language or when a state’s student population has many languages. Thirteen states offered statewide alternate assessments (such as reviewing a student’s classroom work portfolio) in 2005 for certain students with limited English proficiency, based on our review of accountability plans for all states and the District of Columbia as of March 2006. We also found that 4 additional states allowed school districts to offer alternate assessments, while 7 states and the District of Columbia planned to offer alternate assessments.
An official in Wisconsin told us that the state administers an alternate assessment because developing a native language assessment for its relatively small Spanish-speaking population would be impractical and the state does not have bilingual programs in the second most common language, Hmong (a language that is native to Southeast Asia). However, our group of experts noted that alternate assessments are difficult and expensive to develop, and may not be feasible because of the amount of time required for such an assessment. Members of the group also expressed concern about the extent to which these assessments are objective and comparable and can be aggregated with regular assessments. See figure 4 for information on which states offered native language or alternate assessments for students with limited English proficiency. With respect to English language proficiency assessments, many states implemented new tests to address NCLBA requirements, and are working to align them with newly required state English language proficiency standards. State and consortia officials reported that states are using assessments or test items developed by state consortia, customized assessments developed by testing companies, state-developed assessments, and off-the-shelf assessments. While a few states already had the required English language proficiency assessments in place, many states are implementing them for the first time in spring 2006; as a result, evidence on their validity and reliability may not be fully developed. Many states implemented new English language proficiency assessments for the 2005-2006 school year to meet Education’s requirement for states to administer English language proficiency tests that meet NCLBA requirements by the spring of 2006. These assessments must allow states to track student progress in learning English; in addition, Education requires that these assessments be aligned to a state’s English language proficiency standards. 
According to Education and test development officials, prior to NCLBA, most states used off-the-shelf English language proficiency assessments to determine the placement of students in language instruction programs, but these assessments did not have to be aligned with standards. Education officials said that because many states did not have tests that met NCLBA requirements, the agency funded four state consortia to develop new assessments that were to be aligned with state standards and measure student progress. Officials in some states told us they have chosen to use these consortium-developed tests, while officials in other states reported developing their own tests or continuing to use off-the-shelf tests. Some states had only recently determined what test they are going to administer this year, while others may administer a new test in the 2006-2007 school year. Education officials noted that states’ decisions on these tests have been in flux during this transition year. In the 2005-2006 school year, 22 states used assessments or test items developed by one of four state consortia, making this the most common approach taken by states to develop new English language proficiency assessments. Each of the four consortia varied somewhat in its development approach. For example, officials in two consortia reported that they examined all their member states’ English language proficiency standards and reached consensus on core standards for use on the English language proficiency assessments. They also planned to continue working with member states in implementing their assessments. For example, one consortium plans to provide ongoing professional development to help educators understand the consortium’s standards. In contrast, officials in the other two consortia reported that the consortia disbanded after developing their assessments. 
One state official told us that the state hired a contractor to customize the consortium-developed assessment to more closely align with state standards. In addition, officials in other states, such as New Mexico, told us they are using a combination of consortium-developed test items, along with items developed by another test developer. Fifteen states participated in one of the consortia, but officials in these states told us they chose not to use the assessments developed by the consortia in the 2005-2006 school year for a variety of reasons, including lack of alignment with state standards, the length of the assessment, and the cost of implementation. For example, Kentucky chose not to use the consortium assessment because of cost-effectiveness concerns and lack of alignment with state standards. Another state decided not to use the consortium-developed assessment, as officials were concerned about its cumbersome nature and associated cost. Officials in some states told us they plan to use consortium-developed assessments in the future. For example, Florida officials reported that the state will use a consortium assessment in the 2006-2007 school year. Appendix V shows the states that participated in the consortia and which used consortia-developed assessments in the 2005-2006 school year. Officials in states that did not use consortia assessments told us that they used other approaches to develop their English language proficiency assessments. Eight states worked with test developers to augment off-the-shelf English language proficiency assessments to incorporate state standards. For example, Mississippi, South Dakota, and Wyoming are using versions of an English language proficiency assessment that has been augmented to align to their respective state standards. Officials in 14 states indicated that they are administering off-the-shelf assessments.
These officials indicated varying degrees of alignment between the off-the-shelf tests being used and their state’s English language proficiency standards; in 11 of these states, the assessment has not been fully aligned with state standards. Seven states, including Texas, Minnesota, and Kansas, created their own English language proficiency assessments. Officials in these states said they typically worked with a test developer or research organization to create the assessments. See figure 5 and appendix VI for more detailed information on the English language proficiency assessments used by each state. Officials in our 5 study states and in 28 additional states we contacted to determine what English language proficiency assessment they planned to use in 2006 pointed to some challenges involving these assessments. Some of these state officials expressed concerns about using both their old and new English language proficiency assessments to measure student progress in learning English. NCLBA required states to begin tracking student progress in the 2002-2003 school year, before most states had implemented their new English language proficiency assessments. In May 2006, Education officials told us that states must rely on baseline results from their old tests and determine how results from their old tests relate to results from their new tests in order to track student progress since 2003, as required by NCLBA. They noted that states may change their English language proficiency goals based on results from their new assessments, but they cannot change the initial baseline established with their old test. In its technical comments on this report, Education noted that it allows states to make such determinations in a variety of ways, as long as annual progress is reported. Officials in some states want to rely solely on data from their new tests to track student progress.
They stated that, unlike their old tests, their new tests provide more accurate data on student progress because they are aligned to their English language proficiency standards and were designed to measure student progress. Officials from other states questioned the usefulness of conducting studies to determine the relationship between their old and new tests, especially in states that had previously used multiple English language proficiency assessments. Officials in a few of our study states also expressed concern about the appropriateness of NCLBA’s requirement to assess students with limited English proficiency in kindergarten and the first and second grades. For example, Texas officials told us traditional tests do not produce good test results for students this young in part because of their limited attention spans. In addition, officials in Texas and North Carolina noted that English proficient students in these grades are not required to be assessed in the same way. Officials in our study states and test developers we interviewed reported that they commonly apply generally accepted test development procedures in the development of English language proficiency assessments, but some are still in the process of documenting the validity and reliability of these assessments. For example, some evidence needed to confirm the validity and reliability of the test can be obtained only after the assessment has been fully administered. One consortium contracted with a research organization to conduct validity and reliability testing of its English language proficiency assessment. According to a consortium official, the research organization performed all of the standard steps that are taken to ensure high-quality assessments. These included piloting and field testing the assessment and conducting statistical modeling.
An official from another consortium said that its test vendor is conducting basic psychometric research and analyzing field test data for evidence of reliability. California officials noted that the process for developing and ensuring the validity and reliability of its English language proficiency assessment is similar to that used for its state academic assessments. Although states have taken steps toward determining validity, documenting the validity and reliability of a new assessment is an ongoing process. A 2005 review of the documentation of 17 English language proficiency assessments used by 33 states in the 2005-2006 school year found that the evidence presented on validity and reliability was generally insufficient. The report, which was funded by Education, reviewed documentation for consortium-developed assessments, off-the-shelf assessments, and custom-developed assessments for evidence of validity, reliability, and freedom from test bias, among other things. It found that the technical adequacy of English language proficiency assessments is undeveloped compared to the adequacy of assessments for general education. The study noted that none of the assessments contained “sufficient technical evidence to support the high-stakes accountability information and conclusions of student readiness they are meant to provide.” In addition, many states are in the process of aligning these assessments to state English language proficiency standards, which in turn must be aligned to state content standards. These steps are needed to comply with NCLBA requirements. Alignment, which refers to the degree to which an assessment’s items measure the content they are intended to measure, is critical in assuring the validity of an assessment. Officials in some states have expressed uncertainty about how to align their English language proficiency test with their standards for academic subjects, such as mathematics and science. 
Officials in 2 states told us that their English language proficiency assessments are aligned to state language arts standards but are not aligned to state mathematics standards, meaning that the assessment may not measure the language needed to succeed in a mathematics class. Findings from Education’s Title III monitoring reviews of 13 states indicated that 8 states had not yet fully completed alignment; of these, 5 had not yet linked their English language proficiency and academic content standards, while 5 had not yet aligned their English language proficiency assessments with their English language proficiency standards. Education has offered states a variety of technical assistance to help them appropriately assess students with limited English proficiency, such as providing training and expert reviews of their assessment systems, as well as flexibility in assessing these students. However, Education has issued little written guidance on how states are expected to assess and track the English proficiency of these students, leaving state officials unclear about Education’s expectations. To support states’ efforts to incorporate these students into their accountability systems, Education has offered states some flexibilities in how they track progress goals for these students. However, many of the state and district officials we interviewed told us that the current flexibilities do not fully account for some characteristics of certain students in this student group, such as their lack of previous schooling. These officials indicated that additional flexibility is needed to ensure that the federal progress measures accurately track the academic progress of these students. Education offers support in a variety of ways to help states meet NCLBA’s requirements for assessing students with limited English proficiency for both their language proficiency and their academic knowledge. 
Some of these efforts focus specifically on students with limited English proficiency, while others, such as the Title I monitoring visits, focus on all student groups and on broader compliance issues but review some assessment issues related to students with limited English proficiency as part of their broader purposes. The agency’s primary technical assistance efforts have included the following:

Title I peer reviews of states’ academic standards and assessment systems: Education is currently conducting peer reviews of the academic assessments that states use in measuring adequate yearly progress. During these reviews, three independent experts review evidence provided by the state about the validity and reliability of these assessments (including whether the results are valid and reliable for students with limited English proficiency) and make recommendations to Education about whether the state’s assessment system is technically sufficient and meets all legal requirements. Education shares information from the peer review to help states address issues identified during the review. Education set a deadline of June 30, 2006, for states to receive peer review approval, but only 10 states had their assessment systems fully approved by Education as of that date.

Title III monitoring visits: Education began conducting site visits to review state compliance with Title III requirements in 2005 and has visited 15 states. Education officials reported that they plan to visit 11 more states in 2006. As part of these visits, the agency reviews the state’s progress in developing English language proficiency assessments that meet NCLBA requirements.

Comprehensive centers: Education has contracted with 16 regional comprehensive centers to build state capacity to help districts that are not meeting their adequate yearly progress goals.
The grants for these centers were awarded in September 2005, and the centers provide a broad range of assistance, focusing on the specific needs of individual states. At least 3 of these centers plan to assist individual states in developing appropriate goals for student progress in learning English. In 2005, Education also funded an assessment and accountability comprehensive center, which provides technical assistance to the regional comprehensive centers on issues related to the assessment of students, including those with limited English proficiency.

Ongoing technical assistance for English language proficiency assessments: Education has provided information and ongoing technical assistance to states using a variety of tools and has focused specifically on the development of the English language proficiency standards and assessments required by NCLBA. These include:
a semiannual review of reports states submit to Education and phone calls to state officials focused on state progress in developing their English language proficiency assessments;
on-site technical assistance to states regarding their English language proficiency assessments;
an annual conference focused on students with limited English proficiency that includes sessions on assessment issues, such as aligning English language proficiency and academic content standards;
videoconference training sessions for state officials on developing English language proficiency assessments;
guidance on issues related to students with limited English proficiency provided on its Web site;
information distributed through an electronic bulletin board and a weekly electronic newsletter focused on students with limited English proficiency;
information disseminated through the National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs;
semiannual meetings and training sessions with state Title III directors; and
responses to questions from individual states as needed.
Enhanced Assessment Grants: Since 2003, Education has awarded these grants, authorized by NCLBA, to support state activities designed to improve the validity and reliability of state assessments. According to an Education official, most of the grants awarded to date have funded the English language proficiency consortia, although some grants have been used to conduct research on accommodations. For grants to be awarded in 2006, Education will give preference to projects involving accommodations and alternate assessments intended to increase the validity of assessments for students with limited English proficiency and students with disabilities.

Title I monitoring visits: As part of its monitoring visits to review state compliance with Title I requirements, Education reviews some aspects of the academic assessments administered by states, but in less detail than during its peer reviews. During these visits, for example, states may receive some feedback on how the state administers academic assessments to students with limited English proficiency and the appropriateness of accommodations offered to these students. Education staff also reported that they respond to questions about Title I requirements from individual states as needed.

While providing states with a broad range of technical assistance and guidance through informal channels, Education has issued little written guidance on developing English language proficiency assessments that meet NCLBA’s requirements and on tracking the progress of students in acquiring English. Education issued some limited nonregulatory guidance on NCLBA’s basic requirements for English language proficiency standards and assessments in February 2003. However, officials in about one-third of the 33 states we visited or directly contacted expressed uncertainty about implementing these requirements.
They told us that they would like more specific guidance from Education to help them develop tests that meet NCLBA requirements, generally focusing on two issues. First, some officials said they were unsure about how to align English language proficiency standards with content standards for language arts, mathematics, and science, as required by NCLBA. An official in 1 state said the state needed specific guidance on what Education wants from these assessments, such as how to integrate content vocabulary on the English language proficiency assessment without creating an excessively long test. In another state, officials explained that the state was developing its English language proficiency test by using an off-the-shelf test and incorporating additional items to align the test with the state’s English language proficiency and academic standards. However, the state discovered that it had not correctly augmented the test and will consequently have to revise the test. Officials in this state noted that they have had to develop this test without a lot of guidance from Education. Second, some officials reported that they did not know how to use the different scores from their old and new English language proficiency assessments to track student progress. For example, an official in 1 state said that she would like guidance from Education on how to measure student progress in English language proficiency using different tests over time. Another official was unsure if Education required a formal study to correlate the results from their old and new English language proficiency assessments, noting that more specific guidance would help them better understand Education’s requirements. Without guidance and specific examples on both of these issues, some of these officials were concerned that they will spend time and resources developing an assessment that may not meet Education’s requirements. 
Education officials told us that they are currently developing additional nonregulatory guidance on these issues, but it has not been finalized. They also pointed out that they have provided extensive technical assistance on developing English language proficiency standards and assessments, and have clearly explained the requirements to state officials at different meetings on multiple occasions. An Education official acknowledged that states were looking for more guidance on the degree of alignment required between their English language proficiency assessments and standards, noting that Education is still considering the issue. She stated that the issue would be addressed in the guidance it plans to issue in the future. With respect to academic content assessments, our group of experts reported that some states could use more assistance in creating valid academic assessments for students with limited English proficiency. While 4 of the 5 states we studied in depth had significant experience in, and multiple staff devoted to, developing language arts and mathematics assessments, some members of our expert group pointed out that the assessment departments in other states have limited resources and expertise, as well as high turnover. As a result, these states need help to conduct appropriate analyses that will offer useful information about the validity and reliability of their academic assessments for students with limited English proficiency. An Education official told us that the agency recently began offering technical assistance to states that need help addressing issues raised during their peer reviews. Our group of experts suggested several areas where states could benefit from additional assistance and guidance in developing academic assessments for students with limited English proficiency. Several members noted the lack of good research on what kinds of accommodations can help mitigate language barriers for students with limited English proficiency. 
Several experts also believed that some states need more information on how to implement universal design principles to develop assessments that produce valid results for students with limited English proficiency. In addition, some group members pointed out that developing equivalent assessments in other languages (that is, assessments that measure the same thing and are of equivalent difficulty) is challenging and that states need more information about how to develop such assessments, as well as examples. Education has offered states several flexibilities in tracking academic progress goals for students with limited English proficiency to support their efforts to develop appropriate accountability systems for these students. In a February 2004 notice, Education recognized the existence of language barriers that hinder the assessment of students who have been in the country for a short time and provided some testing flexibility for these students. Specifically, Education does not require students with limited English proficiency to participate in a state’s language arts assessment during their first year in U.S. schools. In addition, while these students must take a state’s mathematics assessment during their first year in U.S. schools, a state may exclude their scores in determining whether it met its progress goals. Education offered additional flexibility in its February 2004 notice, recognizing that limited English proficiency is a more transient quality than having a disability or being of a particular race. Unlike the other NCLBA student groups, students who achieve English proficiency leave the group at the point when they are more prepared to demonstrate their academic knowledge in English, while new students with lower English proficiency are constantly entering the group (see fig. 6). 
Given the group’s continually changing composition, meeting progress goals for this group may be more difficult than for other student groups, especially in districts serving large numbers of students with limited English proficiency. To compensate for this, Education allowed states to include, for up to 2 years, the scores of students who were formerly classified as limited English proficient when determining whether a state met its progress goals for students with limited English proficiency. In addition, Education has approved requests from several states to permit students who have been redesignated as English proficient to remain in the group of students with limited English proficiency until they have achieved the proficient level on the state’s language arts assessment for 1 or more years. Several state and local officials in our study states told us that additional flexibility would be helpful to ensure that the annual progress measures provide meaningful information about the performance of students with limited English proficiency. Officials in 4 of the states we studied suggested that certain students with limited English proficiency should be exempt for longer periods from taking academic content assessments or that their test results should be excluded from a state’s annual progress determination for a longer period than is currently allowed. Several officials voiced concern that some of these students have such poor English skills or so little previous school experience that the assessment results do not provide any meaningful information. Instead, some of these officials stated that students with limited English proficiency should not be included in academic assessments until they demonstrate appropriate English skills on the state’s English language proficiency assessment.
However, the National Council of La Raza, a Hispanic advocacy organization, has voiced concern that excluding too many students with limited English proficiency from a state’s annual progress measures will allow some states and districts to overlook the needs of these students. Education officials reported that they are developing a regulation regarding how test scores for this student group are included in a state’s annual progress measures, but it has not yet been finalized. With respect to including the scores of students previously classified as limited English proficient in a state’s progress measures for this group for up to 2 years, officials in 2 of our 5 study states, as well as one member of our expert group, thought it would be more appropriate for these students to be counted in the limited English proficient group throughout their school careers, but only for accountability purposes. They pointed out that by keeping students formerly classified as limited English proficient in the group, districts that work well with these students would see increases in the percentage who score at the proficient level in language arts and mathematics. An Education official explained that the agency does not want to label these students as limited English proficient any longer than necessary and considered including test results for these students for 2 years after they have achieved English proficiency to be the right balance. Education officials also noted that including all students who were formerly limited English proficient would inflate the achievement measures for the student group. District officials in 4 of the states we studied argued that tracking the progress of individual students in this group is a better measure of how well these students are progressing academically.
Officials in one district pointed to a high school with a large percentage of students with limited English proficiency that had made tremendous progress with these students, doubling the percentage of students achieving academic proficiency. The school missed the annual progress target for this group by a few percentage points, but school officials said that the school would be considered successful if it were measured by how much individual students had improved in their test scores. A district official in another state explained that many students with limited English proficiency initially have very low test scores, but demonstrate tremendous improvement in these scores over time. In response to educators and policymakers who believe such an approach should be used for all students, Education initiated a pilot project in November 2005, allowing a limited number of states to incorporate measures of student progress over time in determining whether districts and schools met their annual progress goals. Even using this approach, however, states must still establish annual goals that lead to all students achieving proficient scores by 2014. NCLBA has focused attention on the academic performance of all students, especially those who have historically not performed as well as the general student population, such as students with limited English proficiency. NCLBA requires states to include these students in their language arts and mathematics assessments and to assess them in a valid and reliable manner, and states are in various stages of doing so. Although Education has provided some technical assistance to states, our group of experts and others have noted the complexity of developing academic assessments for these students and have raised concerns about the technical expertise of states to ensure the validity and reliability of assessment results.
Using assessment results that are not a good measure of student knowledge is likely to lead to poor measures of state and district progress, thereby undermining NCLBA’s purpose to hold schools accountable for student progress. Further, although most states offered these students accommodations, research on their appropriateness is limited. National research on accommodations has informed states’ practices in assessing students with disabilities. Without similar research efforts, accommodations offered to students with limited English proficiency may not improve the validity of their test results. While Education has provided some support and training to states, officials in a number of states are still uncertain about how to comply with some of the more technical requirements of the new English language proficiency assessments required by NCLBA. State officials reported that they need more guidance from Education to develop these assessments. States have had to develop many new assessments under NCLBA for both English language proficiency and academic content, and some states may lack the technical expertise to develop assessments that produce valid results for students with limited English proficiency. Without more specific guidance outlining Education’s requirements, states may spend time developing English language proficiency assessments that do not adequately track student progress in learning English or otherwise meet NCLBA’s requirements. Including students with limited English proficiency in NCLBA’s accountability framework presents unique challenges. For example, students who have little formal schooling may make significant progress in learning academic skills, but may not achieve proficiency on state academic assessments for several years. The movement of students into and out of the group also makes it more difficult for the group to meet state progress goals, even when these students are making academic progress. 
Education has addressed some of the unique characteristics of this student group and provided some flexibility in how states and districts are held accountable for the progress of these students. However, these current flexibilities may not fully account for the characteristics of certain students with limited English proficiency, such as those who have little previous formal schooling.

We recommend that the Secretary of Education
1. Support additional research on appropriate accommodations for students with limited English proficiency and disseminate information on research-based accommodations to states.
2. Determine what additional technical assistance states need with respect to assessing the academic knowledge of students with limited English proficiency and improving the validity and reliability of their assessment results (such as consultations with assessment experts and examples of assessments targeted to these students), and provide such assistance.
3. Publish additional guidance with more specific information on the requirements for assessing English language proficiency and tracking the progress of students with limited English proficiency in learning English.
4. Explore ways to provide additional flexibility to states in holding them accountable for students with limited English proficiency. For example, flexibilities that could be considered include allowing states to include the assessment scores of all students formerly considered to have limited English proficiency in a state’s annual progress results for this group; extending the period during which the assessment scores of some or all students with limited English proficiency would not be included in a state’s annual progress results; and adjusting how states account for recent immigrants with little formal schooling in their annual progress results.

We provided a draft of this report to Education for review and comment.
The agency provided comments, which are reproduced in appendix VII. Education also provided technical clarifications, which we incorporated when appropriate. Education agreed with our first three recommendations. The department noted that it has conducted some research on the effectiveness of accommodations and is currently working with its National Research and Development Center for Assessment and Accountability to synthesize the existing research literature on the assessment of students with limited English proficiency. Education also explained that it has begun the process of identifying the additional technical assistance needs of states with respect to academic assessments; specifically, it will have its Assessment and Accountability Comprehensive Center conduct a needs assessment this fall to determine specific areas in which states need assistance and will provide technical assistance to address those areas. In addition, the department stated that it is exploring ways to help states assess English language proficiency. Education did not explicitly agree or disagree with our fourth recommendation. Instead, the agency commented that it has explored and already provided various types of flexibility regarding the inclusion of students with limited English proficiency in accountability systems. Further, Education noted that it is in the process of completing a regulation on flexibility for these students. However, the department also emphasized that all students with limited English proficiency must be included in school accountability systems to improve both instruction and achievement outcomes. Through our recommendation, we encourage the department to continue its efforts. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions or wish to discuss this report further, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix VIII.

On January 20, 2006, GAO, with the assistance of the National Academy of Sciences, convened a group of experts in Davis, California, to discuss issues related to assessing the academic knowledge of students with limited English proficiency. Specifically, we asked the group to discuss the following questions:
To meet the requirements of the No Child Left Behind Act (NCLBA), what steps should states take to ensure the validity and reliability of language arts and mathematics assessments for students with limited English proficiency?
What steps should states take to ensure that students with limited English proficiency receive appropriate accommodations on language arts and mathematics assessments?
Given NCLBA’s accountability framework, what is the most appropriate way to hold schools and districts accountable for the performance of students with limited English proficiency?
How can the U.S. Department of Education assist states in their efforts to meet NCLBA’s assessment and accountability requirements for students with limited English proficiency?

NCLBA requires states to report adequate yearly progress (AYP) results at the state level for each of the required student groups, including students with limited English proficiency. The law also requires Education, starting in the 2004-2005 school year, to make an annual determination about whether states have made adequate yearly progress for each student group. Education has issued some general regulations regarding state-level adequate yearly progress.
However, Education has not yet collected any such state-level adequate yearly progress results and has not issued any guidance on how states should determine whether a student group has made adequate yearly progress. As a result, some states have not yet made adequate yearly progress determinations for student groups at the state level. In order for a student group, such as students with limited English proficiency, to make adequate yearly progress, it must meet a number of goals. Specifically:
At least 95 percent of students in the group must take the state’s language arts and mathematics assessments; and
the student group must meet the progress goals established by the state for both language arts and mathematics proficiency; or
the percentage of students who did not achieve proficient scores must have decreased by at least 10 percent from the previous year, and the student group must also meet the progress goals established by the state for its other academic indicator (graduation rate for high schools and usually attendance rate for other schools).

Figure 7 illustrates the basic decision process for determining adequate yearly progress for a student group. Because states have different assessment systems, they use different methods for determining adequate yearly progress. A state can have an assessment system that allows it to create the same progress goal for mathematics and language arts for all grades, despite using different tests in each grade. In this case, the state could review data for all students in a student group across the state to determine if the group met its annual progress goals. A state can also establish different progress goals for different grades or groups of grades, depending on the particular test being used. In this case, according to an Education official, a state would have to meet all the proficiency and participation goals for all the different grades or groups of grades in order to make adequate yearly progress.
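The decision process described above can be sketched in code. This is a simplified illustration only: it treats language arts and mathematics as a single proficiency figure rather than checking each subject separately, and the function and parameter names are ours, not Education's.

```python
# Simplified sketch of the AYP decision process for one student group,
# following the criteria above: a 95 percent participation threshold,
# then either the proficiency goal or the "safe harbor" alternative.
# Names and inputs are illustrative, not Education's actual rules.

def makes_ayp(participation_rate: float,
              pct_proficient: float,
              prior_pct_proficient: float,
              proficiency_goal: float,
              met_other_indicator: bool) -> bool:
    """Return True if the student group makes adequate yearly progress."""
    # At least 95 percent of the group must take the assessments.
    if participation_rate < 0.95:
        return False
    # Option 1: the group meets the state's proficiency goal outright.
    if pct_proficient >= proficiency_goal:
        return True
    # Option 2 (safe harbor): the share of non-proficient students fell by
    # at least 10 percent from the previous year, and the group also met the
    # state's other academic indicator (graduation or attendance rate).
    prior_not_proficient = 1.0 - prior_pct_proficient
    current_not_proficient = 1.0 - pct_proficient
    reduced_10_pct = current_not_proficient <= 0.9 * prior_not_proficient
    return reduced_10_pct and met_other_indicator

# Example: the group misses a 70 percent goal but cuts non-proficiency
# from 50 percent to 40 percent, so it passes under safe harbor.
print(makes_ayp(0.97, 0.60, 0.50, 0.70, True))  # True
```

A state with different progress goals for different grades, as noted above, would apply a check like this once per grade or grade span and require every check to pass.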
We requested district-level achievement data from 20 states, and 18 states responded to our request. When districts reported proficiency data for different grades or groups of grades, we determined that the percentage of students with limited English proficiency met a state's mathematics progress goal if the student group met the goal for all grades reported. Results from charter schools are included when a charter school is its own school district or part of a larger school district. Hawaii only has one school district. The results for Arkansas do not include those students with limited English proficiency who were considered proficient based on the state's portfolio assessment. The total student population includes students with limited English proficiency.

[Appendix table: English language proficiency assessments administered by each state. The listed tests include consortium-developed assessments (Assessing Comprehension and Communication in English State-to-State for English Language Learners, or ACCESS for ELLs, developed by the WIDA consortium; the English Language Development Assessment, or ELDA, developed by the SCASS consortium; tests based on Mountain West Assessment Consortium (MWAC) items; and the Comprehensive English Language Learning Assessment, or CELLA, developed under the Pennsylvania Enhanced Assessment Grant (PA EAG)) as well as commercial and state-developed tests such as the Stanford English Language Proficiency Test, the MAC II (Maculaitis Assessment of Competencies) Test of English Language Proficiency, LAS (Language Assessment System) Links, the IDEA Proficiency Test, the California English Language Development Test, and state-specific assessments in states such as Kansas, Minnesota, New Mexico, New York, and Texas. At least one state allows school districts to individually choose tests.]

Harriet Ganson (Assistant Director) and Michelle St. Pierre (Analyst-in-Charge) managed all aspects of this assignment. Shannon Groff, Eileen Harrity, and Krista Loose made significant contributions to this report. Katie Brillantes contributed to the initial design of the assignment. Carolyn Boyce, John Mingus, and Lynn Musser provided key technical support, James Rebbe provided legal support, and Scott Heacock assisted in message and report development.

No Child Left Behind Act: States Face Challenges Measuring Academic Growth That Education's Initiatives May Help Address. GAO-06-661. Washington, D.C.: July 17, 2006.
No Child Left Behind Act: Improved Accessibility to Education's Information Could Help States Further Implement Teacher Qualification Requirements. GAO-06-25. Washington, D.C.: November 21, 2005.
No Child Left Behind Act: Education Could Do More to Help States Better Define Graduation Rates and Improve Knowledge about Intervention Strategies. GAO-05-879. Washington, D.C.: September 20, 2005.
No Child Left Behind Act: Most Students with Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. GAO-05-618. Washington, D.C.: July 20, 2005.
Head Start: Further Development Could Allow Results of New Test to Be Used for Decision Making. GAO-05-343. Washington, D.C.: May 17, 2005.
No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004.
No Child Left Behind Act: Improvements Needed in Education's Process for Tracking States' Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.
For the Spanish translation of the highlights page for this document, see GAO-06-1111: Ley para que ningún niño se quede atrás: La ayuda del Departamento de Educación puede contribuir a que los Estados midan mejor el progreso de los alumnos que no dominan bien el inglés. GAO-06-1111, julio de 2006. The No Child Left Behind Act of 2001 (NCLBA) focused attention on the academic achievement of more than 5 million students with limited English proficiency. Obtaining valid test results for these students is challenging, given their language barriers. This report describes (1) the extent to which these students are meeting annual academic progress goals, (2) what states have done to ensure the validity of their academic assessments, (3) what states are doing to ensure the validity of their English language proficiency assessments, and (4) how the U.S. Department of Education (Education) is supporting states' efforts to meet NCLBA's assessment requirements for these students. To collect this information, we convened a group of experts and studied five states (California, Nebraska, New York, North Carolina, and Texas). We also conducted a state survey and reviewed state and Education documents. In the 2003-2004 school year, state data showed that the percentage of students with limited English proficiency scoring proficient on a state's language arts and mathematics tests was lower than the state's annual progress goals in nearly two-thirds of the 48 states for which we obtained data. Further, our review of data 49 states submitted to Education showed that in most states, these students generally did not perform as well as other student groups on state mathematics tests.
Factors other than student knowledge, such as how a state establishes its annual progress goals, can influence whether states meet their goals. For their academic assessments, officials in our five study states reported taking steps to follow generally accepted test development procedures and to ensure the validity and reliability of these tests for students with limited English proficiency, such as reviewing test questions for bias. However, our group of experts expressed concerns about whether all states are assessing these students in a valid manner, noting that some states lack the resources and technical expertise to take appropriate steps to ensure the validity of tests for these students. Further, Education's peer reviews of assessments in 38 states found that 25 states did not provide adequate evidence to ensure the validity or reliability of academic test results for these students. To improve the validity of these test results, most states offer accommodations, such as a bilingual dictionary. However, our experts reported that research is lacking on what accommodations are effective in mitigating language barriers. A minority of states used native language or alternate assessments for students with limited English proficiency, but these tests are costly to develop and are not appropriate for all students. Many states are implementing new English language proficiency assessments in 2006 to meet NCLBA requirements; as a result, complete information on their validity and reliability is not yet available. In 2006, 22 states used tests developed by one of four state consortia. Consortia and state officials reported taking steps to ensure the validity of these tests, such as conducting field tests. A 2005 Education-funded technical review of available documentation for 17 English language proficiency tests found insufficient documentation of the validity of these assessments' results. 
Education has offered a variety of technical assistance to help states assess students with limited English proficiency, such as peer reviews of states' academic assessments. However, Education has issued little written guidance to states on developing English language proficiency tests. Officials in one-third of the 33 states we visited or directly contacted told us they wanted more guidance about how to develop tests that meet NCLBA requirements. Education has offered states some flexibility in how they assess students with limited English proficiency, but officials in our study states told us that additional flexibility is needed to ensure that progress measures appropriately track the academic progress of these students.
The Congress has long recognized the need for the President to have flexibility in the foreign policy area. This is reflected in sections 506 and 552 of the Foreign Assistance Act of 1961, as amended. In addition, the Congress has occasionally authorized the President to initiate drawdowns for specific purposes in foreign operations appropriations acts. Section 506(a)(1) of the Foreign Assistance Act authorizes the President to “drawdown” defense articles, services, and military education and training from DOD and the military services’ inventories and provide such articles and services to foreign countries or international organizations. Before exercising this authority, the President must report to the Congress that an unforeseen emergency exists requiring immediate military assistance that cannot be met under any other law. Section 506(a)(2) of the Foreign Assistance Act authorizes the President to drawdown articles and services from the inventory and resources of any U.S. government agency and provide them to foreign countries or international organizations in a number of nonemergency situations. As above, before exercising this authority, the President must first report to the Congress that any such drawdown is in the national interests of the United States. This special authority is broad in scope, allowing the President to use drawdowns to assist with counternarcotics efforts, provide international disaster assistance and migration and refugee assistance, aid prisoner-of-war and missing-in-action efforts in Southeast Asia, supplement peacekeeping missions, and support mid- to long-term national interests in nonemergency situations. Section 552 of the Foreign Assistance Act authorizes the President to provide assistance for peacekeeping operations and other programs carried out in furtherance of U.S. national security interests. 
Specifically, section 552(c)(2) authorizes the President to direct the drawdown of commodities and services from the inventory and resources from any U.S. agency if the President determines that an unforeseen emergency requires the immediate provision of such assistance. At the discretion of the President, drawdown proposals are typically developed in an interagency process that generally includes DOD, the National Security Council, and State but may include other executive branch agencies. Based on the estimated price and availability of the defense articles and services, the agencies agree on the parameters of the drawdown and State prepares a justification package, including the presidential determination for the President’s signature. Once the presidential determination is approved, the Defense Security Cooperation Agency (DSCA), a component of DOD, executes the drawdown by working with the military services to determine what specific defense articles and services will be provided and who will provide them. DSCA is also charged with tracking and reporting on the drawdown status. A drawdown is typically completed when the emergency or foreign policy goal has been met or the dollar value of the authority has been reached. The excess defense articles program, which authorizes the President to transfer defense articles excess to DOD’s needs to eligible foreign countries or international organizations, is sometimes used in conjunction with drawdowns. Defense articles, including excess defense articles, that are transferred under presidential determinations authorizing drawdowns must be fully operational on delivery. The drawdown authority may be used, if necessary, to refurbish defense articles to operational status. In the 27 years from 1963 through 1989, the President approved 20 determinations authorizing drawdowns valued at a total of about $1 billion. 
In the 13 years since 1989, the President approved 70 determinations authorizing drawdowns valued at about $2.3 billion (see app. I). Of the 90 total drawdowns, 58 totaling about $2.1 billion were authorized under section 506 of the Foreign Assistance Act; 15 additional drawdowns valued at about $141.7 million were authorized under section 552. As shown in figure 1, drawdown authorizations as a percentage of total military assistance provided by the United States have varied considerably over the years (see also app. II). However, the increased use of drawdowns in the 1990s represented a larger percentage of total annual military assistance than in any other period except during the Vietnam War. The Foreign Assistance Act of 1961, as amended, also requires that the President report to the Congress on military assistance, including drawdowns, provided to foreign recipients. Specifically, section 506(b)(2) requires the President to keep the Congress fully and currently informed of all military assistance provided under section 506. This includes detailing all military assistance to a foreign country or international organization upon delivery of any article or upon completion of any service or education and training. Section 655 requires the President to submit an annual report to the Congress on the aggregate value and quantity of defense articles and services and military education and training activities both authorized and actually provided by the United States to each foreign recipient. The Director of DSCA is primarily responsible for preparing these reports, as delegated by the President through the Secretary of Defense. Overall, DSCA's reports to the Congress on the status of drawdowns are inaccurate and incomplete. Its information system for tracking the status of drawdowns is outmoded, and the military services do not regularly provide DSCA updated information on the transfers they are implementing.
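As a quick consistency check, the determination counts cited above can be tallied. The attribution of the remaining determinations is an inference from the authorities described earlier (such as foreign operations appropriations acts), not a figure stated in the report.

```python
# Presidential determinations authorizing drawdowns, as cited in this report.
pre_1990 = 20        # 1963 through 1989, valued at about $1 billion
since_1989 = 70      # approved since 1989, valued at about $2.3 billion
total = pre_1990 + since_1989

under_sec_506 = 58   # section 506 of the Foreign Assistance Act, ~$2.1 billion
under_sec_552 = 15   # section 552, ~$141.7 million
# The remainder were presumably authorized under other provisions, such as
# foreign operations appropriations acts (an inference, not stated in the report).
other = total - under_sec_506 - under_sec_552
print(total, other)  # prints: 90 17
```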
As a result, the Congress and the executive branch do not have accurate and up-to-date information readily available to oversee and manage the assistance provided through drawdowns. DSCA uses its "1000 System" as a central repository for drawdown data. The 1000 System was designed in the late 1960s to track defense articles and services granted under the Military Assistance Program, which was discontinued in 1982. The Army, Air Force, and Navy each compile data on the cost, type, quantity, and delivery status of defense articles and services supplied as drawdowns, but each service uses a different automated system: any updates submitted to DSCA have to be converted to the 1000 System, and any coding or conversion errors have to be manually corrected. In addition, the services do not regularly report this information to DSCA. DSCA officials stated that it might take anywhere from a few months to several years for the military services to report drawdown data. A March 2002 Navy memo regarding DSCA's request for an update stated that the 1000 System was an impediment to drawdown processing. A DSCA official told us that the Navy had not provided updated information for several years. Further, although officials at the Army Security Assistance Command said that the Army was sending updates of drawdown data to DSCA on a monthly basis, DSCA officials told us that they were not aware of the updates. In response to specific inquiries, DSCA usually relies on its country desk officers to work with the military services to determine the defense articles and services provided and the associated costs to DOD and the services. Nevertheless, we found that this information, as well as other information that the DSCA desk officers maintain, is often not entered into the 1000 System. Our analysis of updates provided by the services and of more detailed information from our four case studies revealed numerous inaccuracies in the 1000 System and DSCA's reports to the Congress.
Four presidential determinations authorizing drawdowns totaling $17 million were not on DSCA's list, and three presidential determinations were incorrectly identified in the 1000 System. For a 1993 drawdown to Israel, DSCA's 1000 System reports that nothing has been delivered. In information provided to us, the Army reported that Apache and Blackhawk helicopters and services worth $272 million were provided to Israel, but indicated that its records are not clear whether the helicopters were provided as part of the 1993 drawdown. However, an Army security assistance officer in Israel during 1993 told us that the helicopter deliveries were part of the 1993 drawdown. DSCA was required to report every 60 days on the delivery and disposition of defense articles and services to Bosnia. In June 2001, in its last 60-day report to the Congress, DSCA reported that $98.3 million in defense articles and services had been provided to Bosnia. Records provided to us by the military services indicate that DSCA did not use actual costs in these reports. For the 1996 drawdown to Jordan, the President authorized the transfer of 88 M60 tanks. DSCA stated in its 1996 annual report to the Congress that 50 tanks were authorized, but it did not report whether these tanks were delivered or at what cost. In subsequent annual reports to the Congress, DSCA provided no further updates on the Jordan drawdown. According to U.S. embassy officials and the DSCA Jordan desk officer, 50 tanks were delivered in December 1996, and the remaining 38 tanks were delivered in December 1998. As recently as July 2002, the 1000 System indicated that only 5 tanks had been delivered to Jordan at a cost of $10.6 million. The Army reported that $15.5 million was the value of all 88 tanks, but this figure did not include costs for refurbishment, spare parts, and transportation. Under a 1997 drawdown to Mexico, the President authorized the transfer of 53 UH-1H helicopters, which was reported to the Congress.
As with Jordan, in subsequent annual reports to the Congress, DSCA provided no further updates to the Mexico drawdown. In February 2001, DSCA closed the drawdown, with concurrence from the services involved, 3 years after the drawdown was completed and nearly 18 months after the helicopters had been returned to the United States. DSCA reported the total costs as $16.1 million, including $8 million for the 53 helicopters. However, as of July 2002, the 1000 System had not recorded the transfer, much less noted the return of the helicopters. Appendix III presents the dollar value of deliveries reported in DSCA's 1000 System compared with the dollar value shown in the military services' reports for the 51 drawdowns authorized during fiscal years 1993–2001. Overall, the 1000 System reported the delivery of about $300 million in defense articles and services, while the military services reported $724.2 million. DSCA and the military services' data agreed for 16 drawdowns—reporting no deliveries for 12—and differed by less than $1 million for 12 others. Of the 23 drawdowns with differences greater than $1 million, the military services generally reported significantly higher amounts. Drawdowns are an additional tool for the President to address U.S. foreign policy and national security objectives. They allow the President to provide military assistance to foreign recipients quickly because the defense articles and services are not provided through regular acquisition channels. Drawdowns also allow the United States to provide additional or improved military capability to foreign recipients. Officials from both the U.S. and recipient governments stated that the transfer of defense articles and services through drawdowns helps promote military-to-military relations. Also, DOD and State officials told us that the transfer of defense articles under drawdowns can help expand markets for U.S. defense firms.
According to State officials, drawdowns allow the United States to provide assistance to foreign recipients in an emergency using DOD resources. In particular, drawdown authority has been useful in providing humanitarian assistance in the wake of natural disasters. For example, in response to a 1998 hurricane that struck Central America, the President determined that a strong U.S. response to save lives and assist in reestablishing basic infrastructure was needed. The drawdown authority allowed DOD to use existing inventory and resources for its relief efforts. The importance of the President’s ability to supply defense articles or services quickly to address a regional crisis was evidenced by a 1996 drawdown to Bosnia. The United States provided defense articles and services to the Bosnian Federation within 6 months of a July 1996 presidential determination. According to DOD and State officials, the drawdown allowed assistance to be provided more quickly and at less cost than other security assistance programs would have. The United States provided 116 fully operational 155mm howitzers as excess defense articles to help ensure the Bosnian Federation Army’s capacity to return indirect fire if attacked, which they lacked during the conflict with the Bosnian Serbs. The United States also provided 45 M60 tanks, 80 armored personnel carriers, 15 UH-1H helicopters, and light arms including 46,100 M16 rifles. These articles and related services met the force requirements for military stabilization that were approved in the Dayton Peace Agreement and enumerated in the Organization for Security and Cooperation in Europe Agreement on Sub-Regional Arms Control. According to DOD and State officials, the defense articles and services provided under the drawdown helped promote the peace and military stability of Bosnia. The drawdown authority is also useful for providing logistical assistance to regional operations, as illustrated in the following examples. 
In a 1999 drawdown to Kosovo, the United States supplied airlift and related services for the United Nations High Commissioner for Refugees. In a 1999 drawdown to East Timor, the United States provided transportation for peacekeepers as part of a regional multilateral operation headed by Australia. Similarly, in a 2000 drawdown for disaster assistance in southern Africa, the United States provided the logistical support for a South African-led regional multilateral disaster response force. Drawdowns are also used to support international counternarcotics operations. During fiscal years 1996–99, the United States provided defense articles and services through drawdowns to the Colombian and Mexican military and national police to increase their ability to interdict the flow of illicit narcotics to the United States. The United States provided the Colombian Army and National Police with fully operational defense articles including 7 C-26 aircraft, 12 UH-1H helicopters, and 9 patrol boats. Similarly, the United States provided Mexico with 53 UH-1H helicopters and 4 C-26 aircraft. According to State officials, although Colombia and Mexico experienced difficulty in using these articles (Mexico eventually returned the helicopters to the United States), the drawdown helped improve their capability to conduct counternarcotics operations. In the case of Colombia, the drawdown, which was implemented by State, was a way to provide arms, ammunition, and other lethal assistance to the Colombian National Police. In 1996, 1998, and 1999, three separate drawdowns were intended to help Jordan promote regional security of the Middle East. The drawdowns were initiated after Jordan signed a peace treaty with Israel in 1995 and as a result of Jordan’s subsequent role in the Wye River Peace Conference. 
The United States provided Jordan with 88 M60 tanks, 18 UH-1H helicopters, 38 antitank armored personnel carriers, a C-130 aircraft, a rescue boat and 2 personnel boats, 18 8-inch howitzers, and 302 air-to-air missiles. According to DOD and State officials, the defense articles that were transferred helped Jordan secure its borders. Drawdowns can help foster better military-to-military relations between the United States and foreign recipients. According to DOD and State officials, the current U.S. military-to-military relationship with Jordan is excellent, in part because of the transfer of articles and services through drawdowns. U.S. officials cited as evidence Jordan's participation in peacekeeping operations in East Timor, Haiti, and Sierra Leone. More recently in Afghanistan, the Jordanian Armed Forces participated in demining operations and set up a field hospital that has treated over 30,000 patients, including U.S. soldiers. DOD officials also noted that U.S.-Jordanian training exercises resulted in the U.S. Marine Corps being better prepared to operate in Afghanistan. According to State officials, the transfer of defense articles under drawdowns and excess defense articles helps to expand markets for U.S. defense firms. For example, the Jordanian Army signed a $38 million contract with a U.S. defense firm to refit Jordan's M60 tanks, including the 88 tanks transferred under a 1996 drawdown, with a new 120mm gun. Jordan plans to develop its defense industrial base around this capability and make this service available to other countries in the Middle East. We found two major concerns in the current use of drawdowns that may limit the benefits of the program. The U.S. military services are not being reimbursed for the costs associated with a drawdown, and the countries that receive defense articles through drawdowns often do not have the resources to maintain and operate them.
According to DOD and military service officials, the services are not reimbursed for the defense articles provided or the associated costs of drawdowns, and the articles are usually not replaced. Section 506(d) of the Foreign Assistance Act authorizes the appropriation of funds to the President to reimburse the services for the costs associated with executing drawdowns. However, since 1979, the President has not requested such reimbursements. The military services can incur six types of costs when executing a drawdown—(1) the value of the defense articles provided including aircraft, vehicles, weapons and ammunition, or other major end items; (2) the repair or refurbishment of these items; (3) spare parts and tools; (4) training; (5) packing, crating, handling, and transportation; and (6) administrative costs. The cost of defense articles charged against a drawdown is a depreciated value and not necessarily the replacement cost. The other costs of a drawdown are typically paid out of a service’s operations and maintenance account and are not budgeted or planned for in advance. In effect, this means that the services have less operations and maintenance funding for other items in their inventories. Information provided by the services shows that unreimbursed costs associated with drawdowns have totaled about $724.2 million since 1993. The Army reported about $557 million in unreimbursed costs, and the Air Force and Navy reported $69.4 million and $97.8 million, respectively. Case by case, unreimbursed costs ranged from less than $100 to approximately $87.2 million. A large proportion of these costs were for refurbishing the defense articles, providing spare parts and support equipment, and transporting the articles. For example, the Army reported that it spent approximately $31.4 million from its operations and maintenance account to refurbish and deliver $55.8 million worth of articles for the 1996 drawdown to Bosnia. 
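The service-reported totals above can be tallied as a quick arithmetic check. The figures are as cited in this report; the breakdown below is illustrative only, not an official accounting.

```python
# Unreimbursed drawdown costs reported by the military services since 1993,
# in millions of dollars, as cited in this report.
unreimbursed_millions = {
    "Army": 557.0,
    "Air Force": 69.4,
    "Navy": 97.8,
}

total = sum(unreimbursed_millions.values())
print(f"Total unreimbursed costs: about ${total:.1f} million")
# prints: Total unreimbursed costs: about $724.2 million
```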
Similarly, the Army spent $23.8 million for spare parts and transportation from its operations and maintenance account on $51.5 million worth of articles for the 1996 drawdown to Jordan. However, this figure did not include refurbishment. Numerous DOD and service officials stated that the unreimbursed costs associated with a drawdown negatively affect the readiness of the U.S. military services. However, these officials could not provide any examples of programs forgone or specific deficiencies in unit readiness. In 1996, we reported that Army operations and maintenance costs exceeded funding for contingency operations as a result, in part, of Army expenditures on the 1996 drawdown to Bosnia. In addition, a July 1996 memorandum from the Chief of Staff of the Army to the Chairman of the Joint Chiefs of Staff stated that drawdowns affect the Army's ability to respond to contingencies. It also stated that defense articles for future drawdowns would have to be taken from war reserve stocks or from reserve components. In other documents since 1996, the Army characterized the unbudgeted expenditures from operations and maintenance accounts in support of drawdowns as a drain on its readiness, training, transformation activities, and quality-of-life funds and as a long-term risk to the stability of Army investments. Furthermore, in 2000, the military services reported to DSCA on the effect on readiness of drawdowns for counternarcotics efforts. Generally, the services characterized the effect as dollars spent on unplanned contingencies and, therefore, not available to support other requirements. In their responses to DSCA, the Army stated that it expected readiness to be adversely affected by the diversion of $8 million worth of Blackhawk helicopter spare parts for Colombia, but it did not say whether any specific helicopter unit would be affected. Subsequently, the Joint Staff directed the Army to provide the parts to Colombia under a 1999 drawdown.
The Air Force noted that it would need to replace several utility vehicles transferred under drawdown authority, but it did not specify when or at what cost these vehicles would be replaced or the effect on readiness of no longer having the vehicles. In 1985, we reported that even if DOD and the military services were reimbursed for the costs associated with drawdowns, full replacement was unlikely, if not impossible. This is because, among other reasons, the replacement cost of an article may have increased beyond the depreciated value charged against the drawdown, or the article may have been superseded in inventory by a newer (and more expensive) item. According to DOD officials, drawdowns are successful over the long term only if the foreign recipient has the ability to support the defense articles or if the United States provides additional funding for maintenance. Drawdowns typically provide for 1 or 2 years of essential spare parts for aircraft, vehicles, and weapons, but many recipients do not have the resources to support the defense articles after that. In addition, because defense articles delivered under drawdowns are often older articles, the spare parts and tools needed to maintain them may not be readily available. Consequently, the recipients' ability to conduct military or police missions in support of U.S. foreign policy diminishes as vehicles and weapons break down and as parts for these older defense articles become more difficult to obtain. Each of our case studies provided examples of problems with the long-term sustainability of the defense articles provided through drawdowns. Bosnia. According to officials from the Bosnian Federation Ministry of Defense and DOD, the Bosnian Federation Army does not have enough of its own funds, and does not receive enough assistance from the United States, to maintain the vehicles and weapons it received in the 1996 drawdown. Bosnia has received less than $6 million per year in financing since 1996 to support the defense articles.
However, Bosnian Federation Ministry of Defense officials stated that they need approximately $10 million per year just for spare parts and fuel. These officials noted that, as of May 2002, the readiness of the Federation units had significantly deteriorated and that the operational rates were below 35 percent for the helicopters and below 60 percent for the tanks. Colombia. In 1998, we reported that a 1996 counternarcotics drawdown to Colombia was hastily developed and did not consider sufficient information on specific Colombian requirements—including Colombia’s ability to operate and maintain the articles. For example, 2 months after Colombia received 12 UH-1H helicopters, the Colombian National Police reported that only 2 were operational. The U.S. embassy estimated the cost of the repairs at about $1.2 million. As part of the same drawdown, the United States transferred 5 C-26 aircraft to conduct counternarcotics surveillance missions. According to U.S. embassy officials, the United States spent at least an additional $3 million to modify each aircraft to perform the surveillance missions, and it costs at least $1 million annually to operate and maintain each aircraft. Mexico. In 1996 and 1997, the United States provided the Mexican military with 73 UH-1H helicopters—20 from a 1996 excess defense articles transfer and 53 from a 1997 drawdown—and 2 years of spare parts to assist Mexico in its counternarcotics efforts. As we reported in 1998, the usefulness of the U.S.-provided helicopters was limited because the helicopters were inappropriate for some counternarcotics missions and lacked adequate logistical support. At the time, U.S. embassy officials were concerned that once the U.S.-provided support had been used, the Mexican military would not provide the additional support—estimated at $25 million per year for the UH-1H fleet—because of budgetary constraints. 
In March 1999, 72 UH-1H helicopters (one crashed) were grounded because of overuse and airworthiness concerns. Shortly thereafter, Mexico transferred the 72 helicopters back to the United States for repair and ended its involvement in the helicopter program. Jordan. Although Jordan has allocated $16 million of U.S. aid per year for sustainment and modernization since 2000, it cannot fully use all of the defense articles it has received through drawdowns. For example, the Jordanian Air Force cannot get all the necessary spare parts from DOD’s logistics system for its UH-1H helicopters; as of May 2002, only 20 of 36 helicopters were operational. In addition, Jordan does not have funds to purchase additional munitions for some of the weapons it received from the drawdowns. As a result, the Jordanian Army and Air Force have never test-fired the air-to-air missiles or the antitank missiles they received. Furthermore, according to U.S. military officials in Jordan, the shelf life of some of the other munitions and light weapons ammunition used for training purposes may be expiring, and Jordan does not have the funds to replace them. Drawdowns give the President the ability to provide defense articles, training, and services to foreign countries and international organizations without first seeking specific appropriations from the Congress. In making this accommodation, the Congress has required that the President regularly report on the use of these special authorities. However, DSCA’s system for collecting information on the status of drawdowns is outmoded and does not readily permit DSCA to meet the reporting requirements to the Congress. While DSCA can respond to ad hoc inquiries about specific drawdowns, a way to systematically track and accurately report on the status of drawdowns does not currently exist.
As a result, neither the Congress nor the executive branch has complete and accurate information about the status of defense articles and services provided to foreign recipients through drawdowns. In light of the increased use of drawdowns since 1990, the need for such information has increased accordingly. To help ensure that the Congress has accurate and complete information on the use of drawdowns, we recommend that the Secretary of Defense, in consultation with the Director of DSCA and the Secretaries of the military services, develop a system that will enable DSCA to report to the Congress on the cost, type, quantity, and delivery status of defense articles and services transferred to foreign recipients through drawdowns, as required. DOD provided written comments on a draft of this report (see app. IV). The Department of State had no comments. DOD concurred with our recommendation, but stated that DSCA is dependent on the military services for specific drawdown cost and delivery information and is not funded to support this administrative reporting requirement. We note that the Secretary of Defense has the authority to require regular and timely reporting by the services and believe that DOD should provide DSCA the necessary resources to fully implement our recommendation. DSCA also provided certain technical clarifications that we have incorporated as appropriate. Overall, to examine the use of drawdown authorities, we focused on the special authorities granting the President the ability to provide military assistance in emergency situations and in the U.S. national interest for purposes of international counternarcotics control. We selected four countries—Bosnia-Herzegovina, Colombia, Jordan, and Mexico—as case studies to analyze specific costs, benefits, and problems associated with the drawdowns.
Bosnia and Jordan represent examples of the use of drawdowns in an emergency situation to help stabilize their respective regions, and Colombia and Mexico represent examples of U.S. assistance in the national interest for counternarcotics efforts. To determine whether the costs to DOD and the status of drawdowns are reported to the Congress, as required, we analyzed relevant DSCA and military services’ reports and documentation and addressed this issue with cognizant DSCA, military services, and State officials. Specifically, we compared DSCA’s list of presidential determinations authorizing drawdowns to presidential determinations published in the Federal Register and drawdown reports from the military services; analyzed DSCA’s cost and delivery data for the drawdowns from fiscal years 1993–2001 by comparing it with data collected from the military services; and compared information that we obtained from the DSCA country desk officers with information from U.S. embassy officials in the case study countries to determine the status of specific drawdowns, including deliveries and costs. We also reviewed the Foreign Assistance Act of 1961, as amended, to determine the relevant reporting requirements. To determine how the drawdowns benefit the United States and foreign recipients and what concerns, if any, are associated with the programs, we focused primarily on the four case study countries. We analyzed relevant DSCA, military services, and State documentation. We visited Bosnia and Jordan and met with U.S. embassy and host country officials, including officials in the host country ministries of defense and military services, and reviewed relevant documentation. We met with the cognizant officials of the unified military commands for Bosnia, Colombia, and Jordan. In Washington, D.C., we met with DSCA country desk officers and officials from DSCA’s Comptroller’s Office and General Counsel’s Office; the U.S. 
military services’ respective security assistance offices; and the Office of the Joint Chiefs of Staff, Directorate for Strategic Plans and Policy. We also met with cognizant officials in the Department of State’s Bureau for Political and Military Affairs and the Bureau for International Narcotics and Law Enforcement Affairs. We conducted our work between November 2001 and August 2002 in accordance with generally accepted government auditing standards. We will send copies of this report to the interested congressional committees and the Secretaries of Defense and State. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-4268 or contact me at [email protected]. An additional GAO contact and staff acknowledgments are listed in appendix V. Table 1 lists the 90 presidential determinations that have authorized $3.3 billion in drawdowns since fiscal year 1963. The first drawdown authorized military assistance for India; the most recent authorized counterterrorism assistance for the Philippines in June 2002. Over the years, more than 55 countries and other organizations, such as the United Nations, have been authorized to receive U.S. military assistance through drawdowns. Israel was authorized to receive the most military assistance, with nine drawdowns totaling approximately $923 million during the early and mid-1990s. South Vietnam was second with drawdown authority totaling $375 million under two presidential determinations in 1965 and 1966. Cambodia was third with drawdown authority totaling $325 million under presidential determinations in 1974 and 1975. The frequency of presidential determinations has increased since 1990. During fiscal years 1961–89, 20 presidential determinations authorized a total of about $1 billion in drawdowns.
Since 1990, 70 presidential determinations have authorized $2.3 billion in drawdowns. As shown in table 2, 58 drawdowns totaling approximately $2.1 billion were authorized under section 506 of the Foreign Assistance Act, which allows the President to authorize assistance for unforeseen military emergencies, counternarcotics, counterterrorism, and disaster relief. Of the remaining 32 drawdowns, 16 drawdowns totaling approximately $1.1 billion were authorized under various foreign operations acts to support activities in the national interest, including efforts to locate servicemen listed as prisoners of war and missing in action in Southeast Asia; 15 drawdowns totaling $141.7 million were authorized specifically for peacekeeping-related operations (section 552 of the Foreign Assistance Act); and 1 drawdown totaling $5 million was authorized under the Iraq Liberation Act of 1998 for training Iraqi opposition organizations. Table 3 illustrates that drawdowns have been authorized more frequently in the 1990s, as also shown in appendix I. It also shows that the military assistance authorized by presidential determinations has more than tripled as a percentage of overall U.S. military assistance, averaging over 4.6 percent a year during fiscal years 1990–2001 compared with 1.3 percent for the previous 29 years (fiscal years 1961–89). At least one drawdown has been authorized every year since fiscal year 1986, with 10 each in fiscal years 1996 and 1999. In fiscal year 2002, five drawdowns had been authorized through June—primarily for counterterrorism purposes. We analyzed cost and delivery data from DSCA’s 1000 System and compared it with similar information provided by the services for the 51 drawdowns authorized during fiscal years 1993–2001. Table 4 illustrates the differences in the reported value of defense articles and services delivered. Overall, the 1000 System reported about $300 million in drawdown transfers, while the military services reported $724.2 million.
Of the 51 drawdowns, DSCA and the military services’ data agreed for 16, including 12 with no reported deliveries, and differed by less than $1 million for 12 others. Of the 23 drawdowns with differences greater than $1 million, the military services generally reported significantly higher amounts. We did not attempt to determine the reasons for the differences in reporting. For example, DSCA reported no costs for a drawdown to Israel (93-17) while the Army reported $272 million. However, Army officials noted that they were not certain whether the transfers the Army reported were specifically for the drawdown. DSCA reported costs of $5.8 million for a drawdown to Mexico (97-09) while the services reported $19.5 million. DSCA reported costs of $16.5 million for a drawdown to Jordan (98-19) while the services reported $33 million. In addition to the above-named individual, Allen Fleener, Ronald Hughes, James Strus, and Jason Venner made key contributions to this report. Lynn Cothern, Ernie Jackson, and Reid Lowe provided technical assistance.
Since 1961, the President has had special statutory authority to order the “drawdown” of defense articles—such as aircraft, vehicles, various weapons, and spare parts—and services or military education and training from Department of Defense (DOD) and military service inventories and transfer them to foreign countries or international organizations. Drawdowns give the President the ability to respond to U.S. foreign policy and national security objectives, such as counternarcotics efforts, peacekeeping needs, and unforeseen military and nonmilitary emergencies, by providing military assistance without first seeking additional legislative authority or appropriations from Congress. The Defense Security Cooperation Agency’s reports to Congress on the costs and delivery status of drawdowns are inaccurate and incomplete. Two principal problems contribute to the agency’s inability to meet the reporting requirements. First, its information system for recording drawdown data is outmoded and difficult to use—service drawdown reports are in different formats, and any conversion errors have to be manually corrected. Second, the services do not regularly provide updates to the agency on drawdown costs and deliveries, and available information sometimes does not get into the system. Drawdowns benefit the United States and foreign recipients primarily by providing the President the flexibility to address foreign policy and national security objectives quickly.
Drawdowns also allow the President to provide defense articles and services to improve foreign recipients’ capability to conduct military and police missions in support of U.S. foreign policy. Other benefits cited include improved military-to-military relations between the U.S. military services and the foreign recipients and expanded markets for U.S. defense firms. According to U.S. and foreign military officials, the use of drawdowns presents some concerns. Because drawdowns are used to quickly address U.S. national interests and emergencies, the costs associated with a drawdown, such as refurbishment and transportation, are not budgeted for by the services and are not reimbursed.
Transportation programs, like other federal programs, need to be viewed in the context of the nation’s fiscal position. Long-term fiscal simulations by GAO, the Congressional Budget Office, and others all show that despite a 3-year decline in the federal government’s unified budget deficit, we still face large and growing structural deficits driven by rising health care costs and demographic trends. As the baby boom generation retires, entitlement programs will grow and require increasing shares of federal spending. Absent significant changes to tax and spending programs and policies, we face a future of unsustainable deficits and debt that threaten to cripple our economy and quality of life. This looming fiscal crisis requires a fundamental reexamination of all government programs and commitments. Although the long-term outlook is driven by rising health care costs, all areas of government should be re-examined. This involves reviewing government programs and commitments and testing their continued relevance and relative priority for the 21st century. Such a reexamination offers an opportunity to address emerging needs by eliminating outdated or ineffective programs, more sharply defining the federal role in relation to state and local roles, and modernizing those programs and policies that remain relevant. We are currently working with Congress to develop a variety of tools to help carry out a reexamination of federal programs. The nation’s surface transportation programs are particularly ready for reexamination. This would include asking whether existing program constructs and financing mechanisms are relevant to the challenges of the 21st century, and making tough choices in setting priorities and linking resources to results. We have previously reported on the following factors that highlight the need for transformation of the nation’s transportation policy. Future demand for transportation will strain the network. 
Projected population growth, technological changes, and increased globalization are expected to increase the strain on the nation’s transportation system. Congestion across modes is significant and projected to worsen. National transportation goals and priorities are difficult to discern. Federal transportation statutes and regulations establish multiple, and sometimes conflicting, goals and outcomes for federal programs. In addition, federal transportation funding is generally not linked to system performance or to the accomplishment of goals or outcomes. Furthermore, the transportation program, like many other federal programs, is subject to congressional directives, which could impede the selection of merit-based projects. The federal government’s role is often indirect. The Department of Transportation (DOT) implements national transportation policy and administers most federal transportation programs. While DOT carries out some activities directly, it does not have control over the vast majority of the activities it funds. Additionally, DOT’s framework of separate modal administrations makes it difficult for intermodal projects to be integrated into the transportation network. Future transportation funding is uncertain. Revenues to support the Highway Trust Fund—the major source of federal highway and transit funding—are eroding. Receipts for the Highway Trust Fund, which are derived from motor fuel and truck-related taxes (e.g., truck sales), are continuing to grow in nominal terms. However, the federal motor fuel tax of 18.4 cents per gallon has not been increased since 1993, and thus the purchasing power of fuel tax revenues has eroded with inflation. Furthermore, that erosion will continue with the introduction of more fuel-efficient vehicles and alternative-fueled vehicles in the coming years, raising the question of whether fuel taxes are a sustainable source of financing transportation.
In addition, funding authorized in the recently enacted highway and transit program legislation is expected to outstrip the growth in trust fund receipts. Finally, the nation’s long-term fiscal challenges constrain decision makers’ ability to use other revenue sources for transportation needs. Recognizing many of these challenges and the importance of the transportation system to the nation, Congress established The National Surface Transportation Policy and Revenue Study Commission (Commission) in the Safe, Accountable, Flexible, Efficient Transportation Equity Act—A Legacy for Users (SAFETEA-LU). The mission of the Commission was, among other things, to examine the condition and future needs of the nation’s surface transportation system and short- and long-term alternatives to replace or supplement the fuel tax as the principal revenue source to support the Highway Trust Fund. In January 2008, the Commission released a report with numerous recommendations to place the trust fund on a sustainable path and to reform the current structure of the nation’s surface transportation programs. Congress also created the National Surface Transportation Infrastructure Financing Commission in SAFETEA-LU and charged it with analyzing future highway and transit needs and the finances of the Highway Trust Fund and recommending alternative approaches to financing transportation infrastructure. This Commission issued its interim report this past week, and its final report is expected by spring of 2009. In addition, various transportation industry associations and research groups have issued, or plan to issue in the coming months, proposals for restructuring and financing the surface transportation program. Through our prior analyses of existing programs, we identified a number of principles that could help drive an assessment of proposals for restructuring the federal surface transportation programs.
These principles include (1) defining the federal role based on identified areas of national interest, (2) incorporating performance and accountability for results into funding decisions, and (3) ensuring fiscal sustainability and employing the best tools and approaches to improve results and return on investment. Our previous work has shown that identifying areas of national interest is an important first step in any proposal to restructure the surface transportation program. In identifying areas of national interest, proposals should consider existing 21st century challenges and how future trends could have an impact on emerging areas of national importance—as well as how the national interest and federal role may vary by area. For example, experts have suggested that federal transportation policy should recognize emerging national and global imperatives, such as reducing the nation’s dependence on foreign fuel sources and minimizing the impact of the transportation system on global climate change. Once the various national interests in surface transportation have been identified, proposals should also clarify specific goals for federal involvement in the surface transportation program as well as define the federal role in working toward each goal. Goals should be specific and outcome-based to ensure that resources are targeted to projects that further the national interest. The federal role should be defined in relation to the roles of state and local governments, regional entities, and the private sector. Where the national interest is greatest, the federal government may play a more direct role in setting priorities and allocating resources as well as fund a higher share of program costs. Conversely, where the national interest is less evident, state and local governments, and others could assume more responsibility. 
For example, efforts to reduce transportation’s impact on greenhouse gas emissions may warrant a greater federal role than other initiatives, such as reducing urban congestion, since the impacts of greenhouse gas emissions are widely dispersed, whereas the impacts of urban congestion may be more localized. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program define the federal role in relation to identified areas of national interest and goals. To what extent are areas of national interest clearly defined? To what extent are areas of national interest reflective of future trends? To what extent are goals defined in relation to identified areas of national interest? To what extent is the federal role directly linked to defined areas of national interest and goals? To what extent is the federal role defined in relation to the roles of state and local governments, regional entities, and the private sector? To what extent does the proposal consider how the transportation system is linked to other sectors and national policies, such as environmental, security, and energy policies? Our previous work has shown that an increased focus on performance and accountability for results could help the federal government target resources to programs that best achieve intended outcomes and national transportation priorities. Tracking specific outcomes that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. In particular, substituting specific performance measures for the current federal procedural requirements could help make the program more outcome-oriented. 
For example, if reducing congestion were an established federal goal, outcome measures for congestion, such as reduced travel time, could be incorporated into the programs to hold state and local governments responsible for meeting specific performance targets. Furthermore, directly linking the allocation of resources to the program outcomes would increase the focus on performance and accountability for results. Incorporating incentives or penalty provisions into grants can further hold grantees and recipients accountable for achieving results. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program incorporate performance and accountability mechanisms. Are national performance goals identified and discussed in relation to state, regional, and local performance goals? To what extent are performance measures outcome-based? To what extent is funding linked to performance? To what extent does the proposal include provisions for holding stakeholders accountable for achieving results? To what extent does the proposal create data collection streams and other tools as well as a capacity for monitoring and evaluating performance? We have previously reported that the effectiveness of any overall federal program design can be increased by incorporating strategies to ensure fiscal sustainability as well as by promoting and facilitating the use of the best tools and approaches to improve results and return on investment. Importantly, given the projected growth in federal deficits, constrained state and local budgets, and looming Social Security and Medicare spending commitments, the resources available for discretionary programs will be more limited—making it imperative to maximize the national public benefits of any federal investment through a rigorous examination of the use of such funds.
The federal role in transportation funding must be reexamined to ensure that it is sustainable in this new fiscal reality. A sustainable surface transportation program will require targeted investment, with adequate return on investment, from not only the federal government, but also state and local governments, and the private sector. The user-pay concept—that is, users paying directly for the infrastructure they use—is a long-standing aspect of transportation policy and should, to the extent feasible and appropriate, remain an essential tenet as the nation moves toward the development of a fiscally sustainable transportation program. For example, a panel of experts recently convened by GAO agreed that regardless of funding mechanisms pursued, investments need to seek to align fees and taxes with use and benefits. A number of specific tools and approaches can be used to improve results and return on investment, including using economic analysis, such as benefit-cost analysis, in project selection; requiring grantees to conduct post-project evaluations; creating incentives to better utilize existing infrastructure; providing states and localities greater flexibility to use certain tools, such as tolling and congestion pricing; and requiring maintenance-of-effort provisions in grants. The suitability of the tool and approach used varies depending on the level of federal involvement or control that policymakers desire for a given area of policy. Using these tools and approaches could help surface transportation programs more directly address national transportation priorities and become more fiscally sustainable. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program ensure fiscal sustainability and employ the best tools and approaches to improve results and return on investment. To what extent do the proposals reexamine current and future spending on surface transportation programs?
Are the recommendations affordable and financially stable over the long term? To what extent are the recommendations placed in the context of federal deficits, constrained budgets, and other spending commitments, and to what extent do they meet a rigorous examination of the use of federal funds? To what extent do the proposals discuss how costs and revenues will be shared among federal, state, local, and private stakeholders? To what extent are recommendations considered in the context of trends that could affect the transportation system in the future, such as population growth, increased fuel efficiency, and increased freight traffic? To what extent do the proposals build in capacity to address changing national interests? To what extent do the proposals address the need to better align fees and taxes with use and benefits? To what extent are efficiency and equity tradeoffs considered? To what extent do the proposals provide flexibility and incentives for states and local governments to choose the most appropriate tool in the toolbox? The Commission makes a number of recommendations designed to restructure the federal surface transportation program so that it meets the needs of the nation in the 21st century. The recommendations include significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current programs more performance- and outcome-based and mode-neutral, among other things. We are currently analyzing the Commission’s recommendations using the principles that we have developed for evaluating proposals to restructure the surface transportation program. Although our analysis is not complete, our preliminary results indicate that some of the Commission’s recommendations address issues included in the principles.
For example, to make the surface transportation program more performance-based, the Commission recommends the development of outcome-based performance standards for various programs. Other recommendations, however, appear to be aligned less clearly with the principles. In its report, the Commission identifies eight areas of national interest and recommends organizational restructuring of DOT to eliminate modal stovepipes. In particular, the report notes that the national interest in transportation is best served when (1) facilities are well maintained, (2) mobility within and between metropolitan areas is reliable, (3) transportation systems are appropriately priced, (4) modes are rebalanced and travel options are plentiful, (5) freight movement is explicitly valued, (6) safety is assured, (7) transportation decisions and resource impacts are integrated, and (8) rational regulatory policy prevails. We and others have also identified some of these and other issues as possible areas of national interest for the surface transportation program. For example, at a recent forum on transportation policy convened by the Comptroller General, experts identified enhancing the mobility of people and goods, maintaining global competitiveness, improving transportation safety, minimizing adverse environmental impacts of the transportation system, and facilitating transportation security as the most important transportation policy goals. The Commission report also recommends restructuring DOT to consolidate the current programs and to eliminate modal stovepipes. We have also identified the importance of breaking down modal stovepipes. Specifically, we have reported that the modal structure of DOT and state and local transportation agencies can inhibit the consideration of a range of transportation options and impede coordination among the modes. 
Furthermore, in the forum on transportation policy, experts told us that the current federal structure, with its modal administrations and stovepiped programs and funding, frequently inhibits consideration of a range of transportation options at both the regional and national levels. Some of the Commission’s recommendations related to the national interest and the federal role also raise questions for consideration. Although consolidating and reorganizing the existing surface transportation programs, as the Commission recommends, could help eliminate modal stovepipes, it is not clear to what extent eliminating any of the existing programs was considered. Given the federal government’s fiscal outlook, we have reported that we cannot accept all of the federal government’s existing programs, policies, and activities as “givens.” Rather, we have stated that we need to rethink existing programs, policies, and activities by reviewing their results relative to the national interests and by testing their continued relevance and relative priority. It is not clear from the Commission’s report that such a “zero-based” review of the current and proposed surface transportation programs took place. The Commission also recommends an 80/20 cost-sharing arrangement for transportation projects under most programs—that is, the federal government would fund 80 percent of the project costs and the grantee (e.g., state government) would fund 20 percent. In addition, the Commission recommends that the federal government should pay 40 percent of national infrastructure capital costs. These proposed cost-share arrangements suggest that the recommended level and share of federal funding reflect the benefits the nation receives from investment in the project—that is, the national interest. However, the report offers no evidence that this is the case.
Rather, the proposed cost share arrangements appear to reflect the historical funding levels of many surface transportation programs without considering whether this level of funding reflects the national interest or should vary by program or project. For example, the Commission recommends that the federal government pay for 80 percent of the proposed intercity passenger rail system. However, we have found that the nation’s intercity passenger rail system appears to provide limited public benefits for the level of federal expenditures required to operate it, raising questions as to whether an 80 percent federal share is justified. The Commission proposes to make the surface transportation program performance- and outcome-based, and its recommendations include several performance and accountability mechanisms. In particular, the Commission recommends the development of national outcome-based performance standards for the different federal programs. The Commission recommends that states and major metropolitan areas also be required to include performance measures in their own transportation plans, along with time frames for meeting national performance standards. To receive federal funding, projects must be listed in state and local plans, be shown to be cost-beneficial, and be linked to specific performance targets. In addition, the Commission recognizes the importance of data in measuring the effectiveness of transportation programs and overall project performance and recommends that an important goal of the proposed research, development, and technology program be to improve the nation’s ability to measure project performance data. Although the Commission emphasizes the need for a performance- and outcome-based program, it is unclear to what extent some of the Commission’s recommendations are aligned with such principles. 
For example, the Commission recommends that overall federal funding be apportioned to states based on state and local transportation plans, rather than directly linking the distribution of funds to state and local governments’ performance in meeting identified national transportation goals. In addition, although the Commission recognizes the importance of data in evaluating the effectiveness of projects, the Commission does not recommend the use of post-project, or outcome, evaluations. Our previous work has shown that post-project evaluations provide an opportunity to learn from the successes and shortcomings of past projects to better inform future planning and decision making and increase accountability for results. The Commission recommends a range of financing mechanisms and tools as necessary components of a fiscally sustainable transportation program. These mechanisms include an increase in the federal fuel tax, investment tax credits, and the introduction of new fees, such as a new fee on freight and a new transit ticket tax. Experts at our forum on transportation policy also advocated the use of various financing mechanisms, including many of the mechanisms recommended by the Commission, arguing that there is no “silver bullet” for the current and future funding crisis facing the nation’s transportation system. The Commission also recognizes that states will need to use other tools to generate revenues for their share of the recommended increase in investment and to manage congestion. Therefore, the Commission supports fewer federal restrictions on tolling and congestion pricing on the interstate highway system and recommends that Congress encourage the use of public-private partnerships where appropriate. In addition, the Commission recognizes the growing consensus that, with more fuel-efficient and more alternative-fuel vehicles, an alternative to the fuel tax will be required in the next 15 to 20 years. 
To facilitate a transition to new revenue sources, the Commission recommends that Congress require a study of specific mechanisms, such as mileage-based user fees. It is unclear, however, whether some of the Commission’s recommendations are fiscally sustainable—over both the short and the long term—and encourage the use of the best tools and approaches. For example, the Commission recommends a substantial investment—specifically, $225 billion per year—in the surface transportation program by all stakeholders. However, the level of investment called for by the Commission reflects the most expensive “needs” scenario examined by the Commission, raising questions about whether this level of investment is warranted and whether federal, state, and local governments can generate their share of the investment in light of competing priorities and fiscal constraints. In addition, while much of the increased investment in the surface transportation program would come from increased fuel taxes and other user fees, some funding would come from general revenues. Such recommendations need to be considered in the context of the overall fiscal condition of the federal government. Finally, while the Commission recommends enhanced opportunities for states to implement alternative tools such as tolling, congestion pricing, and public-private partnerships, it also recommends that Congress place a number of restrictions on the use of these mechanisms, such as requirements that states cap toll rates (at the level of the CPI minus a productivity adjustment), prohibit the use of revenues for non-transportation purposes, avoid toll rates that discriminate against certain users, and fully consider the effect tolling might have on diverting traffic to other facilities. These potential federal restrictions must be carefully crafted to avoid undermining the potential benefits. 
In conclusion, the magnitude of the nation’s transportation challenges calls for an urgent response, including a plan for the future. The Commission’s report offers one way forward. Over the coming months, other options to restructure and finance the surface transportation program will likely be put forward by a range of transportation stakeholders. Ultimately, Congress and other federal policymakers will have to determine which option—or which combination of options—best meets the needs of the nation. There is no silver bullet solution to the nation’s transportation challenges, and many of the options, such as reorganizing a large federal agency or allowing greater private sector investment in the nation’s infrastructure, could be politically difficult to implement both nationally and locally. The principles that we identified provide a framework for evaluation. Although the principles do not prescribe a specific approach to restructuring, they do provide key attributes that will help ensure that a restructured surface transportation program addresses current challenges. We will continue to assist the Congress as it works to evaluate the various options and develop a national transportation policy for the 21st century that will improve the design of transportation programs, the delivery of services, and accountability for results. Madam Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information on this statement, please contact JayEtta Z. Hecker at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Elizabeth Argeris, Nikki Clowers, Barbara Lancaster, Matthew LaTour, Nancy Lueke, and Katherine Siggerud.

Long-Term Fiscal Outlook: Action Is Needed to Avoid the Possibility of a Serious Economic Disruption in the Future. GAO-08-411T. Washington, D.C.: January 29, 2008.
Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008.

A Call For Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: December 2007.

Highlights of a Forum: Transforming Transportation Policy for the 21st Century. GAO-07-1210SP. Washington, D.C.: September 2007.

Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007.

Surface Transportation: Strategies Are Available for Making Existing Road Infrastructure Perform Better. GAO-07-920. Washington, D.C.: July 26, 2007.

Highway and Transit Investments: Flexible Funding Supports State and Local Transportation Priorities and Multimodal Planning. GAO-07-772. Washington, D.C.: July 26, 2007.

Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007.

Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007.

Performance and Accountability: Transportation Challenges Facing Congress and the Department of Transportation. GAO-07-545T. Washington, D.C.: March 6, 2007.

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.

Fiscal Stewardship: A Critical Challenge Facing Our Nation. GAO-07-362SP. Washington, D.C.: January 2007.

Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006.

Freight Railroads: Industry Health Has Improved, but Concerns about Competition and Capacity Should Be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006.
Highway Finance: States’ Expanding Use of Tolling Illustrates Diverse Challenges and Strategies. GAO-06-554. Washington, D.C.: June 28, 2006.

Highway Trust Fund: Overview of Highway Trust Fund Estimates. GAO-06-572T. Washington, D.C.: April 4, 2006.

Highway Congestion: Intelligent Transportation Systems’ Promise for Managing Congestion Falls Short, and DOT Could Better Facilitate Their Strategic Use. GAO-05-943. Washington, D.C.: September 14, 2005.

Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005.

Highlights of an Expert Panel: The Benefits and Costs of Highway and Transit Investments. GAO-05-423SP. Washington, D.C.: May 6, 2005.

21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.

Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005.

Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004.

Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004.

Highways and Transit: Private Sector Sponsorship of and Investment in Major Projects Has Been Limited. GAO-04-419. Washington, D.C.: March 25, 2004.

The nation has reached a critical juncture with its current surface transportation policies and programs. 
Demand has outpaced the capacity of the system, resulting in increased congestion. In addition, without significant changes in funding mechanisms, revenue sources, or planned spending, the Highway Trust Fund--the major source of federal highway and transit funding--is projected to incur significant deficits in the years ahead. Furthermore, the nation is on a fiscally unsustainable path. Recognizing many of these challenges and the importance of the transportation system to the nation, Congress established the National Surface Transportation Policy and Revenue Study Commission (Commission) to examine current and future needs of the system and recommend needed changes to the surface transportation program, among other things. The Commission issued its report in January 2008. This testimony discusses (1) principles to assess proposals for restructuring the surface transportation program and (2) GAO's preliminary observations on the Commission's recommendations. This statement is based on GAO's ongoing work for the Ranking Member of this Committee, the Chairman of the House Transportation and Infrastructure Committee, and Senator DeMint, as well as a body of work GAO has completed over the past several years for Congress. GAO has called for a fundamental reexamination of the nation's surface transportation program because, among other things, the current goals are unclear, the funding outlook for the program is uncertain, and the efficiency of the system is declining. A sound basis for reexamination can productively begin with identification of and debate on underlying principles. Through prior analyses of existing programs, GAO identified a number of principles that could help drive an assessment of proposals for restructuring the federal surface transportation program. 
These principles include (1) defining the federal role based on identified areas of national interest, (2) incorporating performance and accountability for results into funding decisions, and (3) ensuring fiscal sustainability and employing the best tools and approaches to improve results and return on investment. GAO developed these principles based on prior analyses of existing surface transportation programs as well as a body of work that GAO developed for Congress, including its High-Risk, Performance and Accountability, and 21st Century Challenges reports. The principles do not prescribe a specific approach to restructuring, but they do highlight key attributes that will help ensure that a restructured surface transportation program addresses current challenges. In its report, the Commission makes a number of recommendations for restructuring the federal surface transportation program. The recommendations include significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance- and outcome-based and mode-neutral, among other things. GAO is currently analyzing the Commission's recommendations using the principles that GAO developed for evaluating proposals for restructuring the surface transportation program. Although this analysis is not complete, GAO's preliminary results indicate that some of the Commission's recommendations appear to be aligned with the principles, while others may not be aligned. For example, although the Commission identifies areas of national interest and recommends reorganizing the individual surface transportation programs around these areas, it generally recommends that the federal government pay for 80 percent of project costs without considering whether this level of funding reflects the national interest or should vary by program or project.
Before presenting additional preliminary results, I would like to provide some information on our scope and methodology. Specifically, we are interviewing key OWCP and Postal Service officials in Washington, D.C., to discuss and collect pertinent information regarding the employees’ claims for WCP eligibility and for compensation for lost wages and schedule awards. Additionally, we collected and reviewed a total of 483 Postal Service employee WCP case files located at the 12 OWCP district offices throughout the country. For the 12-month period beginning July 1, 1997, we randomly selected the claims and obtained case file records for injuries that occurred or were recognized as job-related during this period on the basis of the type of injury involved (traumatic or occupational) and on the basis of their approval or nonapproval for WCP benefits and compensation or schedule award payments. We chose this period of time because we believed it was current enough to reflect ongoing operations, yet historical enough for most, if not all, of the claims to have been decided upon. Also, in discussing the preliminary results, we generally present our analyses of claim processing times in terms of the “median” time to process cases covered by our review. This means that 50 percent of the cases were processed in the median time or less, and 50 percent of the cases were processed in more time than the median. We did our work from January to May 2002 in accordance with generally accepted government auditing standards. We have not had enough time to fully analyze all of the data we collected, including analyzing the total percentage of claims processed within specified processing standards, or to fully discuss the data with Postal Service or OWCP officials. Accordingly, we are limiting our discussion to median time intervals between the major steps in the WCP claims process up until the time of the decision on the claim and initial compensation payment. 
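The median statistic described above can be illustrated with a short calculation. The processing times below are made-up values for illustration only; they are not drawn from the case files covered by the review.

```python
from statistics import median

# Hypothetical claim processing times, in days (illustrative values only,
# not actual case file data).
processing_days = [12, 35, 48, 84, 90, 120, 307]

# Half of the claims were processed in the median time or less,
# and half took longer than the median.
print(median(processing_days))  # 84
```

Because a small number of very slow cases can pull an average upward, the median gives a more representative picture of the typical processing time for skewed data of this kind.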
Among other things, prior to this hearing, we did not have the time to (1) pinpoint and evaluate specific problems that may have affected the time to process the cases we reviewed, (2) address issues OWCP raised on how the claims processing times might be affected by “administrative closures” or schedule awards, or (3) evaluate numerous other factors that may have affected overall claims processing. Our work has not included an analysis of any time involved in the appeal process of any claim we reviewed, nor did we evaluate the appropriateness of OWCP’s decisions on approving or denying the claims. More detail about our sampling plan is presented in appendix I. Although OWCP is charged with implementing the WCP, there is a federal partnership between OWCP and the employing federal agencies for administering the WCP. In this partnership, federal agencies, including the Postal Service, provide the avenue through which injured federal employees prepare and submit their notice of injury forms and claims for WCP benefits and services to OWCP. Additionally, employing agencies are responsible for paying normal salary and benefits to those employees who miss work for up to 45 calendar days, during a 1-year period, due to a work-related traumatic injury for which they have applied for WCP benefits. After receiving the claim forms from the employing agencies, OWCP district office claims examiners review the forms and supporting evidence to decide on the claimant’s entitlement to WCP benefits or the need for additional information or evidence, determine the benefits and services to be awarded, approve or disapprove payment of benefits and services, and manage and maintain WCP employee case file records. If additional information or other evidence is needed before entitlement to WCP benefits can be determined, OWCP generally corresponds directly with the claimant or the WCP contact at the applicable Postal Service locations. 
OWCP regulations require that evidence needed to determine a claimant’s entitlement to WCP benefits meet five requirements. These requirements are as follows:

1. The claim was filed within the time limits specified by law.
2. The injured or deceased person was, at the time of injury or death, an employee of the United States.
3. The injury, disease, or death did, in fact, occur.
4. The injury, disease, or death occurred while the employee was in the performance of duty.
5. The medical condition for which compensation or medical benefits is claimed is causally related to the claimed job-related injury, disease, or death.

Such evidence, among other things, must be reliable and substantial as determined by OWCP claims examiners. If the claimant submits factual evidence, medical evidence, or both, but OWCP determines the evidence is not sufficient to meet the five requirements, OWCP is required to inform the claimant of the additional evidence needed. The claimant then has at least 30 days to submit the evidence requested. Additionally, if the employer–in this case, the Postal Service–has reason to disagree with any aspect of the claimant’s report, it can submit a statement to OWCP that specifically describes the factual allegation or argument with which it disagrees and provide evidence or arguments to support its position. According to the files we reviewed, about 99 percent of the Postal Service employees’ traumatic injury claims contained evidence related to the five requirements set by OWCP regulations. About 1 percent of the traumatic injury claims were not approved, according to the case files we reviewed, because evidence was not provided for one or more of the requirements. About 97 percent of the claims filed by Postal Service employees for occupational diseases contained evidence related to the five requirements. The remaining claims, or about 3 percent, did not include all of the required evidence. 
Generally, the evidence not provided for both types of claims pertained to either (1) the employee’s status as a Postal Service employee or (2) whether the claim was filed within the time limits specified by law. We did not evaluate OWCP’s decisions regarding the sufficiency of the information provided. During the period covered by our review, OWCP regulations required an employee who sustained a work-related traumatic injury to give notice of the injury in writing to OWCP using Form CA-1, “Federal Employee’s Notice of Traumatic Injury and Claim for Continuation of Pay/Compensation,” in order to claim WCP benefits. To claim benefits for a disease or illness that the employee believed to be work-related, he or she was also required to give notice of the condition in writing to OWCP using Form CA-2, “Notice of Occupational Disease and Claim for Compensation.” Both notices, according to OWCP regulations, should be filed with the Postal Service supervisor within 30 days of the injury or the date the employee realized the disease was job-related. Upon receipt, Postal Service officials were supposed to complete the agency portion of the form and submit it to OWCP within 10 working days if the injury or disease was likely to result in (1) a medical charge against OWCP, (2) disability for work beyond the day or shift of injury, (3) the need for more than two appointments for medical examination and/or treatment on separate days leading to time lost from work, (4) future disability, (5) permanent impairment, or (6) COP. OWCP regulations, during the period covered by our review, did not provide time frames for OWCP claims examiners to process these claims. Instead, OWCP’s operational plan for this period specified performance standards for processing certain types of WCP cases within certain time frames. 
Specifically, the performance standard for processing traumatic injuries specified that a decision should be made within 45 days of its receipt in all but the most complex cases. The performance standards for decisions on occupational disease claims specified that decisions should be made within 6 to 12 months, depending on the complexity of the case. The case files we reviewed indicated that the length of time taken to process a claim–from the date of traumatic injury or the date an occupational disease was recognized as job-related to the date the claimant’s entitlement to benefits was determined–varied widely. For example, we estimate that 25 percent of the claims were processed in up to 48 days for traumatic injury and in up to 78 days for occupational disease. We estimate that 90 percent of the claims were processed in up to 307 days for traumatic injury and in up to 579 days for occupational disease. Finally, we estimate that 50 percent of the claims were processed in up to 84 days for traumatic injuries and in up to 136 days for occupational disease. Specifically, Postal Service employee claims for injuries or diseases covered by our review took the median times shown in table 1 to complete. The median elapsed time taken by Postal Service employees and Postal Service supervisors met the applicable time frames set forth in OWCP regulations. As shown in table 1, the median time taken by Postal Service employees to prepare and submit the claim forms needed to make a determination on their entitlement to WCP benefits for traumatic injuries to the Postal Service supervisor was 2 days from the date of the injury, well within the 30-day time frame set by OWCP regulations. For occupational disease, Postal Service employees signed and submitted the notice of disease form to the Postal Service supervisor in a median time of 26 days from the date the disease was recognized as job-related, or 4 days less than the 30-day time frame set by OWCP regulations. 
Upon receipt, the Postal Service supervisor then took a median time of 11 calendar days–also within the time limit of 10 working days set forth in the regulations–to complete the form and transmit it to OWCP. Also as shown in table 1, once OWCP received the form from the Postal Service, our preliminary analysis showed that OWCP claims examiners processed these notice of injury forms for traumatic injuries in a median time of 59 days to determine a claimant’s entitlement to WCP benefits. As mentioned earlier, the performance standard for these types of cases was 45 days, or 14 days less than the median time taken. According to OWCP officials, the 59-day median processing time inappropriately included the time during which certain types of claims were “administratively closed,” then reopened later when a claim for compensation was received. We plan to determine the extent to which these types of claims may have affected the processing times as we complete our review. For occupational disease claims, the data showed that OWCP processed these forms in a median time of 63 days, which was within the 6- to 12-month time frame for simple to complex occupational disease cases specified by OWCP’s performance standards. During the period covered by our review, OWCP regulations stated that when an employee was disabled by a work-related injury and lost pay for more than 3 calendar days, or had a permanent impairment, the employer is supposed to furnish the employee with Form CA-7, “Claim for Compensation Due to Traumatic Injury or Occupational Disease.” This form was used to claim compensation for periods of disability not covered by COP as well as for schedule awards. The employee was supposed to complete the form upon termination of wage loss (if the period of wage loss was less than 10 days) or at the expiration of 10 days from the date pay stopped (if the period of wage loss was 10 days or more) and submit it to the employing agency. 
Upon receipt of the compensation claim form from the employee, the employer was required to complete the agency portion of the form and as soon as possible, but not more than 5 working days, transmit the form and any accompanying medical reports to OWCP. For the period covered by our review, OWCP regulations did not provide time limits for OWCP claims examiners to process these claims. Instead, OWCP’s annual operational plan for the period of our review specified a performance standard for processing wage loss claims. Specifically, the performance standard stated that all payable claims for traumatic injuries– excluding schedule awards–should be processed within 14 days. This time frame was to be measured from the date OWCP received the claim form from the employing agency to the date the payment was entered into the automated compensation payment system. No performance standard was specified for occupational disease compensation claims. The case file data showed that the processing time—from the date the claim for compensation was prepared to the date the first payment was made–varied widely. For example, we estimate that to process 25 percent of the claims, it took up to 28 days for traumatic injuries and up to 32 days for occupational diseases. To process 90 percent of the claims, it took up to 323 days for traumatic injuries and up to 356 days for occupational diseases. To process 50 percent of the claims, it took up to 49 days for the traumatic injuries and up to 56 days for the occupational diseases. Specifically, the median times to process the claims for compensation for the traumatic injury and occupational disease claims covered by our review are shown in table 2. The case files we reviewed did not contain the information that would have enabled us to determine whether the claims for compensation were prepared and filed by the employees within the time frame set forth by OWCP regulations. 
However, as shown in table 2, we found that, upon receipt of a prepared claim for compensation for a traumatic injury, the Postal Service supervisor completed the agency portion of the form and transmitted it to OWCP in a median time of 4 calendar days, which was less than the 5 working days required by OWCP regulations. For occupational disease compensation claims, we found that upon receipt of the claim form from the employee, the Postal Service supervisor took a median time of 7 calendar days, which was also within the 5 working day requirement imposed by OWCP regulations, to transmit the claims to OWCP. Also as shown in table 2, once OWCP received a traumatic injury compensation claim form, the median time for OWCP claims examiners to process the claim was 23 days, which was longer than the 14 days specified by OWCP’s performance standard–excluding schedule awards. However, our data included claims for schedule awards. As mentioned earlier, prior to this hearing we did not have time to evaluate the effect that schedule awards might have had on the median processing time. We plan to do so in our analysis for the final report. For occupational disease claims, our analysis showed that upon receipt, OWCP claims examiners took a median time of 22 days to make the initial payment for the approved claims. OWCP did not specify a performance standard for occupational disease claims. Finally, our preliminary analysis of case file data showed that during the time between the date of injury or recognition of a disease as job-related, injured employees often (1) continued working in a light-duty capacity, (2) received COP while absent from work, or (3) went on paid annual or sick leave until the time they actually missed work and their pay stopped. 
In fact, the data showed that the median elapsed time from the date the injury occurred or the disease was recognized as job-related to the beginning date of the compensation period was 98 days for traumatic injuries and 243 days for occupational disease claims. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact Bernard Ungar, Director, or Sherrill Johnson, Assistant Director, Physical Infrastructure Issues, at (202) 512-4232 and (214) 777-5699, respectively. In addition to those named above, Michael Rives, Frederick Lyles, Melvin Horne, John Vocino, Scott Zuchorsky, Maria Edelstein, Lisa Wright-Solomon, Brandon Haller, Jerome Sandau, Jill Sayre, Sidney Schwartz, and Donna Leiss made key contributions to this statement.

In fiscal year 2002, U.S. Postal Service employees accounted for one-third of both the federal civilian workforce and the $2.1 billion in overall costs for the Federal Workers' Compensation Program (WCP). Postal workers submitted half of the claims for new work-related injuries that year. Postal Service employees with job-related traumatic injuries or occupational diseases almost always provided the evidence required to make a determination on their entitlement. In two percent of the cases, the Office of Workers' Compensation Program (OWCP) found that evidence was missing for one or more of the required elements. However, the length of time taken to process claims varied widely even though all were subject to the same OWCP processing standards. OWCP claims examiners took 59 days to process traumatic injury claims after receiving the notice of injury claim forms from the Postal Service--a process that should take 45 days for all but the most complex cases, according to OWCP performance standards. 
The case files lacked the information necessary to determine whether the claims for compensation were prepared and filed by the employees within the time frame set by OWCP regulations. OWCP claims examiners took 23 days to process traumatic injury compensation claims for wage loss and schedule awards. OWCP's performance standard states that all payable claims should be processed within 14 days from the date of receipt.
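The medians cited above are in calendar days, while OWCP's transmittal standard is expressed in working days, so a 7-calendar-day interval can still satisfy a 5-working-day standard when it spans a weekend. A minimal sketch illustrates the conversion (the dates below are hypothetical, chosen only so the interval crosses a weekend):

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) after `start`, up to and including `end`."""
    count = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # weekday() 0-4 = Monday-Friday
            count += 1
    return count

# A claim received on a Friday and transmitted 7 calendar days later
# spans a weekend, so only 5 working days elapse:
received = date(2001, 6, 1)                 # a Friday (hypothetical)
transmitted = received + timedelta(days=7)  # the following Friday
print(working_days(received, transmitted))  # → 5
```

This is why the 7-calendar-day median for occupational disease claims can fall within the 5-working-day regulatory requirement.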
About 1.2 million years ago, a volcano erupted and collapsed inward, forming the crater now known as Valles Caldera, in north-central New Mexico (see fig. 1). Almost entirely surrounded by the Forest Service’s Santa Fe National Forest and the National Park Service’s Bandelier National Monument, this geologically and ecologically unique area covers about 89,000 acres of meadows, forests, hot springs, volcanic domes, and streams supporting elk herds, fish, and other wildlife. While in private hands, the Baca Ranch was operated as a working ranch, providing grazing for livestock plus hunting and fishing for a limited number of visitors. According to the Preservation Act, the working ranch arrangement was to continue after ownership was assumed by the federal government. The act also calls for the Trust to protect and preserve the land while attempting to achieve a financially self-sustaining operation. “Financially self-sustaining,” as defined by the act, means that management and operating expenditures—including trustees’ expenses; salaries and benefits; administrative, maintenance, and operating costs; and facilities improvements—are to equal or be less than proceeds derived from fees and other receipts for resource use and development. Appropriated funds are not to be considered. To carry out its duties, the Trust has the authority to solicit and accept donations of funds, property, supplies, or services from any private or public entity; negotiate and enter into agreements, leases, contracts, and other arrangements with any individual or federal or private entity; and consult with Indian tribes and pueblos. The Trust’s Board consists of nine trustees. The President of the United States appoints seven of these trustees, and the other two are the Supervisor of Santa Fe National Forest and the Superintendent of Bandelier National Monument, under the jurisdiction of the Department of the Interior’s National Park Service. 
Of the seven presidential appointees, who are selected in consultation with New Mexico’s congressional delegation, five must be New Mexico residents. Appointees are to be selected on the basis of their expertise or experience, as follows: one trustee each (1) with livestock and range management expertise; (2) with expertise in recreation management; (3) who is knowledgeable in sustainable management of forest lands for commodity and noncommodity purposes; (4) with expertise in financial management, budget and program analysis, and small business operations; (5) who is familiar with the cultural and natural history of the region; (6) who is active in a nonprofit conservation organization concerned with Forest Service activities; and (7) who is active in state or local government activities in New Mexico, with expertise in the customs of the local area. Trustees are appointed to 4-year terms and can be reappointed; no trustee, however, may serve more than 8 consecutive years. The trustees select a chairman from the Board’s ranks. With the exception of the Board Chair, trustees serve without pay, although they are reimbursed for travel and subsistence expenses while performing their duties. The Board must hold at least three public meetings a year in New Mexico. An executive director, who is hired by the Board, oversees the Trust’s day-to-day operations. Although the Trust has taken steps to establish and implement a number of programs and activities to achieve the goals of the Preservation Act, it is behind the schedule it set for itself in 2004. A number of factors, such as high turnover among Board members and key management staff, have contributed to this slow progress, according to former and current Board members and staff. As we reported in 2005, the Board’s first steps were to establish a basic organization and to acquaint itself with conditions at the preserve. 
In 2001, the Board held regular meetings and listening sessions with the public and gathered views on how the preserve should be managed. The Board hired its first employee, an executive director, in October 2001 and, in December 2001, issued 10 principles to guide future decision making. These principles focused on a long-term view, emphasizing the ideas of landscape protection, sound business management and good-neighbor relations, the role of science in defining programs, and the quality of experiences to be provided to the public at the preserve. Overall, these principles constituted the Trust’s initial philosophy and foundation for the programs and activities that the Trust undertook to fulfill the Preservation Act’s goals. The following sections describe some of the Trust’s accomplishments. Shortly after the federal government assumed ownership of the preserve, the Trust learned that the existing infrastructure—such as roads, buildings, fences, and water treatment facilities—was in disrepair and needed rehabilitation. All the roads needed upgrading, fences were falling down, rodents had invaded all the structures, and the water supply system was not functioning. Work began immediately, and it continues today. The preserve has about 1,000 miles of roads, including 140 miles of main access roads. Road building into the preserve began in 1935, and by the 1970s, more than 800 miles of logging roads had been bulldozed into high-elevation forests, causing erosion and damaging downhill streams and wetland areas (see fig. 2). On assuming its management role, the Trust determined that the existing roads could not be readily used to support administration, ranching, recreation, or other needs. Since then, the Trust has upgraded over 14 miles of road to all-weather gravel standards, so they are usable for passenger vehicles and are not as environmentally damaging.
To enhance safety and public viewing of the preserve, the Trust also installed kiosks, scenic turnouts, and a new gate (see fig. 3); in addition, it reconfigured the entry to and exit from New Mexico Highway 4, the main access road to the preserve. The Trust has systematically numbered and mapped a network of about 184 miles of roads, which provide open public access, as well as restricted access for the Trust’s land management activities. At the time of the federal government’s purchase, the preserve had numerous existing buildings, fences, and other structures. In 2002, the Trust recognized that the majority of its structures needed major restoration to bring them up to local building codes. Over the next 6 years, the Trust conducted minor maintenance on the ranch buildings used to house employees and documented the condition of structures of historic value throughout the preserve (see fig. 4). In addition, the Trust repaired the preserve’s 54 miles of boundary fences—including adjusting their height to allow for elk movement—and installed signs restricting access to the preserve. The Trust also assessed the layout and condition of 64 miles of interior fences, many of which were used to separate pastures for livestock. Other facilities, such as livestock corrals, have also been assessed and rehabilitated, and in 2009 a new temporary visitor building was purchased and placed on site (see fig. 5). Regarding water supplies, when the federal government acquired the preserve, the existing water treatment facility was not functioning, so no potable water was available. Rehabilitating this facility became one of the Trust’s top priorities. Repairs to the water collection and filtration systems were completed in 2004, the water distribution system was repaired in 2005, and potable water became available in spring 2006. 
Still, the present water supply freezes during the winter and can dry up during the summer; the Trust is therefore evaluating groundwater reserves and options for drilling a well to supply water year-round. In the end, rehabilitating deteriorating infrastructure has proven to be an expensive and time-consuming endeavor, and the Trust’s efforts have not begun to address capital improvements, such as permanent visitor facilities or roads in support of the Preservation Act’s goals. Indeed, as of 2008, the Trust still faced nearly $1.2 million in deferred maintenance costs for existing buildings alone. From the time it first articulated the principles by which it would manage Valles Caldera, the Trust viewed science as key to protecting and preserving the land while developing programs that could bring in revenue. It committed to using science in an “adaptive management” framework, by continuously gathering and applying site-specific scientific knowledge. According to the Trust’s Framework and Strategic Guidance for Comprehensive Management, the chief characteristic of the Trust’s view of adaptive management is the monitoring of natural systems and the human activities impinging on those systems, coupled with use of the monitoring information to guide and, when needed, revise management goals and activities. Thus, according to Trust documents, the Trust makes land management decisions on the basis of scientific research and monitoring, taking into account the public’s views and federal environmental requirements. The science program includes three components: inventorying natural resources, monitoring environmental changes resulting from the Trust’s programs, and conducting research that will help manage the preserve’s resources. Up and running in 2003, this program assists the Trust in complying with federal environmental requirements, including those of the National Environmental Policy Act of 1969 (NEPA). 
By 2008, the Trust had assessed or was assessing most of the preserve’s natural resources, such as its forests, biodiversity, watershed and stream health, fish habitats, groundwater quality, and geology and soils. In inventories of cultural resources, the Trust has also uncovered over 430 historic and archaeological sites. Such inventories will continue to be done as needed before construction projects or other ground disturbance to comply with NEPA guidelines. In addition, to assess the effects of activities such as grazing, recreation, or forest thinning, the Trust has established long-term programs to monitor ecological conditions, including climate, stream water quality, and plant and animal habitat and population dynamics. Finally, in collaboration with universities, federal and state agencies, and other research entities, the Trust has hosted research programs ranging from a study of the ecological drivers of rodent-borne diseases to earth-coring studies of past climate change. For example, hydrological research funded by the National Science Foundation through the University of Arizona is to provide information to aid in the day-to-day management of the preserve and also contribute to the understanding of hydrologic systems overall. This research should help scientists understand how much precipitation the preserve’s lands absorb and predict the amount of runoff into its streams and rivers. As more data become available, scientists may be able to forecast the effect of precipitation and drought on water quality and forage availability on the preserve and to use the information to drive future management decisions about livestock and recreation. Each year the Trust has generated between $1 million and $2 million of externally funded research.
To further enhance and communicate the results of the science program, the Trust in August 2009 leased a facility in the town of Jemez Springs, 20 miles west of the preserve’s main gate, as a new science and education center adjacent to the Trust’s administrative headquarters. The facility is to accommodate a laboratory, classrooms, offices, a dining hall, and lodging for visitors participating in the center’s formal and informal science education programs for all age groups. Given that the Preservation Act requires keeping the preserve as a working ranch, grazing has been a central activity since the Trust began. Over the years, the grazing program’s objectives, scope, and size have changed repeatedly, in response to annual scientific assessments of forage availability, as well as shifting directives from the Board. In addition, because the preserve is federal land, continued grazing requires completion of a NEPA environmental assessment. The Trust’s ultimate goal is to manage its livestock operations for multiple aims, including revenue generation, local community benefit, research, and public education. To date, the Trust has experimented with a number of grazing programs, beginning in 2002 with a small drought-relief program that allowed just over 700 cow-calf pairs belonging to local ranchers to graze on preserve pastures for 5 weeks. The Trust also hosted a “conservation stewardship” program for local ranchers, allowing about 200 cattle in each of 2 years to graze on preserve lands for about 4 months while the ranchers implemented conservation measures on their own lands. In addition, the Trust has conducted a breeding program for 3 years to benefit local ranchers and has tested varied cattle management approaches in an attempt to make the program profitable for the Trust. 
In 2006, because of drought, the Trust switched its focus to research assessing the effects on cattle forage of controlled burning of the grasslands; initial findings suggested that such burning improved forage quality. Then in 2008, the Trust attempted to make a profit from grazing, allowing nearly 2,000 head of cattle to graze at the preserve over a 4-month period and generating about $58,000 in gross revenues. Because the cattle were brought in from Mexico and were sold in Texas, this effort drew local criticism. Moreover, the sheer number of cattle created conflicts with fishing and other recreational activities. In 2009, the program again took on a research emphasis and aimed to benefit local communities. The preserve’s lands encompass more than 60,000 forested acres. When the Trust was first established, these forests were envisioned as a possible source of revenue toward the Preservation Act’s purpose of providing for the multiple use and sustained yield of the preserve’s renewable resources. But the Trust’s forest inventory in 2006 revealed a lack of marketable timber, partly because of intensive logging in the past. As a result of this logging and past fire suppression, about half the preserve’s forested acres contain dense vegetation that poses a very high risk of wildland fire. To date, therefore, the Trust’s forest management efforts have focused on restoring forest health, reducing the risk of large fires, and protecting watersheds. These efforts have also included identifying the most effective means of reducing hazardous fuels and a potential market for the sale of wood products (poles, mulch, pellets), sometimes in collaboration with local businesses. Beginning in 2002, the Trust granted the public limited access to the preserve for recreation; in most cases, it has charged a fee for this access. In the beginning, public recreation was confined to guided hikes or van tours.
Over the next several years, the Trust allowed varied summer and winter activities, including the following:

Hunting. The Trust has worked with New Mexico’s Department of Game and Fish to hold elk hunts since 2002. In 2008, the Trust added a spring turkey hunt.

Fishing. In 2003, the Trust granted 1,785 people access to the preserve’s two fishable streams, on a first-come, first-served basis. The Trust also holds adult and youth fishing clinics. In 2009, it began allowing anglers to drive their own vehicles to parking areas near assigned stream reaches, instead of providing van transportation as in previous years.

Hiking. Visitors have been allowed to hike at the preserve since 2002, first in guided hikes, then on their own. The Trust has increased the number and mileage of available hiking trails, opening about 30 miles of trails to hikers, including 5 miles requiring no fee.

Other recreational activities. The Trust has also offered horse-drawn wagon rides, sleigh rides, van tours, snowshoeing, cross-country skiing, stargazing lectures, horseback riding, marathon runs, mountain biking, group tours and seminars, workshops, antler collection, and overnight photographic and birding excursions. In 2006, the Trust also hosted its first free open house, which drew more than 1,400 cars and nearly 4,000 people. The Trust used this event to inform the public about then-current programs and future opportunities and to monitor the effects of so many visitors.

Since 2008, the preserve has been open 7 days a week from April through September for summer recreation and events and fewer days the rest of the year to accommodate hunting and winter activities. The Preservation Act’s findings and purposes section states, among other things, that the Baca Ranch could serve as a model for sustainable land development and use of timber, grazing, and recreation and that management of the ranch through a trust would eventually allow the ranch to become financially self-sustaining.
Over its existence, the Trust has recognized that it has no marketable timber, but it has experimented with a number of grazing options and expanded recreational opportunities. Collectively, from 2005 through 2008, the Trust’s grazing, recreation, and other activities have generated, on average, about $733,000 in gross revenues per year (see table 1). In comparison, from 2000 through 2009, the Trust received nearly $31 million in federal funding—an average of about $3.5 million per year over the time frame. Faced with average gross revenues amounting to about 20 percent of average federal funding, the Board of Trustees contracted with an independent consulting firm in 2008 to develop a revenue enhancement study aimed at realizing annual revenues of about $5 million. Made public in April 2009, this document details various options for generating revenues of this scale and bringing the Trust to financially self-sustaining status by the end of fiscal year 2015. These options include high-end elements such as a luxury lodge, as well as more modest elements such as tent camps. The options could be mixed and matched to produce a plan that the Trust could use as it decides how to further develop infrastructure and public programs at the preserve. According to the Trust, many of the options described in this document are to be incorporated into the alternatives the Trust is evaluating in preparing the environmental analyses called for by NEPA before it can provide for greater public access and use of the preserve. The Trust has not met the timeline that it set for itself to meet the Preservation Act’s goals, as outlined in a required report to Congress in 2004. The timeline called for achieving financially self-sustaining status in three phases over 15 years, a schedule reiterated in the Trust’s 2005 Framework and Strategic Guidance. Phase 1, institution building, was to take place from 2001 through 2005.
During this phase, the Trust was to develop the staff and tools needed to manage the preserve as a wholly owned government corporation, including accounting systems and support mechanisms for its science-based adaptive management approach. No new roads or facilities were to be constructed during phase 1; rather, all public programs were to use existing infrastructure and temporary buildings and would therefore not require a full environmental assessment or environmental impact statement under NEPA. Phase 2, program development, was to take place from 2005 through 2010. During phase 2, the Trust envisioned completing NEPA analyses for major infrastructure projects and beginning construction for an array of programs, such as an integrated road and trails system, an interpretive center, and a science and education facility. Phase 3, program refinement, was to unfold from 2010 through 2015. During phase 3, the Trust planned to cultivate additional sources of funds and streamline programs to permit decreasing reliance on federal appropriations as revenue-generating programs expanded. It was believed that the experience gained in the prior phases would enable the Trust to increase revenues and decrease costs in time to be self-sustaining by the end of fiscal year 2015. As of September 2009, only the science and grazing programs at the preserve have moved into phase 2 of the Trust’s envisioned timeline. The Trust’s publication in 2003 of its own NEPA regulations and its adaptive management framework marked the passage of the science program into phase 2. With completion of a forage environmental assessment in January 2009, the grazing program moved into phase 2. For recreation and associated infrastructure development to move into phase 2, a public use and access plan including NEPA compliance—which is due in mid-2010—must be completed. For the Trust’s forest management program, too, a NEPA analysis will have to be done to move into phase 2.
Thus, at the close of fiscal year 2009, the Trust continued to work mostly on phase 1 of its programs and activities—at least 5 years behind its anticipated schedule (see fig. 6). Current and previous Trust Board and staff members have all identified certain factors as contributing most significantly to delays in the Trust’s progress. Key among these factors is high turnover among Board members. Under the Preservation Act, at least three Board positions are up for appointment every 2 years. In addition, members may resign for personal reasons before completing their term of appointment, and the two ex officio Board members from the Forest Service and the Park Service may change according to how they are assigned within their own agencies. A time lag—ranging from 2 to 9 months—inevitably occurs between the end of some members’ terms and the beginning of others’. Thus, it can take months before a full Board is seated once again. New members face a learning curve. Such frequent turnover has led to delays in decision making, as well as false starts to programs. For example, an environmental assessment that needed to be completed before permanent livestock operations could be put in place was restarted three times before it was finally completed in 2009, largely because of Board turnover. The Trust has also experienced high turnover among key management staff. Within its first 7 years, nine people served as acting executive director or executive director; the most recent executive director reported for duty in January 2009. The chief administrative officer position also turned over four times. In addition, the position of communications manager—key to the Trust’s obligation to communicate and collaborate with the public—remained vacant for 3 years, until 2009.
Among the Trust’s key management staff, only the preserve general manager, who is responsible primarily for the preserve’s natural resources, infrastructure, and recreational programs, and the preserve science and education director, who is responsible for and has developed the science and education programs, have been with the Trust since they were first hired, in 2002 and 2003, respectively. In addition, according to the Trust’s Board and staff, they discovered upon assuming their responsibilities that the preserve’s cultural and natural resources and infrastructure were not as healthy or robust as they had expected or as described in the opening to the Preservation Act. For example, road building and timber cutting in high-elevation forests had been done since the early 1930s, and streamside and other areas had been damaged by logging roads and overgrazing. Forests clear-cut in the 1960s and 1970s had been replaced by dense stands of young trees that provide little marketable timber and present a wildland fire hazard. Further, the act directed the Trust to open the preserve for public recreation within 2 years after the federal government purchased the land. As a result, the Trust found itself with more ecological restoration and infrastructure rehabilitation to do than expected—even while providing public access to the preserve—almost immediately after it assumed active management of the land in August 2002. Finally, almost everyone we interviewed observed that one or more of the foregoing factors contributed to the Trust’s inability to focus on establishing itself as a fully functioning government corporation, which in turn exacerbated the effects of Board and staff turnover. Ultimately, these shortcomings raised serious concerns among interest groups and the public about whether the Trust could successfully manage the preserve in the manner envisioned by the Preservation Act. 
As of September 2009, the Trust had yet to develop and put in place several key elements of an effective management control program for a government corporation, as required under GPRA and as we recommended in our previous report. Specifically, the Trust had not clearly defined a long-term strategic plan, developed annual performance plans, or systematically monitored and reported its progress. Additionally, the Trust’s financial management has been weak. Consequently, it has been difficult for Congress and the public to understand the Trust’s long-term goals and objectives, annual plans and performance, or progress. For government agencies and corporations, GPRA and GCCA specify the means to achieve an effective management control program. That is, they establish a framework for government entities to provide reasonable assurance that an organization’s operations are effective and efficient, that its financial reporting is reliable, and that the organization is complying with applicable laws and regulations. This framework includes, among other components, (1) a strategic plan with long-term, measurable goals and objectives; (2) annual performance plans for achieving the strategic plan’s goals and objectives; (3) performance monitoring and reporting; and (4) annual management reviews and financial audits. Such plans, methods, and procedures are collectively known as internal, or management, controls. Under GPRA, a federal agency is required to develop a strategic plan that covers a period of at least 5 years, to be updated every 3 years, and includes the agency’s mission statement, identifies its long-term strategic goals and objectives, describes strategies to achieve those goals and objectives, explains the relationship between long-term and annual goals, analyzes key external factors, and specifies how and when program evaluations will be conducted.
GPRA further requires each agency to submit an annual performance plan, which must establish performance goals that link the goals of the agency’s strategic plan directly with managers’ and employees’ day-to-day activities. In essence, this plan is to set forth the yearly performance goals the agency will use to gauge progress toward the strategic goals, identify the performance measures the agency will use to assess its progress, explain the procedures the agency will use to verify and validate its performance data, and tie these goals and measures to the processes and resources the agency will use to meet performance goals. In addition, GPRA requires agencies to report each year, usually to the President and Congress, on program performance for the previous fiscal year. This annual performance report should describe the performance indicators established in the agency’s annual performance plan and the actual program performance achieved compared with the performance goals. It should also explain why a performance goal has not been met and set forth plans for achieving it. Finally, the report should also summarize the year’s program evaluations and findings. Key steps and critical practices for GPRA implementation include involving stakeholders in defining missions, plans, and outcomes; producing key results-oriented performance measures at each level of the agency or organization; and using the results of measuring past performance to inform future planning. Under GCCA, a government corporation must submit annual management reports to Congress, including statements of financial position, operations, and cash flow; a budget report reconciliation; a report summarizing the results of an annual financial audit; and other information about operations and financial status. GCCA also requires that the corporation’s financial statements be independently audited in accordance with generally accepted government auditing standards.
Finally, under the Preservation Act, the Trust is required to report annually to Congress on its activities. These reports are to be “comprehensive and detailed report of operations, activities, and accomplishments for the prior year, including information on the status of ecological, cultural, and financial resources . . . and benefits provided by the Preserve to local communities” and “shall also include a section that describes the Trust’s goals for the current year.” The law also requires preparation of an annual budget. We reported in 2005 that the Trust lacked a GPRA-compliant strategic plan and recommended that it develop such a plan. Although the Trust agreed with our recommendation, it still did not have a plan in place as of September 2009. The Trust has, however, produced two documents (one of them in response to a previous recommendation from us) that offer some strategic guidance, although neither of these meets GPRA requirements or was used as a formal strategic plan. The first guidance document was the 2005 Framework and Strategic Guidance for Comprehensive Management, which presents the values and vision the Trust was to apply in making management decisions. The document articulates the Trust’s commitment to the various goals of the Preservation Act, including operating the preserve as a working ranch according to principles of science-based adaptive management, striving toward financial self-sufficiency, and making the preserve accessible to visitors. As we observed in our 2005 report, the 187-page document describes, among other things, the preserve’s history and natural features; the Trust’s approach to decision making; and public involvement at the preserve, including a range of potential public uses, from hunting and fishing to hiking and camping. 
The second guidance document, a November 2006 “Strategic Planning Document,” sets out the Trust’s mission: The mission of the Valles Caldera Trust is to operate the preserve as a working ranch; to become financially self-sustaining; to meet the varied needs of visitors; to utilize and steward the multiple resources of the preserve; and to work collaboratively with our neighbors. The document also outlines six goals—which the Trust labeled alternately as “actions” or “near-term goals”—each accompanied by a desired outcome, objectives, strategies or actions, and metrics. For example, one of the six near-term goals is to evaluate existing facilities and identify needs for additional infrastructure; eight strategies and actions are given for achieving the objectives for that goal. The desired outcome is “identification of essential infrastructure” to support operations and “achievement of financial self-sustainability,” and one of the objectives is to improve the entrance to the preserve and visitor service center. To fulfill this objective, the document states that the Trust will engage a contractor to design and improve the preserve’s entrance and gives as the metric for measuring progress the completion of a new preserve entrance during fiscal year 2007. Both the 2005 and 2006 documents fall short of GPRA’s requirements for effective strategic planning in a number of respects. For example, despite its broad and philosophical articulation of the Trust’s guiding principles—essentially, the Trust’s vision and mission—the 2005 Framework and Strategic Guidance does not meet GPRA’s requirements for a formal and detailed strategic plan. Indeed, title aside, this document never claims to be a formal strategic plan.
In its own words, the document does “not intend to present a blueprint for future management of the preserve” but rather to sketch “the range of possible programs the Trust will consider implementing in pursuit of goals.” Likewise, although the 2006 “Strategic Planning Document” combines elements of strategic planning (mission statement, goals, and objectives) with elements of annual performance plans (actions and metrics), it does not cover a 5-year period, has not been updated, does not explain the relationship between long-term and annual goals, does not analyze key external factors, and does not specify how and when program evaluations are to be conducted. Furthermore, according to Trust officials and senior staff, the document was drafted and approved by the Trust’s Board without benefit of guidance or assistance from stakeholders, such as Congress and the public, as expected under GPRA; neither did the Board specifically instruct the staff to implement the actions or monitor the metrics. By failing to develop a strategic plan from the beginning of its operation of the preserve in 2002, as well as failing to craft and adopt a formal strategic plan later, the Trust lost an opportunity to move forward systematically as an institution—independent of personnel turnover in either the Board or staff—toward meeting the Preservation Act’s goals. In September 2009, recognizing the value of better strategic planning, Trust officials told us they were planning to work to develop a GPRA-compliant plan with an outside consultant experienced in developing strategic plans for federal agencies. Since its beginning, the Trust has failed to fully meet GPRA’s annual performance planning, monitoring, or reporting requirements. 
The Trust has not put together formal annual performance plans containing either specific performance goals for the next fiscal year—goals tied directly to any strategic goals stated in the 2005 Framework and Strategic Guidance or November 2006 strategic planning document—or any performance measures or related information for monitoring its progress. Under GPRA, an annual performance plan must establish yearly performance goals linked to long-term goals of a strategic plan; identify performance measures that will be used to gauge progress toward meeting long-term strategic goals; explain the methods to be used for validating and verifying performance data; and link the goals and measures with the processes and resources, such as staffing and funding, that will be used to meet the performance goals. The only documents that the Trust has produced to date that begin to address these requirements are its 2006 strategic planning document and fiscal year 2008 annual report to Congress. While not labeled an annual performance plan, the 2006 strategic planning document does identify “near-term” (performance) goals and metrics (performance measures) for fiscal year 2007, as well as for fiscal years 2008 and 2009. These goals and metrics, however, are not linked to any long-term strategic goal, as required by GPRA, nor does the planning document meet other GPRA requirements for annual performance plans. In addition, although the Trust’s fiscal year 2008 annual report to Congress identifies goals for the upcoming 2009 fiscal year, along with metrics, neither the goals nor the metrics are linked to any long-term strategic goal or strategy for achieving such a goal. Neither are other requirements for annual performance plans addressed in this annual report. Although the Trust’s fiscal year 2007 annual report identifies 2008 performance goals, without metrics, annual reports before 2007 do not identify either performance goals or metrics for the next fiscal year. 
In monitoring its performance, the Trust has not established or monitored a stable set of quantitative indicators of progress over time. In its annual reports to Congress, the Trust summarized the past year’s accomplishments and mentioned its intentions for the future, sometimes quantitatively but more often qualitatively. For example, an early two-page report for fiscal year 2004 lists as one preserve goal to “manage public use, access to and the occupancy of the preserve” and notes an accomplishment under this goal as completing a road inventory of 76 miles. The Trust’s plan, as stated, was to use this inventory to develop a transportation plan that was to begin in fiscal year 2007 and be completed in fiscal year 2008; development of this plan was labeled very high priority. But no methods or indicators for tracking the progress of this transportation plan were given. Moreover, although the transportation plan was supposed to begin in 2007 and be completed by 2008, reference to the plan in the Trust’s 2007 annual report to Congress is essentially identical to the wording in its 2006 annual report, and to date, no transportation plan has been developed. Similarly, for the Preservation Act’s goal of achieving financially self-sustaining operations, the Trust’s plan as stated in its 2004 annual report says only that it will implement financially sound business practices, develop and implement a business plan incorporating an annual budget tied to a plan of work for 5 years, and revise this business plan annually; again, the assigned priority is “very high.” Nevertheless, our review of Trust documents found that progress toward implementing these very high-priority plans was not formally monitored, nor were the plans fully executed. 
In fact, the 2005 annual report copies the wording of the 2004 report with respect to development of a business plan, the 2006 annual report makes no mention of a business plan, and the 2007 annual report lists developing a strategic business plan as one of its goals for 2008. Because it has not developed annual performance plans with performance goals, the Trust has not produced formal annual performance reports as required by GPRA. Since 2006, however, annual reports required by the Preservation Act, as well as a 5-year State of the Preserve report released in 2007, detail the Trust’s operations, activities, and prior year’s accomplishments, including the status of the preserve’s natural, cultural, and financial resources and benefits to local communities. While the Trust’s annual reports before 2006 did not address all these elements, the reports have improved over the years, becoming more detailed and comprehensive. The most recent annual report, for fiscal year 2008, contains major sections devoted to attainment of fiscal year 2008 goals; Trust organization, program accomplishments, and budget; and goals for fiscal year 2009. Each section on fiscal year 2008 goals attained (e.g., develop a strategic business plan) states the goal’s objective (e.g., “to create a business plan that identifies options to generate revenues from programs”); gives the status of progress (e.g., the Trust awarded a contract to a consulting firm to develop this business plan); and offers a brief narrative related to the goal. With respect to goals for 2009, the report states each goal along with a statement of its objective, metric for measuring progress, and related narrative. This annual report and previous ones do not, however, report on the status of current year goals that were not attained or link back to a long-term strategy. 
The evolution of the Trust’s reports suggests a growing understanding within the organization of the need for key management elements, such as strategic goals, annual performance goals and plans, and measurable performance indicators. Our review of the annual reports nevertheless revealed a lack of consistency in report format, organization, and content from year to year, particularly in relation to measurable indicators of progress. For example, before 2007 the Trust counted and reported only the number of paying visitors to the preserve. In 2007, however, it began to include nonpaying visitors in visitor counts—a key change for understanding the growth in Trust programs. Yet this change in data collection was never explicitly pointed out in the 2007 annual report. Furthermore, given the absence of links in any of these reports directly to metrics listed in the Trust’s November 2006 strategic planning document, it is difficult to follow the progress of one year’s “plan” through subsequent years or to systematically track the Trust’s progress toward accomplishing the Preservation Act’s overarching goals. Compounding the absence of systematic strategic planning and routine performance planning, monitoring, and reporting, the Trust’s financial management has suffered from varied and numerous weaknesses. From when the Trust first took over management of the preserve through fiscal year 2003, the Trust’s finances were administered by the Forest Service. At the beginning of fiscal year 2004, the Trust briefly attempted to do its own accounting in house. When this attempt failed, however, partly because of turnover in accounting staff, it shifted these functions to the Department of the Interior’s National Business Center, which provided accounting services from fiscal year 2004 until fiscal year 2008. 
At the start of fiscal year 2008, the Trust once again moved its accounting operations, to the Forest Service’s Albuquerque Service Center, so as to bring its finances under a single, integrated financial management system and to reduce costs. In part because of poor financial management and accounting practices, inadequate records, and difficulties in hiring and retaining accounting staff, until 2007 the Trust could not produce financial statements that would have enabled it to fulfill its obligation to undergo an annual independent audit, as required by GCCA. As we reported in 2005, the Trust contracted in 2003 with an independent accounting firm for auditing services, but the firm recommended that the audit be postponed because the Trust lacked the financial policies, procedures, and records needed to produce auditable financial statements. It took several years for the Trust to reconstruct its financial transactions and prepare any auditable statements. At the end of 2007, an independent auditing firm was contracted. The firm completed its work in 2009, producing independent auditor’s reports for fiscal years 2005 through 2008. The auditor’s reports found numerous weaknesses in the Trust’s accounting, management control, and compliance with applicable laws and regulations. For example, the audit report for fiscal year 2008 found “material weaknesses” and “significant deficiencies” ranging from a lack of documented policies and procedures to the lack of a secure information technology system and failure to properly process cash and check payments. Consequently, according to the auditor’s report, decisions made by the Trust on the basis of deficient information could themselves be inaccurate or misleading. Moreover, because the Trust had not identified such deficiencies, it could not and did not report them to Congress. Among its other findings, this report also confirmed the lack of performance goals and objectives in compliance with GPRA requirements. 
The audit reports for all the audited fiscal years thus cast considerable doubt on the accuracy and completeness of the Trust's annual or other reports to date and its degree of compliance with applicable laws. As a result of the auditor's reports, the Trust has made an effort to improve its management control framework. In July 2009, for example, the Trust asked the Albuquerque Service Center to conduct an "internal control assessment" of the Trust's operations, which the center had begun to do as of the end of fiscal year 2009. Once completed, this assessment could help improve the Trust's management controls. In managing a remote, undeveloped expanse of public land under the public-private experiment created by the Preservation Act, the Trust is breaking new ground. In accordance with the act's goals, the Trust is responsible for preserving and protecting the preserve's resources while generating revenues from these resources. The long-term vision articulated in the Preservation Act is for the Trust to become a self-sustaining entity, without need for federal funding. Yet the current Board chairman and the Trust's executive director believe that, of all the goals for the foreseeable future, becoming financially self-sustaining is the most challenging. A consensus among Board members is that the Trust will not become financially self-sustaining by the end of fiscal year 2015 as envisioned by the Preservation Act; a few within the Trust doubt that this goal can ever be achieved. In particular, as for other multiple-use land management agencies, a daunting corollary to the Trust's mission is how to balance managing the land to produce a sustained yield of revenue-generating resources with preserving and protecting those resources and other natural and cultural values of the preserve. Others external to the Trust, such as Los Amigos de Valles Caldera and Caldera Action, have expressed similar views about the Trust's ability to become financially self-sustaining. 
Nevertheless, the Trust is continuing to explore opportunities for becoming financially self-sustaining. As of the end of fiscal year 2009—nearly halfway through the 20-year public-private land management experiment and about 6 years before the authorization for Trust appropriations expires—the Trust had only begun to focus on the goal of becoming financially self-sustaining. A number of issues—such as its remaining life expectancy, activities capable of providing sufficient revenues, funds for needed key capital investments, and legal issues—present significant challenges to achievement of this goal. These challenges include the following:

Completing key steps to becoming financially self-sustaining in the time remaining before the end of fiscal year 2015, when the current authorization of appropriations expires. If the Trust is not well on its way toward becoming financially self-sustaining by the end of fiscal year 2015, the Trust may or may not have the funds to continue operating, regardless of how much or how little progress it has made on its various land management and recreation programs. Yet within the 6 years from the beginning of fiscal year 2010 to the end of fiscal year 2015, the Trust must develop a public use and access plan, including an environmental impact statement and an associated transportation plan; secure funding to implement these plans; begin and complete construction; and then begin operating the programs to generate revenues. All these activities could well take longer than 6 years.

Identifying, developing, or expanding revenue-generating activities that would enable the Trust to raise sufficient funds to become financially self-sustaining. To date, several anticipated sources of revenue have not materialized or have not materialized to the degree anticipated. For example, the vision of timber production as a major source of revenue disappeared when an inventory of the preserve's timber resources revealed that few to no trees of commercial value remained after clear-cutting in the mid-twentieth century. Both current and former Trust officials noted that many of the forested areas are more a liability than an asset to the Trust because they are covered with dense vegetation that could fuel large wildland fires. Recreation, too, failed to prosper as expected. The Trust had anticipated holding luxury elk hunts to provide a major source of future revenue and, in 2008, sought state legislation to allow these hunts. The proposal received public criticism, however, and the legislation failed. In addition, the Trust's several years of experimenting with various approaches to grazing has led to the realization that grazing will not make as much money as anticipated.

Obtaining funding for major capital investments to construct and preserve facilities and other infrastructure needed to generate revenues. The 2009 revenue enhancement study commissioned by the Trust estimated that somewhere between $21 million and $53 million would be needed to further develop the facilities and infrastructure to support greater public use of the preserve, such as additional parking lots and further road upgrades, a visitor center, an educational research center, and a visitor lodge. Yet neither the revenues the Trust has generated to date through any of its programs nor current appropriations are sufficient to make such investments.

Legal constraints. The Trust faces several legal constraints that may affect its ability to achieve financially self-sustaining operations, according to Trust officials. 
Provisions of the Preservation Act—specifically, that the Trust expires in 2020 and that it is prohibited from entering into leases lasting longer than 10 years—limit the Trust’s ability to attract concessionaires or other enterprises desiring to establish long-term businesses on the preserve that could generate revenue for the Trust. Another question facing the Trust, according to Trust officials, is what authority it has to borrow and lend money. Trust officials said that Agriculture’s General Counsel told them that the Preservation Act does not specifically address this question. The Trust recently learned it has no authority to borrow money from the Federal Financing Bank, whose purpose is to make loans to government corporations. Trust officials also raised concerns about the Trust’s authority to purchase property outside the preserve or to construct new buildings inside the preserve. In addition, the Trust has expressed concern about not having access to the federal “judgment fund”—a permanent indefinite appropriation available to federal agencies under certain circumstances to satisfy judgments against them—to cover liability incidents such as hunting accidents. According to a Senate committee report on a 2004 bill amending the Preservation Act, the Department of Justice opposed a provision of the bill that would have provided the Trust access to the judgment fund. The Trust is paying over $80,000 annually for liability insurance. Nine years have passed since the federal government purchased Valles Caldera, and 11 years remain before the Valles Caldera Trust could, under the Preservation Act, come under Forest Service jurisdiction if it fails to become financially self-sustaining. 
The ultimate success of the Valles Caldera land management experiment hinges on the Trust's ability to become a fully functioning, financially self-sustaining government corporation while simultaneously preserving and protecting the land's natural, cultural, and recreational values. We acknowledge that achieving such a mission is no easy task, and we recognize that the Trust continues to work toward achieving these goals. Nevertheless, the Trust has struggled for nearly a decade to establish the basic framework for effective management required of government corporations, it has not maintained the pace of progress it set for itself, and it faces a number of legal constraints. Thus, it is uncertain whether the Trust can overcome its management and legal challenges and, as many Board and management officials of the Trust have also noted, whether it can achieve financially self-sustaining status by the Preservation Act's 2015 deadline. We believe that our previous recommendations, if implemented, could substantially enhance the Trust's ability to make greater progress toward meeting the goals of the act, as well as to improve management oversight, accountability, and transparency under GCCA and GPRA. We therefore reiterate the need for the Trust to fully implement recommendations from our 2005 report, specifically, continue to develop—and systematically implement—the following elements of effective management: a formal strategic plan that includes measurable goals and objectives; a plan, including planned timelines, for becoming financially self-sustaining; and mechanisms for periodic monitoring and reporting of the Trust's performance to Congress and other stakeholders. 
To help further the Trust’s efforts toward becoming a financially self- sustaining government corporation, we recommend that the Trust’s Chairman of the Board and Executive Director work with the relevant congressional committees to seek legislative remedies, as appropriate, for the legal challenges confronting the Trust. We provided the Valles Caldera Board of Trustees with a draft of this report for review and comment. The Board generally agreed with our findings and conclusions but did not comment on our recommendation. In its written comments, the Trust said it found our assessment of its accomplishments to date accurate, although it provided additional details about infrastructure, forestry work, the livestock program, and science and education. In addition, the Board agreed with our finding that the Trust has failed to put in place an effective management program, saying “there is no excuse for these plans and controls to be lacking” and “top priority will be given to reaching prompt compliance with the law.” The Board also noted that we aptly described the current and future challenges the Trust is facing and stated that financial self-sustainment by 2015 is not a possibility under the current provisions of the Preservation Act. Without agreeing or disagreeing with our recommendation that the Trust work with Congress to seek legislative remedies for its legal challenges, the Trust stated that changes to the law are needed. We are sending copies of this report to the Board Chairman, Valles Caldera Trust and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
In addition to the person named above, David P. Bixler, Assistant Director; Lisa Brownson; Ellen W. Chu; Elizabeth Curda; Richard P. Johnson; Mehrzad Nadji; James M. Rebbe; Dawn Shorey; Jena Sinkfield; and Maria Vargas made key contributions to this report.

In creating the Valles Caldera National Preserve from a unique parcel of land in north-central New Mexico, and by creating the Valles Caldera Trust as a wholly owned government corporation to manage the preserve, the Valles Caldera Preservation Act of 2000 established a 20-year public-private experiment to operate the preserve without continued federal funding. The Trust is charged with achieving a number of goals, including becoming financially self-sustaining by the end of fiscal year 2015. This report, GAO's second and last mandated by the Preservation Act, examines (1) the Trust's progress since 2000; (2) the extent to which the Trust has fulfilled certain of its obligations as a government corporation; and (3) the challenges the Trust faces to achieve the Preservation Act's goals. GAO analyzed documents, financial records, and other Trust information and interviewed current and former members of the Trust's Board and staff, as well as representatives of local interest groups and stakeholders. The Trust has taken steps to establish and implement a number of programs and activities to achieve the goals of the Preservation Act. It has rehabilitated roads, buildings, fences, and other infrastructure; created a science program; experimented with a variety of grazing options; taken steps to manage its forests; expanded recreational opportunities; and taken its first steps toward becoming financially self-sustaining. Nevertheless, it is at least 5 years behind the schedule it set for itself in 2004. 
According to Trust officials, a number of factors—including high turnover among Board members and key staff and cultural and natural resources and infrastructure that were not as healthy or robust as originally believed—have delayed its progress. Through fiscal year 2009, the Trust had yet to develop and put in place several key elements of an effective management control program for a government corporation. Specifically, the Trust lacked a strategic plan and annual performance plans, and it had not systematically monitored or reported on its progress—elements called for by the Government Performance and Results Act and recommended by GAO in its first report in 2005. The Trust's financial management has also been weak. Consequently, it has been difficult for Congress and the public to understand the Trust's goals and objectives, annual plans and performance, or progress. According to current Trust officials, becoming financially self-sustaining, particularly by the end of fiscal year 2015 when federal appropriations are due to expire, is the Trust's biggest challenge. Most of the Trust's other challenges follow from this one, including identifying, developing, or expanding revenue-generating activities that would enable the Trust to raise sufficient funds; obtaining funds for major capital investments; and addressing a number of legal constraints—such as its authority to enter into long-term leases or acquire property—which potentially limit its ability to attract long-term businesses that could generate revenues. Nevertheless, the Trust is continuing to explore opportunities for becoming financially self-sustaining. 
ESRD occurs when an individual’s kidneys have regressed to less than 10 percent of normal baseline function. Without functioning kidneys, excess wastes and fluids in the body rise to dangerous levels, and certain hormones are no longer produced. Individuals with ESRD must undergo either regular dialysis treatments or receive kidney transplants to survive. As of the end of 2004, of the approximately 480,000 adults with ESRD (those at least 18 years old), just over one-fourth (about 130,000) had functioning kidney transplants and two-thirds (about 330,000) were receiving dialysis treatments. In addition, of the almost 5,700 pediatric individuals with ESRD (those younger than 18 years old), approximately two-thirds (about 3,800) had functioning transplants and less than one- third (about 1,700) were receiving dialysis treatments. A kidney transplant is the preferred method of treatment for individuals with ESRD because it increases an individual’s quality of life and decreases long-term mortality rates compared with lifetime dialysis treatments. Studies have reported that pediatric ESRD patients tend to perform better developmentally with transplants than on dialysis. For example, one study reported improvement in neurological development in infants aged 6-11 months following transplantation. Another study showed that transplantation increased the rate at which pediatric ESRD patients improved on measures of intelligence and mathematical skills. Medicare covers over 80 percent of all individuals with ESRD. For these individuals, Medicare covers the cost of lifetime dialysis treatments, or for individuals who receive kidney transplants, the cost of the transplants and 3 years of follow-up care—including immunosuppressive medications needed to sustain the transplants. 
Medicare also covers hospital inpatient services and outpatient services, such as physician visits and laboratory tests, as well as medical evaluations provided to living donors and recipients in anticipation of transplants. In addition to Medicare, individuals with ESRD may be covered by other public or private health insurance, such as Medicaid or an employer-sponsored health plan. For individuals who are eligible for Medicare on the basis of ESRD, Medicare is the secondary payer if the individuals have employer-sponsored group health insurance coverage during the first 30 months of Medicare coverage. After the first 30 months, Medicare becomes the primary payer for these beneficiaries until they are no longer entitled to Medicare. For an individual who is eligible for Medicare solely because of ESRD and who has a kidney transplant, Medicare coverage ends on the last day of the 36th month after the individual receives the transplant unless the individual is entitled to Medicare other than because of ESRD. However, after 36 months, a transplant recipient can become eligible for Medicare again after a transplant failure and subsequently receive a retransplant or dialysis. Following termination of Medicare coverage, individuals who are unable to pay for immunosuppressive medications and other transplant-related follow-up care must rely on other public or private health insurance or charity care. Pediatric recipients have several potential sources of coverage when their Medicare coverage ends: private health insurance—generally, a parent's employer-sponsored coverage; Medicaid; the State Children's Health Insurance Program (SCHIP); and charity care. However, once individuals turn 19, they may lose access to their parents' private insurance coverage as well as coverage under SCHIP and Medicaid. 
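The coverage-termination rule described above (coverage ends on the last day of the 36th month after the transplant) is simple date arithmetic. A minimal sketch, assuming the 36 months are counted beginning with the month following the month of transplant; the function name is illustrative, not from any Medicare system:

```python
from datetime import date
import calendar

def medicare_end_date(transplant: date) -> date:
    """Last day of the 36th month after the transplant, for ESRD-only entitlement.

    Assumes month counting starts with the month following the transplant month.
    """
    # Total months since year 0, then add the 36-month post-transplant period.
    months = transplant.year * 12 + (transplant.month - 1) + 36
    year, month_index = divmod(months, 12)
    month = month_index + 1
    # calendar.monthrange returns (weekday_of_first_day, days_in_month).
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, last_day)

print(medicare_end_date(date(2004, 5, 17)))  # 2007-05-31
```

Under this reading, a transplant in May 2004 yields coverage through May 31, 2007, which is consistent with the "3 years of follow-up care" described above.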
Individuals who receive kidney transplants require immunosuppressive therapy—usually a combination of at least two different immunosuppressive medications—as well as regular laboratory tests to monitor and maintain their transplants. Although the frequency of laboratory tests decreases over time, the need for immunosuppressive medications continues for the life of the transplant. Recipients who do not take their immunosuppressive medications according to the prescribed regimens are more likely to have their transplanted kidneys fail. Studies have shown that medication noncompliance causes 13 to 35 percent of transplants to fail; one of the studies also indicated that noncompliant recipients die at rates fourfold greater than those of compliant recipients. One recent study showed that about 23 percent of recipients with failed transplants who returned to dialysis died within 2 years. Several studies have reported that there are a number of reasons why some transplant recipients do not comply with their medication regimens. More specifically, one study reported that adverse side effects of the medications, difficulty following complex treatment regimens that involve several drugs and varying schedules of dosing, and an inability to pay for medications due to a lack of health insurance coverage, among other reasons, can contribute to medication noncompliance. Other studies have reported that medication noncompliance can be unpredictable, often without an identifiable reason. Studies have also shown that adolescent recipients are especially prone to medication noncompliance or partial compliance. For example, one study showed that for individuals aged 12 to 19 years, dissatisfaction with body image and the physical side effects of medications have been linked to poor compliance with prescribed transplant medication regimens. 
Another study found that 57 percent of participating recipients under 20 years old were not compliant with their medication regimens, compared with only 15 percent of participants over 40 years old. Pediatric, transitional, and adult kidney transplant recipients were similar with respect to sex, race, and income level. As of December 31, 2004, all three age groups were predominantly male and white and lived in counties with a median annual household income of $25,000 to less than $50,000. However, the three groups differed in terms of their types of health insurance coverage, with a smaller percentage of pediatric and transitional recipients covered by Medicare compared to their adult counterparts. Based on our analyses of USRDS and ARF data, we found that pediatric, transitional, and adult recipients were similar with respect to sex, race, and income level, as of December 31, 2004. All three age groups were predominantly male, and the proportion of males in each age group was higher than that found in the general U.S. population—49 percent (see table 1). Approximately 59 percent of individuals with ESRD are male. All three age groups were also predominantly white, and the percentage distribution of other races among the three groups was similar (see table 2). Although a higher percentage of transitional recipients were white and a lower percentage were black compared with pediatric and adult recipients, the differences were not substantial. In addition, the distribution of racial groups among pediatric, transitional, and adult transplant recipients was similar to that found in the general U.S. population. Pediatric, transitional, and adult transplant recipients were similar in terms of their household income level (see table 3). Seventy-five percent of recipients in each age group resided in counties with a median annual household income of $25,000 to less than $50,000, which is almost three times the percentage for the general U.S. population (27 percent). 
When compared to the general U.S. population, a very small percentage of recipients in each of the three age groups resided in counties with the lowest and highest median annual household incomes—less than $25,000 or $75,000 or more, respectively. About 27 percent of the U.S. population resided in counties with a median annual household income of less than $25,000, and about 28 percent resided in counties with a median annual household income of $75,000 or more.

While pediatric, transitional, and adult transplant recipients were similar in terms of sex, race, and income, they were less similar in terms of their health insurance coverage. As of December 31, 2004, while more than two-thirds of adult recipients had coverage under Medicare, just over one-third of pediatric recipients and slightly less than half of transitional recipients were covered under Medicare (see table 4). Although each group had about the same percentage of recipients with both Medicare and Medicaid coverage, almost three times as many adult recipients had Medicare but not Medicaid coverage compared with pediatric recipients, and almost twice as many adult recipients had Medicare but not Medicaid coverage compared with transitional recipients. Although still smaller than the percentage of adult recipients, based on our analysis of USRDS data, a larger percentage of pediatric and transitional recipients had Medicare coverage at the time of their transplants—67 percent and 81 percent, respectively, compared to 87 percent. It is not known why these differences in Medicare coverage existed, given that most individuals who have ESRD are eligible for Medicare coverage.

Our analysis of data from the USRDS shows that after the first year posttransplant, a higher percentage of transitional recipients experienced a transplant failure compared with their pediatric and adult counterparts.
In addition, the largest increase in transplant failure among the three age groups occurred in the first 3 years posttransplant—before termination of Medicare coverage—and the increase was substantially higher for transitional recipients than for pediatric and adult recipients. After experiencing a transplant failure, a higher percentage of transitional recipients received dialysis, a higher percentage of pediatric recipients received retransplants after the first year posttransplant, and a higher percentage of adult recipients died.

Based on our analysis of USRDS data, we found that after the first year posttransplant, a higher percentage of transitional recipients experienced a transplant failure when compared with their pediatric and adult counterparts (see fig. 1). For example, we found that by 5 years posttransplant, the percentage of transitional recipients who experienced a transplant failure (33 percent) was about twice as high as the percentage of pediatric recipients (16 percent) and somewhat higher than that of adult recipients (28 percent). According to several representatives of pediatric kidney transplant centers that we interviewed, adolescent kidney transplant recipients—who generally populate our transitional age group—are less likely than other age groups to comply with their medication regimens, which, among other things, can lead to transplant failure.

The largest increase in the percentage of transitional recipients who experienced a transplant failure occurred in the first 3 years posttransplant, and this increase was substantially higher than the increase for pediatric and adult recipients. Specifically, the percentage of failures for transitional recipients increased by 133 percent between 1 and 3 years posttransplant, while the percentage increases for pediatric and adult recipients were 83 and 100 percent, respectively.
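The percentage increases cited here follow the standard relative-change calculation. A minimal sketch of the arithmetic, using hypothetical 1-year and 3-year failure rates (the report gives the increases, not the underlying base rates; the values below are illustrative only, chosen to yield the 133 percent increase reported for transitional recipients):

```python
def pct_increase(earlier: float, later: float) -> float:
    """Relative change from an earlier failure rate to a later one, in percent."""
    return (later - earlier) / earlier * 100

# Hypothetical failure rates (percent of recipients) at 1 and 3 years
# posttransplant. These base rates are illustrative, not from the report.
rate_1yr, rate_3yr = 12.0, 28.0
print(round(pct_increase(rate_1yr, rate_3yr)))  # prints 133
```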
After 3 years posttransplant, all three age groups showed a smaller increase in transplant failures when compared with the period between 1 and 3 years posttransplant. Between 3 and 5 years posttransplant, the percentage increase in transplant failures was 45 percent for pediatric, 57 percent for transitional, and 56 percent for adult recipients. During the 5 to 7 years posttransplant period, the percentage increase in failures also remained below the increases seen in the first 3 years—63 percent, 33 percent, and 43 percent for pediatric, transitional, and adult recipients, respectively.

The absence of a large percentage increase in transplant failures among pediatric and transitional recipients beyond 3 years posttransplant, when Medicare coverage terminates for many recipients, may be explained by the practices of transplant centers. Representatives from pediatric kidney transplant centers with whom we spoke stated that once Medicare coverage ends, they either help recipients to acquire other health insurance coverage or provide them with free or reduced-cost immunosuppressive medications if they lack health insurance coverage or otherwise cannot afford the medications. They also stated that the percentage of recipients who experience transplant failures because of an inability to pay for their medications after Medicare coverage ends (3 years posttransplant) is low.

Based on our analysis of USRDS data, we found that after experiencing transplant failures, a higher percentage of transitional recipients received dialysis, a higher percentage of pediatric recipients received retransplants after the first year posttransplant, and a higher percentage of adult recipients died (see figs. 2, 3, and 4). By 7 years posttransplant, the percentage of transitional recipients who received dialysis after experiencing a transplant failure was nearly 30 percent higher than that of pediatric recipients and nearly 60 percent higher than that of adult recipients.
In addition, at 7 years posttransplant, the percentage of pediatric recipients who received retransplants after experiencing a transplant failure was over 25 percent higher than that of transitional recipients and more than twice the percentage of adults who received retransplants. The percentage of adults who died following a transplant failure was about twice as high as that of pediatric recipients and about three times as high as that of transitional recipients.

Based on our analysis of USRDS data, we found that recipients who had both Medicare and Medicaid coverage experienced a higher percentage of transplant failures compared with those who had Medicare but not Medicaid coverage or were in the Other category (see fig. 5). By 7 years posttransplant, the percentage of recipients covered by both Medicare and Medicaid who experienced a transplant failure (24 percent) was slightly higher than the percentage for recipients covered by Medicare but not Medicaid and was more than three times as high as the percentage of recipients in the Other category.

After experiencing a transplant failure, a higher percentage of recipients who had both Medicare and Medicaid coverage received dialysis when compared with recipients who had Medicare but not Medicaid coverage or were in the Other category (see fig. 6). For example, by 7 years posttransplant, the percentage of recipients covered by both Medicare and Medicaid who received dialysis after experiencing a transplant failure was about 70 percent higher than that of recipients in the Other category. After the first year posttransplant, the percentage of recipients covered by both Medicare and Medicaid who received dialysis after a transplant failure was substantially higher than the percentage for recipients in the Other category.

Based on our analysis of USRDS data, we found that Medicare beneficiaries with functioning transplants cost substantially less per year to treat than those beneficiaries who experienced transplant failures.
Specifically, we found that overall, the median annual Medicare cost for a beneficiary with a functioning transplant was $8,550, compared with a median annual Medicare cost of $50,938 for a beneficiary after a transplant failure—a difference of about 500 percent. For pediatric beneficiaries, the percentage difference was even higher—the median annual Medicare cost after a transplant failure was 750 percent higher than for a functioning transplant (see table 5). The differences for transitional and adult beneficiaries were 550 percent and 500 percent, respectively.

The substantial cost of treating transplant recipients who experience transplant failures underscores the importance of maintaining functioning kidney transplants. While there are many reasons that could account for transplant failures during the first 3 years posttransplant—including medication noncompliance—the large percentage increase in transplant failures from 1 year to 3 years posttransplant for transitional recipients cannot be attributed to an inability to access immunosuppressive medications due to a lack of Medicare coverage.

In commenting on a draft of this report, CMS stated that it appreciated our interest in kidney transplant patients and in the cost of care provided to those receiving transplants or dialysis. CMS stated that it was concerned about the quality of care and the outcomes experienced by Medicare beneficiaries, including the higher rate of transplant failure among transitional patients. CMS also stated that educating beneficiaries with kidney failure is critical to improving beneficiaries’ ability to actively participate in and make informed decisions about their care. As a result, the agency engages in numerous educational and outreach efforts targeted to beneficiaries, providers, and national organizations that represent renal patients. CMS’s comments are reprinted in appendix I.
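The roughly 500 percent overall difference can be reproduced directly from the two reported medians. A minimal check of the arithmetic:

```python
# Median annual Medicare costs reported for all beneficiaries.
functioning = 8_550   # functioning transplant
failed = 50_938       # after a transplant failure

# Relative difference, in percent.
diff_pct = (failed - functioning) / functioning * 100
print(round(diff_pct))  # prints 496, i.e., roughly the 500 percent cited
```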
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS and to other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Nancy Edwards, Assistant Director; Kelly DeMots; Krister Friday; Joanna Hiatt; Xiaoyi Huang; Martha Kelly; and Ba Lin made key contributions to this report.

For individuals with end-stage renal disease (ESRD), the permanent loss of kidney function, Medicare covers kidney transplants and 36 months of follow-up care. Kidney transplant recipients must take costly medications to avoid transplant failure. Unless a transplant recipient is eligible for Medicare other than on the basis of ESRD, Medicare coverage, including that for medications, ends 36 months posttransplant. Pediatric transplant recipients, including those who were under 18 when transplanted but are now adults (transitional recipients), may be more likely than their adult counterparts to lose access to medications once Medicare coverage ends because they may lack access to other health insurance coverage.
GAO was asked to examine (1) the percentage of transplant failures and subsequent outcomes—retransplant, dialysis, or death—among pediatric, transitional, and adult kidney transplant recipients and (2) how the cost to Medicare for a beneficiary with a functioning transplant compares with the cost for a beneficiary with a transplant failure. To do this, GAO analyzed 1997 through 2004 data from the United States Renal Data System (USRDS) and interviewed officials from pediatric transplant centers. The Centers for Medicare & Medicaid Services—the agency that administers Medicare—commented that it is concerned about beneficiary outcomes and has an education program to help them.

The percentage of kidney transplant recipients who experience a transplant failure varies by age group, as do the percentages who experience dialysis, retransplant, or death. After the first year posttransplant, a higher percentage of transitional recipients (those younger than 18 at the time of their transplants and at least 18 as of December 31, 2004) experienced a transplant failure and subsequently received dialysis compared with their pediatric (those younger than 18 as of December 31, 2004) and adult (those at least 18 at the time of their transplants) counterparts. By 5 years posttransplant, the percentage of transitional recipients who experienced a transplant failure (33 percent) was about twice as high as pediatric recipients (16 percent) and somewhat higher than adult recipients (28 percent). The largest increase in transplant failures for each age group occurred in the first 3 years posttransplant—before the termination of Medicare coverage on the basis of ESRD—and the increase was substantially higher for transitional recipients (133 percent) than for pediatric (83 percent) and adult (100 percent) recipients. Medicare beneficiaries with functioning transplants cost substantially less per year to treat than those who experienced a transplant failure.
GAO found that the median annual Medicare cost for a beneficiary whose transplant failed ($50,938) was 500 percent more than the median annual Medicare cost for a beneficiary with a functioning transplant ($8,550). This percentage difference was consistent across transplant recipient age groups. The substantial cost of treating transplant recipients who experience a transplant failure underscores the importance of maintaining functioning kidney transplants. While there are many reasons that could account for transplant failures, the large percentage increase in transplant failures from 1 year to 3 years posttransplant for transitional recipients cannot be attributed to an inability to access medications due to a lack of Medicare coverage.
DHS components operate holding facilities at various locations nationwide. Border Patrol has approximately 203 holding facilities that are located at stations, checkpoints, and forward operating bases. OFO has approximately 129 holding facilities located at land POEs. ICE has 137 holding facilities that are located at ERO field offices and sub-offices.

The reasons why individuals are taken into short-term custody vary by component. Border Patrol apprehends aliens along the land borders and between POEs due to suspected criminal activity or violations of U.S. immigration law, such as illegal entry into the United States or presence in the country without lawful immigration status, and transports them to Border Patrol holding facilities, where they undergo processing before being removed, released, or transferred to ICE for long-term detention, among other scenarios. OFO inspects all persons arriving in the United States to determine their citizenship or nationality, immigration status, and admissibility. This inspection can lead to persons being taken into temporary custody at POE holding facilities while awaiting repatriation to a foreign country; transfer or referral to another agency, such as ICE; or completion of inspection and associated processing. ICE takes aliens into custody upon their release from jails and prisons through the Criminal Alien Program and other efforts or apprehends aliens for various reasons, including through the National Fugitive Operations Program, and transports them to ICE holding facilities.

During fiscal years 2014 and 2015, Border Patrol apprehended 823,768 aliens and held them temporarily in holding facilities. Approximately 98 percent of aliens apprehended by Border Patrol during those fiscal years were apprehended along the southwest border of the United States with Mexico. Of the 810,704 aliens whom Border Patrol apprehended along the southwest border, about 49 percent were apprehended in the Rio Grande Valley sector in Texas.
Figure 1 shows the locations of aliens apprehended by Border Patrol along the southwest border during fiscal years 2014 through 2015. DHS officials at holding facilities conduct a number of activities in managing the short-term custody of aliens, including (1) processing, (2) care, and (3) monitoring.

Processing. During processing, holding facility officials gather and record information from aliens. Specifically, holding facility officials collect and record information on aliens in agency databases; take fingerprints, if applicable; conduct records checks; and collect and maintain personal belongings. Holding facility personnel typically conduct these processing activities in a general area outside of the actual holding cells.

Care. Once processing is complete, holding facility officials typically place individuals in a secure holding cell or room and provide them with various types of care, including meals and water, restrooms, hygienic supplies, and medical care. Holding facilities maintain written or electronic custody logs to document care provided to individuals. Telephone access varies by holding facility; some facilities include a telephone in the holding cells, while other facilities maintain a telephone only in the processing area. Holding facilities and the conditions of confinement may vary by component, among other factors. For example, while all three components maintain secure cells, OFO sometimes places individuals in general waiting areas at POEs based on a risk assessment of individuals and facility space limitations. In addition, holding facility officials typically segment the population based on age, gender, and other characteristics, such as risk. Figure 2 depicts a typical cell at DHS holding facilities.

Monitoring. Holding facility officials monitor holding facilities primarily through video cameras and physical checks to help ensure that cells are kept clear of contraband and other potentially dangerous materials.
For example, holding facility officials might conduct physical checks at various intervals throughout the day, which are designed for a number of purposes, including overseeing individuals in short-term custody, providing a deterrent for misconduct, and affording individuals the opportunity to communicate potential issues regarding their health or safety. Holding facility personnel may conduct more frequent monitoring activities for high risk individuals who might show signs of distress, hostility, or other unusual behavior.

CBP and ICE have issued standards for the short-term custody of aliens that apply to their holding facilities nationwide. For example, CBP has established minimum standards that apply to both Border Patrol and OFO holding facilities, and each component also maintains a holding facility policy. In addition, ICE has a policy that governs the operation of ERO holding facilities. The Border Patrol, OFO, and ICE standards contain common requirements for holding facilities, including:

Limiting aliens’ total time in custody: Process and then transfer, remove, or release aliens as soon as is appropriate and operationally feasible.

Conducting periodic physical checks of holding cells: Monitor holding cells directly and regularly when individuals are in custody.

Maintaining a detention log: Collect and preserve in written or electronic form general information from all individuals.

Providing various accommodations to individuals: Offer meals and snacks at specified intervals, as well as access to drinking water and restrooms at all times.

Safeguarding individuals’ personal property while in custody: Collect, inventory, and safeguard funds, valuables, baggage, and other personal property.

Within these common elements, however, the Border Patrol, OFO, and ICE standards for the short-term custody of aliens vary.
For example, the Border Patrol and ICE holding facility policies state that, whenever possible, an individual should not be held for more than 12 hours, while the OFO holding facility policy states that the detention of a person in a holding facility at POEs shall be for the “least amount of time necessary” to complete processing but generally less than 24 hours. The policies also vary with respect to conducting physical checks of cells. The OFO and ICE holding facility policies require that personnel conduct physical checks of individuals placed inside of holding cells at least every 15 minutes, while the Border Patrol policy states that personnel must physically check holding cells on a regular basis for all individuals and every 15 minutes for individuals deemed to be high risk (e.g., an individual exhibiting unusual behavior such as signs of distress).

Border Patrol, OFO, and ICE holding facilities may also use local standard operating procedures to augment agency standards for holding facilities. For example, an ICE holding facility that we visited maintained four local standard operating procedures on areas such as controlling and safeguarding personal property. Similarly, we learned during our site visits that other Border Patrol, OFO, and ICE holding facilities have local standard operating procedures.

Agencies have also established processes for monitoring holding facilities for compliance with standards. Within CBP, the Management Inspections Division and designated officials from Border Patrol and OFO headquarters manage the annual Self-Inspection Program (SIP), which is designed to assess internal controls in all CBP operations, including holding facilities. The SIP varies from year to year and typically incorporates elements of holding facility policies. For example, the 2015 Border Patrol SIP covered the extent to which holding facilities maintained detention logs on aliens, including meal service, medical care, and other pertinent information.
Border Patrol and OFO holding facilities reported in the SIP results for 2015 that they were generally compliant with holding facility standards. Besides the SIP, Border Patrol and OFO officials told us that holding facilities monitor compliance with holding facility standards through daily activities. We learned from discussions with agency officials and observations during our site visits that these daily activities include maintaining continuous surveillance of individuals through video cameras, conducting periodic physical checks of individuals, and having shifts overlap to allow independent personnel to review and verify actions taken to care for individuals. Moreover, regional Border Patrol sectors or OFO field offices may also undertake monitoring activities. For example, a senior OFO official stated that his field office conducts periodic spot-checks of holding facilities in its jurisdiction to verify compliance with holding facility standards.

Within ICE, holding facilities monitor compliance through daily activities. According to ICE officials, some of these daily activities include maintaining continuous surveillance of individuals through video cameras, as well as conducting periodic and end-of-day physical checks of detainees. During our site visits to ICE holding facilities, we also observed video cameras and written logbooks that notated the date and time ICE personnel inspected individual holding cells. In addition, ICE headquarters is currently developing a holding facility self-assessment tool intended to capture the level of compliance in ICE holding facilities nationwide with ICE’s holding facility policy, such as standards related to providing meals and water and managing personal property. ICE headquarters provided us with the draft self-assessment tool and a project plan for its completion. ICE is currently in the final stages of reviewing the tool and expects to begin using it by June 2016.
CBP and ICE do not have a process or processes in place to fully assess their time in custody data, including the quality of the data, the extent to which holding facilities are adhering to agency standards for time in custody, and the factors affecting the length of custody. CBP and ICE maintain data systems that record information on various elements for the short-term custody of individuals, including time in custody. Time in custody represents the time between when an individual is “booked in” and “booked out” of a holding facility. CBP and ICE holding facility policies include standards for the number of hours that an individual should be held in short-term custody. According to agency officials, these standards are in place because holding facilities are not designed to hold individuals for long periods of time and thus generally do not have features such as beds and showers. However, based on our review of time in custody data, Border Patrol and ICE do not have a process to completely assess time in custody data, and OFO has only recently initiated efforts to collect such data.

Border Patrol has taken some steps to monitor time in custody data. For example, we learned during our site visits that a Border Patrol sector in Texas generates a detention dashboard report that tracks aliens’ total time in custody by station and a Border Patrol sector in Florida disseminates a regular report to stations with time in custody information on individuals. In addition, Border Patrol incorporated time in custody as an inspection item for individual stations in the 2015 SIP. Further, Border Patrol officials responsible for managing the agency’s data told us that they address individual irregularities in time in custody data as they discover them, such as incomplete or duplicative information.
However, these Border Patrol headquarters officials told us that the agency is currently only tracking and producing regular reports for Border Patrol leadership about the time in custody for unaccompanied alien children and families but not for the rest of the detainee population. Border Patrol headquarters officials responsible for overseeing holding facilities told us that Border Patrol has not directed an entity at the headquarters level to assess time in custody data for all types of aliens in Border Patrol’s custody.

ICE produces various reports that include time in custody data for both detention and holding facilities; however, the information in these reports is limited. Specifically, the reports do not include total hours in custody by alien, despite ICE’s holding facility policy specifying that individuals should not be held for longer than 12 hours absent exceptional circumstances. ICE headquarters officials indicated that ICE does not use time in custody data to monitor holding facilities, and ICE officials in the field stated that the agency is more focused on monitoring longer-term detention facilities.

OFO maintains fields in its automated database to track aliens’ time in custody in holding facilities, although, according to OFO officials, most land POEs have not been consistently recording that information, with the exception of seven land POEs in California and Texas. We previously reported in July 2015 that OFO did not yet have a policy requiring officers to use an automated database to record care provided to unaccompanied alien children, including book-out dates and times, and most POEs did not use an automated database to track custody care actions. In November 2015, OFO began piloting the mandatory automated collection of time in custody data for holding facilities at selected POEs. According to OFO headquarters officials, OFO expects to expand the pilot program to all POEs in calendar year 2016.
While Border Patrol and ICE produce some reports with time in custody data, more fully monitoring time in custody data could allow the agencies to identify potential trends and differences across all field locations. Specifically, Border Patrol and ICE could better understand (1) the quality of time in custody data, such as determining the sources of irregularities and uncovering missing or inaccurate data, and (2) the extent to which holding facilities are adhering to agency standards for time in custody and the factors affecting the overall length of custody.

Determining quality of time in custody data. Our discussions with Border Patrol officials and analysis of Border Patrol’s time in custody data for fiscal years 2014 to 2015 raised questions about the quality of the data. Specifically, we could not determine the reliability of Border Patrol’s time in custody data for two reasons. First, Border Patrol expanded use of its e3 system to capture additional custody care information, including time in custody, and we identified challenges associated with entry of these data by Border Patrol personnel. According to Border Patrol officials, since fiscal year 2014 was the first full year that the agency collected electronic time in custody data across all facilities nationwide, agents in the field are still learning how to accurately and consistently record time in custody information. Second, we analyzed Border Patrol’s fiscal years 2014 and 2015 time in custody data from the e3 system and found irregularities, such as individuals with multiple months or negative hours in custody. Officials from Border Patrol headquarters could not fully explain the irregularities we found. Specifically, they told us that lengthy times in custody might result from officials in the field not temporarily booking out aliens for a hospital visit or court appearance or failing to record book-out dates and times in a timely manner.
We also identified issues involving the recording of book-out information during our site visits. For example, Border Patrol holding facility officials in three locations stated that agents forgetting to input book-out times into the e3 system could explain some lengthy times in custody. In particular, a Patrol Agent in Charge at one Border Patrol holding facility stated that fiscal year 2014 data for his facility were likely inaccurate because there may have been delays, in some cases spanning weeks or months, in agents recording aliens’ book-out times. While the data indicated that the facility held over 100 aliens for more than 24 hours, the official stated that it is extremely rare to hold aliens for 24 hours since the facility’s operating hours are limited. Officials from Border Patrol headquarters said that they would not know if lengthy times in custody were either accurate or inaccurate due to data issues unless Border Patrol personnel conducted a more comprehensive analysis. For these reasons, we were unable to determine the reliability of Border Patrol’s time in custody data.

Further, we could not assess the reliability of data on the number of hours individuals were in ICE holding facilities because ICE includes days, but not hours, in the year-end reports it produces with time in custody data. ICE provided a report for us with the time in custody by individual; however, this information was limited to the number of days in custody, and for many individuals, the report included a “zero” for the overall duration. Based on findings from our audit work, an ICE data management official told us that ICE would include “book-in time” and “book-out time” in fiscal year 2016 reports to allow the reporting of hours in custody by individual. This is a positive step, which should help strengthen ICE’s monitoring of time in custody data. However, this action does not address other reliability concerns we identified with ICE’s time in custody data.
Specifically, our analysis of ICE’s data showed that one ICE holding facility was not electronically recording time in custody data in ENFORCE and another ICE holding facility was recording aliens in ENFORCE who were never in ICE’s custody; rather, the aliens were in custody at a Border Patrol holding facility where ICE ERO contributes resources. In response to these issues, ICE officials stated that, in 2016, these locations plan to modify standard operating procedures to strengthen the completeness or accuracy of their data in ENFORCE. While these are positive steps, without a process to fully monitor time in custody data, ICE is not positioned to address data reliability issues in a systematic manner.

Determining level of compliance with agency guidelines and factors affecting time in custody. While Border Patrol and ICE maintain specific guidelines regarding time in custody for individuals in short-term holding facilities, these agencies could better understand the level of compliance with the guidelines and the factors impacting time in custody. Agency officials expressed concerns about individuals’ time in custody at holding facilities. For example, Border Patrol officials in nine holding facilities we visited in California, Florida, and Texas told us that Border Patrol’s 12-hour guideline for time in custody is sometimes challenging to meet. They stated that a variety of factors could extend time in custody for individuals at holding facilities. First, Border Patrol sometimes has to process a large group of individuals simultaneously, such as when Border Patrol agents encounter a possible smuggling operation. Second, Border Patrol holding facilities may experience delays in transferring individuals to ICE custody because ICE may lack detention capacity and because ICE offices do not operate 24 hours a day, 7 days a week, to accommodate Border Patrol transfer requests.
Third, Border Patrol may need to seek treatment for individuals with medical issues (e.g., dehydration, sprained ankle) prior to transferring them to ICE. An analysis of time in custody data would help Border Patrol understand the extent to which these factors might be impacting the agency’s level of compliance with guidelines or whether data issues are skewing the numbers. For example, although Border Patrol officials from 10 holding facilities we visited stated that time in custody rarely exceeds 72 hours, we noted that approximately 16 percent of cases with complete data in fiscal years 2014 and 2015 exceeded this threshold. Border Patrol and ICE provided various reasons why they are not more fully assessing time in custody data. For example, Border Patrol headquarters officials told us that they prioritize assessing time in custody data for unaccompanied alien children and families due to the legal requirements related to those populations, but that they generally rely on the field to monitor time in custody data for the rest of the population in short-term holding facilities. An ICE data management official told us that the agency has the ability to report on hours in custody for aliens in holding facilities but has not done so in the past because ICE headquarters or external stakeholders have not been interested in these data. Additionally, an ICE headquarters official responsible for overseeing holding facilities stated that ICE does not currently use time in custody data to monitor holding facilities because the duration of custody is short and ICE field offices understand how to appropriately monitor time in custody. However, by not fully monitoring time in custody, agency officials at the headquarters level do not have visibility into data across holding facilities. 
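The kind of analysis described above, computing hours in custody from book-in and book-out times, flagging records with missing book-outs, and measuring how many complete cases exceed the 72-hour threshold, could be sketched as follows. The field names and records are hypothetical assumptions for illustration, not the actual e3 or ENFORCE schema.

```python
from datetime import datetime

# Hypothetical custody records; the missing book-out time models the
# recording delays described above. Not actual e3/ENFORCE data.
records = [
    {"id": "A1", "book_in": "2014-07-01 08:00", "book_out": "2014-07-01 14:30"},
    {"id": "A2", "book_in": "2014-07-02 09:00", "book_out": "2014-07-06 10:00"},
    {"id": "A3", "book_in": "2014-07-03 22:00", "book_out": None},  # never booked out
]

FMT = "%Y-%m-%d %H:%M"

def hours_in_custody(record):
    """Return hours in custody, or None when book-out was never recorded."""
    if record["book_out"] is None:
        return None
    start = datetime.strptime(record["book_in"], FMT)
    end = datetime.strptime(record["book_out"], FMT)
    return (end - start).total_seconds() / 3600

# Flag data-quality issues and cases exceeding the 72-hour threshold.
missing_book_out = [r["id"] for r in records if r["book_out"] is None]
complete = {r["id"]: hours_in_custody(r) for r in records if r["book_out"] is not None}
over_72 = [rid for rid, hrs in complete.items() if hrs > 72]
share_over_72 = len(over_72) / len(complete)
```

Separating the missing-book-out cases from the complete ones is the point of the exercise: it distinguishes genuine lengthy custody from records where an agent simply never entered a book-out time.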
Standards for Internal Control in the Federal Government recommend that entities process obtained data into quality information to support the internal control system and to achieve its objectives. The Standards also recommend that management establish and operate monitoring activities to monitor the internal control system and evaluate the results. By developing and implementing a process to assess time in custody data, Border Patrol and ICE would have reasonable assurance about the quality of the data and the level of compliance with their own standards for time in custody. Border Patrol and ICE could also better understand the factors affecting time in custody in holding facilities. For example, Border Patrol would be better able to determine more accurately the extent to which agents at holding facilities may not be entering temporary or permanent book-out information for individuals and thus inadvertently inflating time in custody data. Additionally, Border Patrol would be better positioned to assess the impact that operational considerations, such as challenges in processing a large group of individuals simultaneously or coordinating transfers with ICE, have on time in custody at holding facilities nationwide. Similarly, ICE would be better positioned to assess the actual hours in custody for individuals at its holding facilities to ensure that they are meeting the guidelines in its holding facility policy.

DHS and its components have multiple mechanisms at the holding facility and headquarters levels to obtain and address individuals' complaints regarding CBP and ICE holding facilities or personnel.
The types of complaints submitted through these mechanisms could relate to such issues as: (1) conditions of confinement, including the temperature of the hold rooms, the amount of noise or light in the facility, or the quality of the food; and (2) employee misconduct, including alleged use of force or verbal abuse by Border Patrol, OFO, and ICE employees. DHS provides individuals in short-term holding facilities the opportunity to submit their complaints directly to CBP or ICE officials at the local holding facility. DHS headquarters and holding facility officials we spoke with told us that generally it is DHS's practice to address complaints immediately and at the lowest level possible through oral communication with Border Patrol, OFO, and ICE facility staff. Generally, an individual would submit a complaint to a supervisor at a holding facility, who would try to resolve the complaint as quickly as possible, especially if the complaint related to the conditions of confinement. For example, according to CBP and ICE officials responsible for holding facilities, individuals make complaints, such as being cold or hungry, and request that officers provide them with a blanket or food. Officers will attempt to resolve such complaints as quickly as possible by supplying a blanket or providing a meal or a snack. In addition to making complaints directly to officials at holding facilities, individuals in holding facilities can submit complaints through various mechanisms at the DHS or component headquarters level. These mechanisms include: (1) the DHS Office of Inspector General (OIG); (2) the DHS Office for Civil Rights and Civil Liberties (CRCL); (3) the CBP INFO Center; (4) the ICE Detention Reporting and Information Line (DRIL); and (5) the Joint Intake Center (JIC). Complaints can be submitted by telephone, e-mail, mail, or fax. For example, the DHS OIG operates a toll-free hotline to receive complaints.
Each of the five complaint mechanisms has a different purpose and is designed to address different issues, including alleged violations of civil rights and civil liberties and other types of grievances. According to DHS officials, complaints can be reported through any of these different mechanisms and the same complaint may be reported through multiple mechanisms. For example, according to an ICE official, the same complaint may be submitted to DHS OIG, DHS CRCL, and the JIC; however, only one investigation into the complaint may be conducted. Table 1 summarizes the different DHS mechanisms through which individuals can submit complaints, including the responsible DHS entity and the purpose of each mechanism. While DHS and its components make information publicly available on the various complaint mechanisms, they have not consistently communicated information to individuals in CBP and ICE holding facilities on mechanisms that are available for them to submit a complaint. DHS primarily advertises available complaint mechanisms through organizational websites. For example, ICE ERO includes information on its website advertising the DRIL, and the CBP website communicates information regarding the CBP INFO Center. In addition, DHS CRCL has a public guide available on its website listing the various DHS complaint mechanisms; however, this information is not consistently communicated in holding facilities. During our visits to Border Patrol, OFO, and ICE holding facilities we observed that the posters used to communicate DHS complaint mechanisms varied in their coverage. For example, while all 32 ICE and CBP holding facilities we visited included at least one poster on how to file a complaint with the DHS OIG or a component involving a potential incident of sexual abuse or assault related to the Prison Rape Elimination Act (PREA), the facilities differed in the extent to which they communicated how to submit a non-PREA complaint through DHS complaint mechanisms. 
Specifically:

ICE. We observed that most ICE holding facilities (6 of 8) posted information on how individuals can contact the DHS OIG to file non-PREA complaints, while 4 of 8 posted information on the DRIL and 2 of 8 posted information on the JIC. Half of the ICE holding facilities (4 of 8) included a "speed dial" poster with phone numbers for external resources (e.g., "Mexican Consulate" or "Joint Intake Center"); however, the posters do not provide any information on these resources, including their purpose.

Border Patrol. We observed that 4 of 17 Border Patrol holding facilities posted information on how individuals can contact the DHS OIG to file general complaints, but the remaining facilities did not have information posted on any complaint mechanisms, such as the JIC or CBP INFO Center.

OFO. We observed that 1 of 6 OFO holding facilities posted information on both the DHS OIG and the CBP INFO Center; however, at the remaining facilities we did not see posted information on any reporting mechanisms.

Figure 3 shows an example of a PREA poster that we observed in a Border Patrol holding facility. DHS components have undertaken some efforts to review complaints-related signage in holding facilities. For example, in 2015, the CBP Commissioner's Office directed OFO and Border Patrol to review complaints-related signage in holding facilities because of concerns that it might be outdated and not consistently in place. As part of that effort, the components issued guidance to the field; however, the guidance has been limited to ensuring PREA posters are in place and removing outdated signage. For example, Border Patrol instructed all of its sectors to ensure that signage in holding facilities complied with PREA standards and to remove any outdated signage related to the former Immigration and Naturalization Service.
Similarly, OFO instructed holding facilities to ensure that proper signage, such as PREA information, is displayed in the detention areas. While OFO and Border Patrol took steps to evaluate complaints-related signage in holding facilities and clarify that PREA posters should be in place, they have not provided guidance to the field concerning how and which complaint mechanisms should be communicated to individuals in the holding facilities. According to DHS headquarters and holding facility officials, DHS has not placed an emphasis on which complaint mechanisms should be communicated because individuals are encouraged to submit complaints to holding facility personnel and because an individual may be more likely to submit a complaint while in longer-term detention. For example, according to a Border Patrol official in the field, the majority of complaints made by individuals are made not while they are in Border Patrol custody, but rather after they have been transferred to ICE custody in a detention facility. However, individuals in holding facilities may have concerns that they do not communicate because they are unaware of the available complaint mechanisms. In addition, while CBP and ICE may encourage individuals to submit complaints to holding facility personnel rather than through external complaint mechanisms, individuals who need to file a complaint may not necessarily (1) be able to get their complaint addressed at the field level, or (2) feel comfortable lodging a complaint with a local official, due to, for example, fear of retribution. An ICE holding facility official shared this view, stating that it is important to inform individuals of external mechanisms like the DRIL since they may not be comfortable making a complaint locally or the local office may not properly resolve the issue. Furthermore, agency officials and advocacy organizations have expressed concerns about the transparency of DHS's processes for obtaining holding facility complaints.
For example, during our review, both headquarters and field officials within the components stated that individuals may not understand the different avenues available to file a complaint. In addition, in 2014, an advocacy organization expressed written concerns to DHS, noting that there are many different complaint mechanisms in place and that the public is confused about where and how to submit complaints related to holding facilities. Standards for Internal Control in the Federal Government state that management should document each unit's responsibilities through policy to allow management to effectively monitor the control activity. The standards also state that management should communicate quality information down and across reporting lines to enable personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. By providing guidance to the field that specifies how and which complaint mechanisms should be communicated in holding facilities, Border Patrol, OFO, and ICE could better ensure that individuals have full recourse to the mechanisms available to them should they need to file a complaint about facility conditions, misconduct, abuse, or other issues.

Most of the complaint tracking systems that DHS and its components employ do not have classification codes for holding facilities that would allow agencies to readily identify which complaints are related to holding facilities and to analyze these complaints for potential trends. The DHS OIG, CRCL, the CBP INFO Center, the DRIL, and the JIC maintain tracking systems for complaints; however, information on holding facilities is typically subsumed within a narrative field. To better understand the capabilities of DHS complaint tracking systems, we gathered data from the Joint Integrity Case Management System (JICMS) because it contains information on both ICE and CBP complaints that may have originated from holding facilities.
In reviewing the JICMS data, we found that the system does not include a facility, facility type, or issue code related to holding facilities that would allow users to readily identify the universe of complaints involving holding facilities. Rather, we found that information identifying whether a complaint involved a holding facility may be located within narrative fields. We searched the database using potentially relevant issue codes and terms that could uncover complaints in the narrative field; however, it was not always clear, even when reviewing the narrative field, whether a complaint was related to a holding facility. For example, we identified complaints alleging that an individual's money was not returned to him or that an individual was injured due to potential use of force; however, it was unclear whether these complaints related to ICE holding facilities or detention facilities. Upon our request, the CBP Office of Internal Affairs produced a report from JICMS showing potential holding facility complaints; however, officials from that office noted that it was a time-consuming and labor-intensive process, and that the report would not necessarily account for all holding facility complaints. According to DHS officials, with the exception of CRCL's database, the complaint tracking systems for the other mechanisms—DHS OIG, DRIL, and CBP INFO Center—present similar limitations. Similarly, in February 2016, we reported that CRCL, the DRIL, and the JIC maintain medical-related complaint data in their respective tracking systems; however, the data, in most cases, are not tracked or analyzed for trending purposes. Specifically, we found that while DHS provides various avenues for detainees to file medical care complaints related to immigration detention, DHS does not have a mechanism to readily determine the overall volume of medical-related complaints it receives, their status, or their outcome.
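A keyword search of free-text narrative fields, of the kind performed against JICMS, might look like the following sketch. The complaint records and search terms are hypothetical; JICMS's actual schema and contents are not reproduced here.

```python
import re

# Hypothetical complaint records with free-text narratives.
complaints = [
    {"case": "C-001", "narrative": "Alleges money not returned after release from a hold room."},
    {"case": "C-002", "narrative": "Detainee at a detention facility reports poor meal quality."},
    {"case": "C-003", "narrative": "Individual in a holding cell reports cold temperatures."},
]

# Terms that may indicate a holding-facility complaint. Because the
# narrative is free text, matches are only candidates for manual review,
# not a definitive universe of holding facility complaints.
PATTERN = re.compile(r"hold(?:ing)?\s+(?:room|cell|facility)", re.IGNORECASE)

def candidate_cases(complaints):
    """Return case numbers whose narratives mention holding-facility terms."""
    return [c["case"] for c in complaints if PATTERN.search(c["narrative"])]
```

Note the limitation this illustrates: a narrative mentioning a "detention facility" does not match, and a genuine holding-facility complaint that never uses such terms would be missed, which is why a dedicated classification code is preferable to text search.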
DHS and component officials stated that there are not many complaints related to holding facilities, so they have not prioritized creating a specific classification code for holding facilities or conducting trend analyses on complaints related to holding facilities. However, without creating a classification code for holding facilities and conducting trend analysis, DHS does not have a way of knowing the number and types of complaints individuals may be submitting related to their short-term custody at CBP and ICE holding facilities. Moreover, CBP officials responsible for the CBP INFO Center indicated that trend analysis of complaint information would help CBP understand where there are potential operational issues and help the agency mitigate those issues. Additionally, a recent review of use-of-force incidents by the DHS OIG found that CBP should better analyze use-of-force data—which could be informed by complaints of employee misconduct—to inform departmental decision-making. According to Standards for Internal Control in the Federal Government, management should process the data it collects into quality information that can be used to support the internal control system. The Standards also call for management to develop procedures to monitor the performance of regular operations over time and for effective communication within and across agencies to help ensure appropriate decisions are made. Creating a classification code for holding facilities within the various DHS complaint tracking systems would allow DHS to more readily access data on complaints related to individuals' short-term custody at CBP and ICE holding facilities. Such data could help DHS maintain greater visibility into the complaints, including complaint volume, the facilities where complaints are filed, and differences across facility types—all of which could better position DHS to analyze and identify potential trends and use this information to inform management decisions.
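With a dedicated classification code in place, the trend analysis discussed above reduces to a simple grouping over the tagged records. The facility types, issue labels, and counts below are invented for illustration and are not actual DHS data.

```python
from collections import Counter

# Hypothetical complaints already tagged with a holding-facility
# classification code and an issue type.
tagged_complaints = [
    {"facility_type": "Border Patrol station", "issue": "conditions of confinement"},
    {"facility_type": "Border Patrol station", "issue": "conditions of confinement"},
    {"facility_type": "OFO port of entry", "issue": "employee misconduct"},
    {"facility_type": "ICE holding facility", "issue": "conditions of confinement"},
]

def complaint_trends(complaints):
    """Count complaints by (facility type, issue) so recurring issues stand out."""
    return Counter((c["facility_type"], c["issue"]) for c in complaints)

trends = complaint_trends(tagged_complaints)
```

Such counts, broken out by facility type and issue, are what would let managers spot, for example, a cluster of conditions-of-confinement complaints at one class of facility and target compliance monitoring accordingly.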
Additionally, analyzing this type of data for trends could help guide DHS’s efforts during annual compliance monitoring, such as including inspection areas related to common complaints filed in holding facilities. CBP and ICE maintain holding facilities across the nation, which contain basic features and are designed specifically for the short-term custody of individuals. CBP and ICE have standards and monitoring processes in place at the headquarters and field levels—including the amount of time an individual generally may be held—to help ensure that holding facilities are providing the appropriate care. While Border Patrol and ICE maintain systems to track time in custody, assessing the data to ensure its quality would improve its utility in accurately informing Border Patrol and ICE’s operations. Furthermore, fully assessing their time in custody data would help the components better understand the various factors impacting time in custody, and would better position them to identify steps, if needed, to address the amount of time individuals are held in custody. In addition, DHS and its components have a number of complaint mechanisms in place. However, providing guidance to holding facilities on which of DHS’s various complaint mechanisms they should communicate to individuals in custody would help CBP and ICE have better assurance that individuals in custody within holding facilities have received information on how to submit a complaint. In addition, developing a process for analyzing trends related to holding facility complaints would provide CBP and ICE with more information to oversee such facilities and aid in management decision-making. 
To enhance the monitoring of holding facilities, the Secretary of Homeland Security should direct Border Patrol and ICE to develop and implement a process to assess their time in custody data for all individuals in holding facilities, including: identifying and addressing potential data quality issues; and identifying cases where time in custody exceeded guidelines and assessing the factors impacting time in custody. To strengthen the transparency of the complaints process, the Secretary of Homeland Security should direct CBP and ICE to develop and issue guidance on how and which complaint mechanisms should be communicated to individuals in custody at holding facilities. To facilitate the tracking of holding facility complaints, we recommend that the Secretary of Homeland Security include a classification code in all complaint tracking systems related to DHS holding facilities. To provide useful information for compliance monitoring, the Secretary of Homeland Security should direct CBP and ICE to develop and implement a process for analyzing trends related to holding facility complaints across their respective component. We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with all four recommendations in the report and described actions underway or planned to address them. With regard to the first recommendation related to assessing time in custody data, DHS concurred and stated that Border Patrol and ICE will develop processes to assess time in custody data for all individuals in holding facilities. For example, ICE will take steps to validate length of stay data and identify potential data quality issues. 
With regard to the second recommendation that CBP and ICE develop and issue guidance on how and which complaint mechanisms should be communicated to individuals in holding facilities, DHS concurred and stated that CBP and ICE will develop and issue such guidance. For example, CBP plans to leverage an existing working group to develop and coordinate guidance on complaint mechanisms. With regard to the third recommendation that DHS include a classification code in all complaint tracking systems related to DHS holding facilities, DHS concurred and stated that the agency will take measures to add a code to tracking systems. Specifically, DHS plans to explore the feasibility of adding a source location code specific to holding facilities within tracking systems. With regard to the fourth recommendation that CBP and ICE develop and implement a process for analyzing trends related to holding facility complaints across their respective component, DHS concurred and stated that each component will institute a process. For example, CBP plans to develop reports on trends and patterns related to holding facilities. To the extent that CBP and ICE analyze trends in all complaint tracking systems, including JICMS, these steps should meet the intent of the recommendation. These planned actions, if fully implemented, should address the intent of the four recommendations contained in the report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.
Our objectives were to determine the extent to which the Department of Homeland Security (DHS) has (1) standards in place for the short-term custody of aliens and monitors compliance with established standards and (2) processes in place for obtaining and addressing complaints from aliens in holding facilities. For this report, our scope covered holding facilities operated by U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE). Specifically, we included in our review holding facilities managed by CBP's U.S. Border Patrol and Office of Field Operations (OFO), as well as ICE's Enforcement and Removal Operations (ERO). Within OFO, we focused on holding facilities at land Ports of Entry (POE) and excluded air and maritime POEs since the environment at land POEs, including time in custody, is more similar to Border Patrol and ICE ERO holding facilities. To address these questions, we visited a nongeneralizable sample of 32 CBP and ICE holding facilities in California (July 2015), Florida (August/September 2015), Texas (November 2015), and Virginia (January 2016) to, among other things, observe holding facility conditions and conduct semistructured interviews with holding facility personnel and senior officials with Border Patrol sectors, OFO field offices, and ICE ERO field offices. Specifically, we visited 17 Border Patrol facilities, 7 OFO facilities, and 8 ICE facilities. We selected these facilities based on a mix of factors, such as facility type, differences in geographical location, number of apprehensions, and recommendations made by DHS and advocacy organizations that work with individuals held in DHS's custody. We focused the site visit interviews on holding facility standards, compliance mechanisms, and avenues for individuals to make complaints.
The information we obtained from our holding facility visits cannot be generalized to all facilities, but provided us insights into the implementation of policies and procedures used by DHS to oversee holding facilities and manage complaints. Prior to our site visits, we interviewed five advocacy organizations to obtain their perspective on DHS’s management of holding facilities. We identified these organizations through similar GAO work and the recommendations of officials with advocacy organizations. While not generalizable, this sample of organizations provided us with insights into the perspectives of advocacy organizations regarding DHS’s short-term custody of aliens. To determine the extent to which DHS has standards in place for the short-term custody of aliens and monitors compliance with established standards, we reviewed agency documentation, including holding facility policies and procedures and self-inspection results. Specifically, we analyzed national standards for holding facilities covering, among other things, the conditions of confinement, such as the provision of meals and water, and time in custody. These standards include CBP’s October 2015 National Standards on Transport, Escort, Detention and Search; Border Patrol’s January 2008 Hold Rooms and Short Term Custody policy; OFO’s August 2008 Secure Detention, Transport, and Escort Procedures at Ports of Entry; and ICE’s September 2014 Operations of ERO Holding Facilities policy. To better understand the standards and monitoring processes in place, we interviewed Border Patrol, OFO, and ICE officials at the headquarters level that have responsibility for overseeing holding facilities, as well as holding facility personnel and sector/field office officials. During these interviews, among other things, we determined the extent to which agencies use and analyze data, such as time in custody, for oversight purposes and discussed the various factors that might impact time in custody. 
We assessed DHS practices for monitoring holding facilities against relevant standards in Standards for Internal Control in the Federal Government. In addition, we collected and analyzed fiscal year 2014 through 2015 Border Patrol data on apprehensions and alien time in custody—the most recent data maintained by Border Patrol at the time of our review—to determine the population and time in custody for aliens in holding facilities. To determine the reliability of this data, we reviewed Border Patrol documentation and interviewed agency officials responsible for ensuring data quality about e3—the system that Border Patrol uses to track information on aliens held in short-term custody. We determined that the apprehension data was sufficiently reliable for the purposes of our reporting objectives; however, we could not determine the reliability of the time in custody data because of potential irregularities, such as individuals indicated as having many months in custody, which we discuss in the report. We also collected data from ICE on the number of aliens in custody at ERO holding facilities; however, based on a review of ICE documentation and interviews with ICE officials responsible for ensuring data quality, we determined that the data was not reliable because of missing and inaccurate data, including a potentially significant over-count in the number of aliens in custody at one holding facility. Moreover, we were unable to analyze or determine the reliability of ICE data on time in custody because the agency does not include hours in custody in its standard reports. We were unable to obtain OFO data on the number of aliens and their time in custody at holding facilities because the agency does not currently collect it nationwide.
To determine the extent to which DHS has processes in place for obtaining and addressing complaints from aliens in holding facilities, we analyzed documentation on the DHS Office of Inspector General (OIG), DHS Office for Civil Rights and Civil Liberties, ICE/CBP Joint Intake Center (JIC), CBP INFO Center, and ICE Detention Reporting and Information Line processes for managing complaints and interviewed officials from these complaint mechanisms. We learned from our review of documentation and interviews with agency officials that DHS complaint tracking systems generally do not have a classification code for holding facility complaints. To better understand the characteristics of these tracking systems, we analyzed fiscal year 2012-2014 data maintained in the Joint Integrity Case Management System (JICMS)—the system ICE and CBP use to track complaints reported to the JIC, including those related to holding facilities. We selected JICMS data to evaluate since it contains information on both ICE and CBP complaints. Based on this analysis, we confirmed that it was not possible to identify the universe of holding facility complaints in JICMS since the tracking system does not have a facility or issue type code associated with holding facilities. In addition, during our site visits, we interviewed holding facility officials on their local processes for obtaining and addressing complaints and evaluated how holding facilities communicated available complaint mechanisms. Specifically, we observed whether holding facilities posted information on available complaint mechanisms, such as the DHS OIG, in holding cells/rooms or in the processing area, and we summarized the results of these observations by holding facility and complaint mechanism. We assessed DHS's processes for obtaining and addressing complaints against relevant standards in Standards for Internal Control in the Federal Government.
We conducted this performance audit from May 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kirk Kiester (Assistant Director), Anthony Fernandez, Eric Hauswirth, Susan Hsu, Brian Lipman, Jon Najmi, Steven Rocker, and Mike Silver made key contributions to this report. | DHS is responsible for providing safe, secure, and humane confinement for detained aliens who may be subject to removal or have been ordered removed from the United States. For example, during fiscal years 2014 and 2015, Border Patrol apprehended 823,768 aliens and held them temporarily in holding facilities. GAO was asked to examine DHS's management and oversight of holding facilities. This report examines the extent to which DHS has (1) standards in place for the short-term custody of aliens and monitors compliance with established standards and (2) processes in place for obtaining and addressing complaints from aliens in holding facilities. GAO reviewed CBP and ICE data on time in custody and complaints. GAO also interviewed agency officials and visited 32 holding facilities selected based on geographical location and facility type, among other factors. The visit results are not generalizable, but provided insight to the oversight of holding facilities and management of complaints. The Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP) and U.S. 
Immigration and Customs Enforcement (ICE) have standards for short-term holding facilities—which are generally designed to keep individuals in custody for 24 hours or less—and some processes to monitor compliance with the standards. For example, each component has policies governing the operation of holding facilities, and CBP has an annual Self-Inspection Program, which is designed to assess internal controls in all CBP operations, including holding facilities. However, U.S. Border Patrol, within CBP, and ICE do not have a process to fully assess data on the amount of time individuals are held in custody. Such a process could help these agencies in better understanding issues that GAO identified, such as data quality, level of compliance with agency standards, and factors impacting time in custody. For example, GAO identified potential irregularities with Border Patrol's fiscal year 2014 to 2015 time in custody data, due to, among other things, delays in agents recording individuals' “book-out” from holding facilities. In addition, although Border Patrol officials from 10 holding facilities GAO visited stated that time in custody rarely exceeds 72 hours, GAO noted that approximately 16 percent of Border Patrol's cases with complete data in fiscal years 2014 to 2015 exceeded this threshold. Developing and implementing a process to assess time in custody data, consistent with internal control standards, would provide Border Patrol and ICE with more visibility into the quality of their data, facility compliance with time in custody guidelines, and the factors impacting time in custody. DHS has various mechanisms to obtain and address complaints related to holding facilities. Specifically, individuals can submit complaints directly to holding facilities or to one of various DHS entities, including the DHS Office of Inspector General (OIG) and Joint Intake Center (JIC). 
However, DHS and its components have not consistently communicated information to individuals in CBP and ICE holding facilities on these mechanisms. For example, during site visits to DHS holding facilities, GAO observed that the posters used to communicate DHS complaint mechanisms varied in their coverage. Providing guidance to holding facilities on which of DHS's various complaint mechanisms they should communicate to individuals in custody, consistent with internal control standards, would help DHS have better assurance that individuals in custody within holding facilities have received information on how to submit a complaint. DHS complaint mechanisms maintain data in various systems; however, most of these systems do not have a classification code for holding facilities that would allow users to readily identify the universe of complaints involving holding facilities and conduct trend analysis. For example, the JIC's complaint tracking system does not include a facility, facility type, or issue code related to holding facilities. GAO found that information identifying whether a complaint involved a holding facility may be located within narrative fields. Creating a classification code and conducting trend analysis on holding facility complaints, consistent with internal control standards, would provide DHS with useful information for management decisions, including targeting areas for compliance monitoring. GAO recommends that DHS establish a process to assess time in custody data for all individuals in holding facilities; issue guidance on how and which complaint mechanisms should be communicated to individuals in short-term custody; include a classification code in all complaint tracking systems related to DHS holding facilities; and develop a process for analyzing trends related to holding facility complaints. DHS concurred with the recommendations and identified planned actions. |
See GAO-09-399. property transported by commercial passenger aircraft. At the 463 TSA-regulated airports in the United States, prior to boarding an aircraft, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures, which include passengers passing through security checkpoints where they and their identification documents are checked by transportation security officers (TSO) and other TSA employees or by private sector screeners under TSA’s Screening Partnership Program. Airport operators, however, are directly responsible for implementing TSA security requirements, such as those relating to perimeter security and access controls, in accordance with their approved security programs and other TSA direction. TSA relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers include behavior detection officers (BDO), who examine passenger behaviors and appearances to identify passengers who might pose a potential security risk at TSA-regulated airports; TSA has selectively deployed about 3,000 BDOs to 161 of 463 TSA-regulated airports in the United States, including Boston-Logan airport where the program was initially deployed in 2003. Other security layers include travel document checkers, who examine tickets, passports, and other forms of identification; TSOs responsible for screening passengers and their carry-on baggage at passenger checkpoints, using x-ray equipment, magnetometers, Advanced Imaging Technology, and other devices; random employee screening; and checked baggage screening systems. Additional layers cited by TSA include, among others, intelligence gathering and analysis; passenger prescreening against terrorist watchlists; random canine team searches at airports; federal air marshals, who provide federal law enforcement presence on selected flights operated by U.S. 
air carriers; Visible Intermodal Prevention and Response (VIPR) teams; reinforced cockpit doors; the passengers themselves; as well as other measures both visible and invisible to the public. Figure 1 shows TSA’s layers of aviation security. TSA has also implemented a variety of programs and protective actions to strengthen airport perimeters and access to sensitive areas of the airport, including conducting additional employee background checks and assessing different biometric-identification technologies. Airport perimeter and access control security is intended to prevent unauthorized access into secure areas of an airport—either from outside or within the airport complex. According to TSA, each one of these layers alone is capable of stopping a terrorist attack. TSA states that the security layers in combination multiply their value, creating a much stronger system, and that a terrorist who has to overcome multiple security layers to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt. We reported in May 2010 that TSA deployed SPOT nationwide before first determining whether there was a scientifically valid basis for using behavior and appearance indicators as a means for reliably identifying passengers who may pose a risk to the U.S. aviation system. DHS’s Science and Technology Directorate completed a validation study in April 2011 to determine the extent to which SPOT was more effective than random screening at identifying security threats and how the program’s behaviors correlate to identifying high-risk travelers. However, as noted in the study, the assessment was an initial validation step, but was not designed to fully validate whether behavior detection can be used to reliably identify individuals in an airport environment who pose a security risk. According to DHS, additional work will be needed to comprehensively validate the program. 
According to TSA, SPOT was deployed before a scientific validation of the program was completed to help address potential threats to the aviation system, such as those posed by suicide bombers. TSA also stated that the program was based upon scientific research available at the time regarding human behaviors. We reported in May 2010 that approximately 14,000 passengers were referred to law enforcement officers under SPOT from May 2004 through August 2008. Of these passengers, 1,083 were arrested for various reasons, including being illegal aliens (39 percent), having outstanding warrants (19 percent), and possessing fraudulent documents (15 percent). The remaining 27 percent were arrested for other reasons. As noted in our May 2010 report, SPOT officials told us that it is not known if the SPOT program has resulted in the arrest of anyone who is a terrorist, or who was planning to engage in terrorist-related activity. According to TSA, in fiscal year 2010, SPOT referred about 50,000 passengers for additional screening and resulted in about 3,600 referrals to law enforcement officers. The referrals to law enforcement officers yielded approximately 300 arrests. Of these 300 arrests, TSA stated that 27 percent were illegal aliens, 17 percent were drug-related, 14 percent were related to fraudulent documents, 12 percent were related to outstanding warrants, and 30 percent were related to other offenses. DHS has requested about $254 million for fiscal year 2012 for the SPOT program, which would support an additional 350 (or 175 full-time equivalent) BDOs. If TSA receives its requested appropriation, TSA will be in a position to have invested about $1 billion in the SPOT program since fiscal year 2007. According to TSA, as of August 2011, TSA is pilot testing revised procedures for BDOs at Boston-Logan airport to engage passengers entering screening in casual conversation to help determine suspicious behaviors. 
According to TSA, after a passenger’s travel documents are verified, a BDO will briefly engage each passenger in conversation. If more information is needed to help determine suspicious behaviors, the officer will refer the passenger to a second BDO for a more thorough conversation to determine if additional screening is needed. TSA noted that these BDOs have received additional training in interviewing methods. TSA plans to expand this pilot program to additional airports in the fall of 2011. A 2008 report issued by the National Research Council of the National Academy of Sciences stated that the scientific evidence for behavioral monitoring is preliminary in nature. The report also noted that an information-based program, such as a behavior detection program, should first determine if a scientific foundation exists and use scientifically valid criteria to evaluate its effectiveness before deployment. The report added that such programs should have a sound experimental basis and that the documentation on the program’s effectiveness should be reviewed by an independent entity capable of evaluating the supporting scientific evidence. According to the report, a terrorist’s desire to avoid detection makes information-gathering techniques, such as asking what a person has done, is doing, or plans to do, highly unreliable. Using these techniques to elicit information could also have definite privacy implications. These findings, in particular, may be important as TSA moves forward with its pilot program to expand BDOs’ use of conversation and interviews with all passengers entering screening. As we reported in May 2010, an independent panel of experts could help DHS develop a comprehensive methodology to determine if the SPOT program is based on valid scientific principles that can be effectively applied in an airport environment for counterterrorism purposes. 
Thus, we recommended that the Secretary of Homeland Security convene an independent panel of experts to review the methodology of the ongoing SPOT validation study to determine whether it was sufficiently comprehensive to validate the SPOT program. We also recommended that this assessment include appropriate input from other federal agencies with expertise in behavior detection and relevant subject matter experts. DHS concurred and stated that its validation study, completed in April 2011, included an independent review of the study with input from a broad range of federal agencies and relevant experts, including those from academia. DHS’s validation study found that SPOT was more effective than random screening to varying degrees. For example, the study found that SPOT was more effective than random screening at identifying individuals who possessed fraudulent documents and identifying individuals whom law enforcement officers ultimately arrested. However, DHS noted that the identification of such high-risk passengers was rare in both the SPOT and random tests. In addition, DHS determined that the base rate, or frequency, of SPOT behavioral indicators observed by TSA to detect suspicious passengers was very low and that these observed indicators were highly varied across the traveling public. Although details about DHS’s findings related to these indicators are sensitive security information, the low base rate and high variability of traveler behaviors highlight the challenge that TSA faces in effectively implementing a standardized list of SPOT behavioral indicators. In addition, DHS outlined several limitations to the study. For example, the study noted that BDOs were aware of whether individuals they were screening were referred to them as the result of identified SPOT indicators or random selection. DHS stated that this had the potential to introduce bias into the assessment. 
DHS also noted that SPOT data from January 2006 through October 2010 were used in its analysis of behavioral indicators even though questions about the reliability of the data exist. In May 2010, we reported weaknesses in TSA’s process for maintaining operational data from the SPOT program database. Specifically, the SPOT database did not have computerized edit checks built into the system to review the format, existence, and reasonableness of data. In another example, BDOs could not input all behaviors observed in the SPOT database because the database limited entry to eight behaviors, six signs of deception, and four types of prohibited items per passenger referred for additional screening. Because of these data-related issues, we reported that meaningful analyses could not be conducted at that time to determine if there is an association between certain behaviors and the likelihood that a person displaying certain behaviors would be referred to a law enforcement officer or whether any behavior or combination of behaviors could be used to distinguish deceptive from nondeceptive individuals. In our May 2010 report, we recommended that TSA establish controls for this SPOT data. DHS agreed and TSA has established additional data controls as part of its database upgrade. However, some of DHS’s analysis for this study used SPOT data recorded prior to these additional controls being implemented. The study also noted that it was not designed to comprehensively validate whether SPOT can be used to reliably identify individuals in an airport environment who pose a security risk. The DHS study made recommendations related to strengthening the program and conducting a more comprehensive validation of whether the science can be used for counterterrorism purposes in the aviation environment. Some of these recommendations, such as the need for a comprehensive program evaluation including a cost-benefit analysis, reiterate recommendations made in our May 2010 report. 
TSA is currently reviewing the study’s findings and assessing the steps needed to address DHS’s recommendations but does not have time frames for completing this work. If TSA decides to implement the recommendations in the April 2011 DHS validation study, DHS may be years away from knowing whether there is a scientifically valid basis for using behavior detection techniques to help secure the aviation system against terrorist threats given the broad scope of the additional work and related resources identified by DHS for addressing the recommendations. Thus, as we reported in March 2011, Congress may wish to consider the study’s results in making future funding decisions regarding the program. We reported in September 2009 that TSA has implemented a variety of programs and actions since 2004 to improve and strengthen airport perimeter and access controls security, including strengthening worker screening and improving access control technology. For example, to better address the risks posed by airport workers, in 2007 TSA implemented a random worker screening program that was used to enforce access procedures, such as ensuring workers display appropriate credentials and do not possess unauthorized items when entering secure areas. According to TSA officials, this program was developed to help counteract the potential vulnerability of airports to an insider attack—an attack from an airport worker with authorized access to secure areas. TSA has also expanded its requirements for conducting worker background checks and the population of individuals who are subject to these checks. For example, in 2007 TSA expanded requirements for name-based checks to all individuals seeking or holding airport-issued identification badges and in 2009 began requiring airports to renew all airport-identification media every 2 years. 
TSA also reported taking actions to identify and assess technologies to strengthen airport perimeter and access controls security, such as assisting the aviation industry and a federal aviation advisory committee in developing security standards for biometric access controls. However, we reported in September 2009 that while TSA has taken actions to assess risk with respect to airport perimeter and access controls security, it had not conducted a comprehensive risk assessment based on assessments of threats, vulnerabilities, and consequences, as required by DHS’s National Infrastructure Protection Plan (NIPP). We further reported that without a full depiction of threats, vulnerabilities, and consequences, an organization’s ability to establish priorities and make cost-effective security decisions is limited. We recommended that TSA develop a comprehensive risk assessment, along with milestones for completing the assessment. DHS concurred with our recommendation and said it would include an assessment of airport perimeter and access control security risks as part of a comprehensive assessment for the transportation sector—the Transportation Sector Security Risk Assessment (TSSRA). The TSSRA, published in July 2010, included an assessment of various risk-based scenarios related to airport perimeter security but did not consider the potential vulnerabilities of airports to an insider attack—the insider threat—which it recognized as a significant issue. In July 2011, TSA officials told us that the agency is developing a framework for insider risk that is to be included in the next iteration of the assessment, which TSA expected to be released at the end of calendar year 2011. Such action, if taken, would meet the intent of our recommendation. We also recommended that, as part of a comprehensive risk assessment of airport perimeter and access controls security, TSA evaluate the need to conduct an assessment of security vulnerabilities at airports nationwide. 
At the time of our review, TSA told us its primary measures for assessing the vulnerability of airports to attack were professional judgment and the collective results of joint vulnerability assessments (JVA) it conducts with the Federal Bureau of Investigation (FBI) for select—usually high-risk—airports. Our analysis of TSA data showed that from fiscal years 2004 through 2008, TSA conducted JVAs at about 13 percent of the approximately 450 TSA-regulated airports that existed at that time, thus leaving about 87 percent of airports unassessed. TSA has characterized U.S. airports as an interdependent system in which the security of all is affected or disrupted by the security of the weakest link. However, we reported that TSA officials could not explain to what extent the collective JVAs of specific airports constituted a reasonable systems-based assessment of vulnerability across airports nationwide. Moreover, TSA officials said that they did not know to what extent the 87 percent of commercial airports that had not received a JVA as of September 2009—most of which were smaller airports—were vulnerable to an intentional security breach. DHS concurred with our 2009 report recommendation to assess the need for a vulnerability assessment of airports nationwide, and TSA officials stated that based on our review they intended to increase the number of JVAs conducted at Category II, III, and IV airports and use the resulting data to assist in prioritizing the allocation of limited resources. Our analysis of TSA data showed that from fiscal year 2004 through July 1, 2011, TSA conducted JVAs at about 17 percent of the TSA-regulated airports that existed at that time, thus leaving about 83 percent of airports unassessed. Since we issued our report in September 2009, TSA had not conducted JVAs at Category III and IV airports. 
TSA stated that the TSSRA is to provide a comprehensive risk assessment of airport security, but could not tell us to what extent it has studied the need to conduct JVAs of security vulnerabilities at airports nationwide. Additionally, in August 2011 TSA reported that its national inspection program requires that transportation security inspectors conduct vulnerability assessments at all commercial airports, which are based on the joint vulnerability assessment model. According to TSA, every commercial airport in the United States receives a security assessment each year, including an evaluation of perimeter security and access controls. We have not yet assessed the extent to which transportation security inspectors consistently conduct vulnerability assessments based on the joint vulnerability model. Providing additional information on how and to what extent such security assessments have been performed would more fully address our recommendation. We also reported in September 2009 that TSA’s efforts to enhance the security of the nation’s airports have not been guided by a national strategy that identifies key elements, such as goals, priorities, performance measures, and required resources. To better ensure that airport stakeholders take a unified approach to airport security, we recommended that TSA develop a national strategy for airport security that incorporates key characteristics of effective security strategies, such as measurable goals and priorities. DHS concurred with this recommendation and stated that TSA would implement it by updating the Transportation Systems Sector-Specific Plan (TS-SSP), to be released in the summer of 2010. TSA provided a copy of the updated plan to congressional committees in June 2011 and to us in August 2011. 
We reviewed this plan and its accompanying aviation model annex and found that while the plan provided a high-level summary of program activities for addressing airport security such as the screening of workers, the extent to which these efforts would be guided by measurable goals and priorities, among other things, was not clear. Providing such additional information would better address the intent of our recommendation. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my statement. I look forward to answering any questions that you may have at this time. For questions about this statement, please contact Stephen M. Lord at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are David M. Bruno and Steve Morris, Assistant Directors; Ryan Consaul; Barbara Guffy; Tracey King; Tom Lombardi; and Lara Miklozek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The attempted bombing of Northwest flight 253 in December 2009 underscores the need for effective aviation security programs. Aviation security remains a daunting challenge with hundreds of airports and thousands of flights daily carrying millions of passengers and pieces of checked baggage. The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) has spent billions of dollars and implemented a wide range of aviation security initiatives. 
Two key layers of aviation security are (1) TSA's Screening of Passengers by Observation Techniques (SPOT) program designed to identify persons who may pose a security risk; and (2) airport perimeter and access controls security. This testimony provides information on the extent to which TSA has taken actions to validate the scientific basis of SPOT and strengthen airport perimeter security. This statement is based on prior products GAO issued from September 2009 through September 2011 and selected updates in August and September 2011. To conduct the updates, GAO analyzed documents on TSA's progress in strengthening aviation security, among other things. DHS completed an initial study in April 2011 to validate the scientific basis of the SPOT program; however, additional work remains to fully validate the program. In May 2010, GAO reported that TSA deployed this program, which uses behavior observation and analysis techniques to identify potentially high-risk passengers, before determining whether there was a scientifically valid basis for using behavior and appearance indicators as a means for reliably identifying passengers who may pose a risk to the U.S. aviation system. TSA officials said that SPOT was deployed in response to potential threats, such as suicide bombers, and was based on scientific research available at the time. TSA is pilot testing revised program procedures at Boston-Logan airport in which behavior detection officers will engage passengers entering screening in casual conversation to help determine suspicious behaviors. TSA plans to expand this pilot program in the fall of 2011. GAO recommended in May 2010 that DHS, as part of its validation study, assess the methodology to help ensure the validity of the SPOT program. DHS concurred and stated that the study included an independent review with a broad range of agencies and experts. The study found that SPOT was more effective than random screening to varying degrees. 
However, DHS's study was not designed to fully validate whether behavior detection can be used to reliably identify individuals in an airport environment who pose a security risk. The study also noted that additional work was needed to comprehensively validate the program. TSA officials are assessing the actions needed to address the study's recommendations but do not have time frames for completing this work. In September 2009 GAO reported that since 2004 TSA has taken actions to strengthen airport perimeter and access controls security by, among other things, deploying a random worker screening program; however, TSA had not conducted a comprehensive risk assessment or developed a national strategy. Specifically, TSA had not conducted vulnerability assessments for 87 percent of the approximately 450 U.S. airports regulated for security by TSA in 2009. GAO recommended that TSA develop (1) a comprehensive risk assessment and evaluate the need to conduct airport vulnerability assessments nationwide and (2) a national strategy to guide efforts to strengthen airport security. DHS concurred and TSA stated that the Transportation Sector Security Risk Assessment, issued in July 2010, was to provide a comprehensive risk assessment of airport security. However, this assessment did not consider the potential vulnerabilities of airports to an insider attack--an attack from an airport worker with authorized access to secure areas. In August 2011, TSA reported that transportation security inspectors conduct vulnerability assessments annually at all commercial airports, including an evaluation of perimeter security. GAO has not yet assessed the extent to which inspectors consistently conduct vulnerability assessments. TSA also updated the Transportation Systems Sector-Specific Plan, which summarizes airport security program activities. However, the extent to which these activities were guided by measurable goals and priorities, among other things, was not clear. 
Providing such additional information would better address GAO's recommendation. GAO has made recommendations in prior work to strengthen TSA's SPOT program and airport perimeter and access control security efforts. DHS and TSA generally concurred with the recommendations and have actions under way to address them. |
Since the early 1990s, hundreds of residential treatment programs and facilities have been established in the United States by state agencies and private companies. Many of these programs are intended to provide a less restrictive alternative to incarceration or hospitalization for youth who may require intervention to address emotional or behavioral challenges. As mentioned earlier, it is difficult to obtain an overall picture of the extent of this industry. According to a 2006 report by the Substance Abuse and Mental Health Services Administration, state officials identified 71 different types of residential treatment programs for youth with mental illness across the country. A wide range of government or private entities, including government agencies and faith-based organizations, can operate these programs. Each residential treatment program may focus on a specific client type, such as those with substance abuse disorders or suicidal tendencies. In addition, the programs provide a range of services, either on-site or through links with community programs, including educational, medical, psychiatric, and clinical/mental health services. Regarding oversight of residential treatment programs, states have taken a variety of approaches ranging from statutory regulations that require licensing to no oversight. States differ in how they license and monitor the various types of programs in terms of both the agencies involved and the types of requirements. For example, some states have centralized licensing and monitoring within a single agency, while other states have decentralized these functions among three or more different agencies. There are currently no federal laws that define and regulate residential treatment programs. However, three federal agencies—the Departments of Health and Human Services, Justice, and Education—administer programs that can provide funds to states to support eligible youth who have been placed in some residential treatment programs. 
For example, the Department of Health and Human Services, through its Administration for Children and Families, administers programs that provide funding to states for a wide range of child welfare services, including foster care, as well as improved handling, investigation, and prosecution of youth maltreatment cases. In addition to the lack of a standard, commonly recognized definition for residential treatment programs, there are no standard definitions for specific types of programs—wilderness therapy programs, boot camps, and boarding schools, for instance. For our purposes, we define these programs based on the characteristics we identified during our review of the 10 case studies. For example, in the context of our report, we defined wilderness therapy program to mean a program that places youth in different natural environments, including forests, mountains, and deserts. Figure 1 shows images we took near the wilderness therapy programs we visited. According to wilderness therapy program material, these settings are intended to remove the “distractions” and “temptations” of modern life from teens, forcing them to focus on themselves and their relationships. As part of the program, participants keep journals that often include entries related to why they are in the program and their experiences and goals while in the wilderness. These journals, which program staff read, are part of the individual and group therapy provided in the field. As part of the wilderness experience, these programs also teach basic survival skills, such as setting up a tent and camp, starting a fire, and cooking food. Figure 2 is a photo montage of living arrangements for youth enrolled in the wilderness programs we visited. Some wilderness therapy programs may include a boot camp element. However, many boot camps (which can also be called behavioral modification facilities) exist independently of wilderness training. 
In the context of our report, a boot camp is a residential treatment program in which strict discipline and regimentation are dominant principles. Some military-style boot camp programs also emphasize uniformity and austere living conditions. Figure 3 is a photo montage illustrating a boot camp that minimizes creature comforts and emphasizes organization and discipline. A third type of residential treatment program is known as a boarding school. Although these programs may combine wilderness or boot camp elements, boarding schools (also called academies) are generally advertised as providing academic education beyond the survival skills a wilderness therapy program might teach. This academic education is sometimes approved by the state in which the program operates and may also be transferable as elective credits toward high school. These programs often enroll youth whose parents force them to attend against their will. The schools can include fences and other security measures to ensure that youth do not leave without permission. Figure 4 shows some of the features boarding schools may employ to keep youth in the facilities. A variety of ancillary services related to residential treatment programs are available for an additional fee in some programs. These services include:
- Referral services and educational consultants to assist parents in selecting a program.
- Transport services to pick up a youth and bring him or her to the program. Parents frequently use a transport service if their child is unwilling to attend the program.
- Additional individual, group, or family counseling or therapy sessions as part of treatment. These services may be located on the premises or nearby.
- Financial services, such as loans, to assist parents in covering the expense of residential treatment programs.
These services are marketed toward parents and, with the exception of financial services, are not regulated by the federal government. 
We found thousands of allegations of abuse, some of which involved death, at public and private residential treatment programs across the country between the years 1990 and 2007. We are unable to identify a more concrete number of allegations because we could not locate a single Web site, federal agency, or other entity that collects comprehensive nationwide data related to this issue. Although the NCANDS database, operated by the Department of Health and Human Services, collects some data from states, data submission is voluntary and not all states with residential treatment programs contribute information. According to the most recent NCANDS data, during 2005 alone, 33 states reported 1,619 staff members involved in incidents of abuse in residential programs. Because of limited data collection and reporting, we could not determine the number of incidents of abuse and death associated with private programs. It is important to emphasize that allegations should not be confused with proof of actual abuse. However, in terms of meeting our objective, the thousands of allegations we found came from a number of sources besides NCANDS. For example: We identified claims of abuse and death in pending and closed civil or criminal proceedings with dozens of plaintiffs alleging abuse. For instance, according to one pending civil lawsuit filed as recently as July 2007, dozens of parents allege that their children were subjected to over 30 separate types of abuse. We found attorneys around the country who represent youth and groups of youth who allege that abuse took place while these youth were enrolled in residential treatment programs. For example, an attorney based in New Jersey with whom we spoke has counseled dozens of youth who alleged they were abused in residential treatment programs in past cases, as has another attorney, a retired prosecutor, who advocates for abuse victims.
We found that allegations are posted on various Web sites advocating for the shutdown of certain programs. Past participants in wilderness programs and other youth residential treatment programs have individually or collectively set up sites claiming abuse and death. The Internet contains an unknown number of such Web sites. One site on the Internet, for example, identifies over 100 youth who it claims died in various programs. In other instances, parents of victims who died or were abused in these programs have similarly set up an unknown number of Web sites. Conversely, there are also an unknown number of sites that promote and advocate the benefits of various programs. Because there are no specific reporting requirements or definitions for private programs in particular, we could not determine what percentage of the thousands of allegations we found are related to such programs. In addition, because we could not reconcile information from the sources we used, a small percentage of the allegations likely overlap. We selected 10 closed cases from private programs to examine in greater detail. Specifically, each case involved the death of a teenager in a private residential treatment program between 1990 and 2004. We found significant evidence of ineffective management in most of these 10 cases, with many examples of how program leaders neglected the needs of program participants and staff. In some cases, program leaders gave their staff bad advice when they were alerted to the health problems of a teen. In other cases, program leaders appeared to be so concerned with boosting enrollment that they told parents their programs could provide services that they were not qualified to offer and could not provide. Several cases reveal program leaders who claimed to have credentials in therapy or medicine that they did not have, leading parents to trust them with teens who had serious mental or physical disabilities requiring proper treatment.
These ineffective management techniques compounded the negative consequences of (and sometimes directly resulted in) the hiring of untrained staff; a lack of adequate nourishment; and reckless or negligent operating practices, including a lack of adequate equipment. These specific factors played a significant role in most of the deaths we examined. Untrained staff. A common theme of many of the cases we examined is that staff misinterpreted legitimate medical emergencies. Rather than recognizing the signs of dehydration, heat stroke, or illness, staff assumed that a dying teen was in fact attempting to use trickery to get out of the program. This resulted in the death of teenagers from common, treatable illnesses. In some cases, teens who fell ill from less common ailments exhibited their symptoms for many days, dying slowly while untrained staff continued to believe the teen was “faking it.” Unfortunately, in almost all of our cases, staff realized that a teen was in distress only when it was already too late. Lack of adequate nourishment. In many cases, program philosophy (e.g., “tough love”) was taken to such an extreme that teenagers were undernourished. One program fed teenagers an apple for breakfast, a carrot for lunch, and a bowl of beans for dinner while requiring extensive physical activity in harsh conditions. Another program forced teenagers to fast for 2 days. Teenagers were also given equal rations of food regardless of their height, weight, or other dietary needs. In this program, an ill teenager lost 20 percent of his body weight over the course of about a month. Unbeknownst to staff, the teenager was simultaneously suffering from a perforated ulcer. Reckless or negligent operating practices. In at least two cases, program staff set out to lead hikes in unfamiliar territory that they had not scouted in advance. Important items such as radios and first aid kits were left behind.
In another case, program operators did not take into account the need for an adjustment period between a teenager’s comfortable home life and the wilderness; this endangered the safety of one teenager, who suddenly found herself in an unfamiliar environment. State licensing initiatives attempt, in part, to minimize the risk that some programs may endanger teenagers through reckless and negligent practices; however, not all programs we examined were covered by operating licenses. Furthermore, some licensed programs deviated from the terms of their licenses, leading states, after the death of a teen, to take action against programs that had flouted health and safety guidelines. See table 1 for a summary of the cases we examined. The victim was a 15-year-old female. Her parents told us that she was a date-rape victim who suffered from depression, and that in 1990 she enrolled in a 9-week wilderness program in Utah to build confidence and improve her self-esteem. The victim and her parents found out about the program through a friend who claimed to know the owner. The parents of the victim spoke with the owner of the program several times and reviewed brochures from the owner. The brochure stated that the program’s counselors were “highly trained survival experts” and that “the professional experience and expertise” of its staff was “unparalleled.” The fees and tuition for the program cost a little over $20,600 (or about $327 per day). The victim and her parents ultimately decided that this program would meet their needs and pursued enrollment. The victim’s parents said they trusted the brochures, the program owner, and the program staff. However, the parents were not informed that the program was completely new and that their daughter would be going on the program’s first wilderness trek. Program staff were not familiar with the area, relied upon maps and a compass to navigate the difficult terrain, and became lost. 
As a result, they crossed into the state of Arizona and wandered onto Bureau of Land Management (BLM) land. According to a lawsuit filed by her parents, the victim complained of general nausea, was not eating, and began vomiting water on about the third day of the 5-day hike. Staff ignored her complaints and thought she was “faking it” to get out of the program. Police documents indicate that the two staff members leading the hike stated that they did not realize the victim was slowly dehydrating, despite the fact that she was vomiting water and had not eaten any food. On the fifth day of the hike, the victim fell several times and was described by the other hikers as being “in distress.” It does not appear that staff took any action to help her. At about 5:45 p.m. on the fifth day, the victim collapsed in the road and stopped breathing. According to police records, staff did not call for help because they were not equipped with radios—instead, they performed CPR and attempted to signal for help using a signal fire. CPR did not revive the victim; she died by the side of the road and her body was covered with a tarp. The following afternoon, a BLM helicopter airlifted her body to a nearby city for autopsy. The death certificate for the victim states that she died of dehydration due to exposure. Although local police investigated the death, no charges were filed. Utah officials wanted to pursue the case, but they did not have grounds to do so because the victim died in Arizona. The parents of the victim filed a civil suit and settled out of court for an undisclosed sum. Soon after the victim’s death and 6 months after opening, the founder closed the program and moved to Nevada, where she operated until her program was ordered to close by authorities there.
In a hearing granting a preliminary judgment that enjoined the operator of the program, the judge said that he would not shelter this program, which was in effect hiding from the controls of the adjoining state. He chastised the program owner for running a money-making operation while trying to escape the oversight of the state, writing, “ wishes to conduct a wilderness survival program for children for profit, without state regulation” and she “hide the children from the investigating state authorities and appear uncooperative towards them.” He expressed further concerns, including a statement that participants in the program did not appear to be receiving “adequate care and protection” and that qualified and competent counselors were not in charge of the program. The judge also noted that one of the adult counselors was “an ex-felon and a fugitive.” After this program closed, the program founder returned to Utah and joined yet another program where another death occurred 5 years later (this death is detailed in case seven). We found that the founder of this residential treatment program had a history in the industry—prior to opening the program discussed in this case, she worked as an administrator in the program covered in another case (case two). Today, the program founder is still working in the industry as a consultant, providing advice to parents who may not know of her history. The victim was a 16-year-old female who had just celebrated her birthday. According to her mother, in 1990 the victim was enrolled in a 9-week wilderness therapy program because she suffered from depression and struggled with drug abuse. The victim’s mother obtained brochures from the program owner and discussed the program with him and other program staff. 
According to the mother, the program owner answered all her questions and “really sold the program.” She told us: “I understood there would be highly trained and qualified people with who could handle any emergency… they boasted of a 13-year flawless safety record, I thought to myself ‘why should I worry? Why would anything happen to her?’” Believing that the program would help her daughter, the victim’s mother and stepfather secured a personal loan to pay the $25,600 in tuition for the program (or about $400 per day). She also paid about $4,415 to have a transport service come to the family home and take her daughter to the program. The victim’s mother and stepfather hired the service because they were afraid their daughter would run away when told that she was being enrolled in the program. According to the victim’s mother, two people came to the family home at 4 a.m. to take her daughter to the program’s location in the Utah desert, where a group hike was already under way. Three days into the program, the victim collapsed and died while hiking. According to the program brochure, the first 5 days of the program are “days and nights of physical and mental stress with forced march, night hikes, and limited food and water. Youth are stripped mentally and physically of material facades and all manipulatory tools.” After the victim collapsed, one of the counselors on the hike administered CPR until an emergency helicopter and nurse arrived to take the victim to a hospital, where she was pronounced dead. According to the victim’s mother, her daughter died of “exertional heatstroke.” The program had not made any accommodation or allowed for any adjustment for the fact that her daughter had traveled from a coastal, sea-level residence in Florida to the high desert wilderness of Utah. The mother of the victim also said that program staff did not have salt tablets or other supplies that are commonly used to offset the effects of heat.
Shortly after the victim died, the 9-week wilderness program closed. A state hearing brought to light complaints of child abuse in the program and the owner of the program was charged with negligent homicide. He was acquitted of criminal charges. However, the state child protective services agency concluded that child abuse had occurred and placed the owner on Utah’s registry of child abusers, preventing him from working in the state at a licensed child treatment facility. Two other program staff agreed to cooperate with the prosecution to avoid standing trial; these staff were given probation and prohibited from being involved with similar programs for up to 5 years. In 1994, the divorced parents of the victim split a $260,000 settlement resulting from a civil suit against the owner. After this program closed, its owner opened and operated a number of domestic and foreign residential treatment programs over the next several years. Although he was listed on the Utah registry of suspected child abusers, the program owner opened and operated these programs elsewhere—many of which were ultimately shut down by state officials and foreign governments because of alleged and proven child abuse. At least one of these programs is still operating abroad and is marketed on the Internet, along with 10 other programs considered to be part of the same network. As discussed above, the program owner in our first case originally worked in this program as an administrator before it closed. The victim was a 16-year-old male. According to his parents, in 1994 they enrolled him in a 9-week wilderness therapy program in Utah because of minor drug use, academic underachievement, and association with a new peer group that was having a negative impact on him. The parents learned of the program from an acquaintance and got a program brochure that “looked great” in their opinion. 
They thought the program was well-suited for their son because it was an outdoor program focusing on small groups of youth who were about the same age. They spoke with the program owner and his wife, who flew to Phoenix, Arizona, to talk with them. To be able to afford the program’s cost of about $18,500 (or $263 per day), the victim’s parents told us they took out a second mortgage on their house. They also paid nearly $2,000 to have their son transported to the campsite in the program owner’s private plane. At the time they enrolled their son, the parents were unaware that this program was started by two former employees of a program where a teenager had died (this program is discussed in our second case). According to the victim’s father, his son became sick around the 11th day of the program. According to court and other documents, the victim began exhibiting signs of physical distress and suffered from severe abdominal pain, weakness, weight loss, and loss of bodily functions. Although the victim collapsed several times during daily hikes, accounts we reviewed indicate that staff ignored the victim’s pleas for help. He was forced to continue on for 20 days in this condition. After his final collapse 31 days into the program, staff could not detect any respiration or pulse. Only at this time did staff radio program headquarters and request help, although they were expected to report any illnesses or disciplinary incidents and had signed an agreement when employed stating that they were responsible for “the safety and welfare of fellow staff members and students.” The victim was airlifted to a nearby hospital and was pronounced dead upon arrival. The 5-foot 10-inch victim, already a thin boy, had dropped from 131 to 108 pounds—a loss of nearly 20 percent of his body weight during his month-long enrollment. The victim’s father told us that when he was notified of his son’s death, he could only think that “some terrible accident” had occurred. 
But according to the autopsy report, the victim died of acute peritonitis—an infection related to a perforated ulcer. This condition would have been treatable had there been early medical attention. The father told us that the mortician, against his usual policy, showed him the condition of his son’s body because it was “something that needed to be investigated.” The victim’s father told us he “buckled at the knees” when he saw the body of his son—emaciated and covered with cuts, bruises, abrasions, blisters, and a full-body rash; what he saw was unrecognizable as his son except for a childhood scar above the eye. In the wake of the death, the state revoked the program’s operating license. According to the state’s licensing director, the program closed 3 months later because the attorney general’s office had initiated an investigation into child abuse in the program, although no abuse was found after examining the 30 to 40 youth who were also enrolled in the program when the victim died. The state attorney general’s office and a local county prosecutor filed criminal charges against the program owners and several staff members. After a change of venue, one defendant went to trial and was convicted of “abuse or neglect of a disabled child” in this case. Five other defendants pleaded guilty to a number of other charges—five guilty pleas on negligent homicide and two on failure to comply with a license. The defendants in the case were sentenced to probation and community service. The parents of the victim subsequently filed a civil suit that was settled out of court for an undisclosed amount. The victim was a 15-year-old male. According to the victim’s mother, in 2000 she enrolled her son in a wilderness program in Oregon to build his confidence and develop self-esteem in the wake of a childhood car accident. The accident had resulted in her son sustaining a severe head injury, among other injuries.
After an extensive Internet search and discussions with representatives of various wilderness programs and camps for head-injury victims, the mother told us she selected a program that she believed would meet her son’s needs. What “sold me on the program,” she said, was the program owner’s repeated assurances over the telephone that the program was “a perfect fit” for her son. She told us that to pay for the $27,500 program, she withdrew money from her retirement account. The program lasted between 60 and 90 days (about $305 to $450 per day) depending on a youth’s progression through the program. The victim’s mother said that she became suspicious about the program when she dropped her son off. She said that the program director and another staff person disregarded her statements about her son’s “likes and dislikes,” even though she had believed that the program would take into account the personal needs of her son. Later, she filed a lawsuit alleging that the staff had no experience dealing with brain-injured children and others with certain handicaps who were in the program. What she also did not know was that the founder of the program was himself a former employee of two other wilderness programs in another state where deaths had occurred (we discuss these programs in cases two and three). The program founder also employed staff who had been charged with child abuse while employed at other wilderness programs. According to her lawsuit, her son left the program headquarters on a group hike with three counselors and three other students. Several days into the multiday hike, while camping under permit on BLM land, the victim refused to return to the campsite after being escorted by a counselor about 200 yards to relieve himself. Two counselors then attempted to lead him back to the campsite. According to an account of the incident, when he continued to refuse, they tried to force him to return and they all fell to the ground together.
The two counselors subsequently held the victim face down in the dirt until he stopped struggling; by one account a counselor sat on the victim for almost 45 minutes. When the counselors realized the victim was no longer breathing, they telephoned for help and requested a 9-1-1 operator’s advice on administering CPR. The victim’s mother told us that she found out about the situation when program staff called to tell her that her son was being airlifted to a medical center. Shortly afterwards, a nurse called and urged her to come to the hospital with her husband. They were not able to make it in time—on the drive to the hospital, her son’s doctor called, advised her to pull to the side of the road, and informed her that her son had died. The victim’s mother told us that she was informed, after the autopsy, that the main artery in her son’s neck had been torn. The cause of death was listed as a homicide. In September 2000, after the boy’s death, one of the counselors was charged with criminally negligent homicide. A grand jury subsequently declined to indict him. The victim’s mother told us that at the grand jury hearing, she found out from parents of other youth in the program that they had been charged different amounts of money for the same program, and that program officials had told them what they wanted to hear about the program’s ability to meet each of their children’s special needs. In early 2001, the mother of the victim filed a $1.5 million wrongful death lawsuit against the program, its parent company, and its president. The lawsuit was settled in 2002 for an undisclosed amount. Due in part to the victim’s death, in early 2002, Oregon implemented its outdoor licensing requirements. 
The state’s Department of Justice subsequently filed a complaint alleging numerous violations of the state’s Unlawful Trade Practices Act and civil racketeering laws, including charges that the program misrepresented its safety procedures and criminally mistreated enrolled youth. In an incident unconnected to this case, the program was also charged with child abuse related to frostbite. As a result of these complaints, in February of 2002, the program entered into an agreement with the state’s attorney general to modify program operations and pay a $5,000 fee. The program continued to work with the State of Oregon throughout 2002 to comply with the agreement. In the summer of 2002, BLM revoked the camping permit for the program due, in part, to the victim’s death. The program closed in December of 2002. The victim was a 14-year-old male. According to his father, in 2001 the victim was enrolled in a private West Virginia residential treatment center and boarding school. He told us that his son had been diagnosed with clinical depression, had attempted suicide twice, was on medication, and was being treated by a psychiatrist. Because their son was having difficulties in his school, the parents—in consultation with their son’s psychiatrist—decided their son would benefit by attending a school that was more sensitive to their son’s problems. To identify a suitable school, the family hired an education consultant who said he was a member of an educational consultants’ association and that he specialized in matching troubled teens with appropriate treatment programs. The parents discussed their son’s personality, medical history (including his previous suicide attempts), and treatment needs with the consultant. According to the father, the consultant “quickly” recommended the West Virginia school. The program was licensed by the state and cost almost $23,000 (or about $255 per day).
According to the parents and court documents, the victim committed suicide 6 days into the program. On the day before he killed himself, while participating in the first phase of the program (“survival training”), the victim deliberately cut his left arm four times from wrist to elbow using a pocket knife issued to him by the school. After cutting himself, the victim approached a counselor and showed him what he had done, pleading with the counselor to take the knife away before he hurt himself again. He also asked the counselor to call his mother and tell her that he wanted to go home. The counselor spoke with the victim, elicited a promise from him not to hurt himself again, and gave the knife back. The next evening the victim hanged himself with a cord not far from his tent. Four hours passed before the program chose to notify the family about the suicide. When the owner of the program finally called the family to notify them, according to the father, the owner said, “There was nothing we could do.” In the aftermath of the suicide, the family learned that the program did not have any procedures for addressing suicidal behavior even though it had marketed itself as being able to provide appropriate therapy to its students. Moreover, one of the program owners, whom the father considered the head therapist, did not have any formal training to provide therapy. The family also learned that the owner and another counselor had visited their son’s campsite, as previously scheduled, the day he died. During this visit, field staff told them about the self-inflicted injury and statements the victim had made the night before. According to the father, the owner then advised field staff that the victim was being manipulative in an attempt to be sent home, and that the staff should ignore him to discourage further manipulative behavior. The owners and the program were indicted by a grand jury on criminal charges of child neglect resulting in death.
According to the transcript, the judge who was assigned to the case pushed the parties not to choose a bench trial to avoid a lengthy and complicated trial. The program owner pleaded no contest to the charge of child neglect resulting in death with a fine of $5,000 in exchange for dismissal of charges. The state conducted an investigation into the circumstances and initially planned to close the program. However, the program owners negotiated an agreement with the state not to shut down the program in exchange for a change of ownership and management. According to the victim’s father, the family of the victim subsequently filed a civil suit and a settlement was reached for $1.2 million, which included the owners admitting and accepting personal responsibility for the suicide. This program remains open and operating. Within the last 18 months, a group of investors purchased the program and are planning to open and operate other programs around the country, according to the program administrators with whom we spoke. As part of our work we also learned that the program has a U.S. Forest Service permit; however, because it has not filed all required usage reports or paid required permit fees in almost 8 years, it is in violation of the terms of the permit. We estimate that the program owes the U.S. Forest Service tens of thousands of dollars, although we could not calculate the actual debt. The victim was a 14-year-old male. According to police documents, the victim’s mother enrolled him in a military-style Arizona boot camp in 2001 to address behavioral problems. The mother told us that she “thought it would be a good idea.” In addition, she told us that her son suffered from some hearing loss, a learning disability, Attention Deficit Hyperactivity Disorder (ADHD), and depression. To address these issues her son was taking medication and attending therapy sessions.
According to the mother, her son’s therapist had recommended the program, which he described as a “tough love” program and “what needed.” The mother said she trusted the recommendation of her son’s therapist; in addition, she spoke with other parents who had children in the program, who also recommended the program to her. She initially enrolled her son in a daytime Saturday program in the spring of 2001 so he could continue attending regular school during the week. Because her son continued to have behavioral problems, she then enrolled him in the program’s 5-week summer camp, which she said cost between $4,600 and $5,700 (between $131 and $162 per day). Her understanding was that strenuous program activities took place in the evening and that during the day youth would be in the shade. Police documents indicate about 50 youth between the ages of 6 and 17 were enrolled in the summer program. According to police, youth were forced to wear black clothing and to sleep in sleeping bags placed on concrete pads that had been standing in direct sunlight during the day. Both black clothing and concrete absorb heat. Moreover, according to documents subsequently filed by the prosecutor, youth were fed an insufficient diet of a single apple for breakfast, a single carrot for lunch, and a bowl of beans for dinner. On the day the victim died, the temperature was approximately 113 degrees Fahrenheit, according to the investigating detective. His report stated that on that day, the program owner asked whether any youth wanted to leave the program; he then segregated those who wanted to leave the program, which included the victim, and forced them to sit in the midday sun for “several hours” while the other participants were allowed to sit in the shade. 
Witnesses said that while sitting in the sun, the victim began “eating dirt because he was hungry.” Witnesses also stated that the victim “had become delirious and dehydrated… saw water everywhere, and had to ‘chase the Indians.’” Later on, the victim appeared to have a convulsive seizure, but the camp staff present “felt he was faking,” according to the detective’s report. One staff member reported that the victim had a pulse rate of 180, more than double what is considered a reasonable resting heart rate for a teenager. The program owner then directed two staff and three youth enrolled in the program to take the victim to the owner’s room at a nearby motel to “cool him down and clean up.” They placed the victim in the flatbed of a staff member’s pickup truck and drove to the motel. Over the next several hours, the following series of events occurred. In the owner’s hotel room, the limp victim was stripped and placed into the shower with the water running. The investigating detective told us that the victim was left alone for 15 to 20 minutes for his “privacy.” During this time, one of the two staff members telephoned the program owner about the victim’s serious condition; the owner is said to have told the staff person that “everything will be okay.” However, when staff members returned to the bathroom they saw the victim facedown in the water. The victim had defecated and vomited on himself. After cleaning up the victim, a staff member removed him from the shower and placed him on the hotel room floor. Another staff member began pressing the victim’s stomach with his hands, at which point, according to the staff member’s personal account, mud began oozing out of the victim’s mouth. The staff member then used one of his feet to press even harder on the victim’s stomach, which resulted in the victim vomiting even more mud and a rock about the size of a quarter.
At this point, a staff member again called the owner to say the boy was not responding; the owner instructed them to take the victim back to the camp. They placed the victim in the flatbed of the pickup truck for the drive back. Staff placed the victim on his sleeping bag upon returning to camp. He was reportedly breathing at this time, but then stopped breathing and was again put in the back of the pickup truck to take him for help. However, one staff member expressed his concern that the boy would die unless they called 9-1-1 immediately. The county sheriff’s office reported receiving a telephone call at approximately 9:43 p.m. that evening saying a camp participant “had been eating dirt all day, had refused water, and was now in an unconscious state and not breathing.” This is the first recorded instance in which the program owner or staff sought medical attention for the victim. Instructions on how to perform CPR were given and emergency help was dispatched. The victim was pronounced dead after being airlifted to a local medical center. The medical examiner who conducted the autopsy expressed concern that the victim had not been adequately hydrated and had not received enough food while at the camp. His preliminary ruling was that the cause of death was one “of near drowning brought on by dehydration.” After a criminal investigation was conducted, the court ultimately concluded that there was “clear and convincing evidence” that program staff were not trained to handle medical emergencies related to dehydration and lack of nutrition. The founder (and chief executive officer) of the program was convicted in 2005 of felony reckless manslaughter and felony aggravated assault and sentenced to 6-year and 5-year terms, respectively. He was also ordered to pay over $7,000 in restitution to the family. In addition, program staff were convicted of various charges, including trespassing, child abuse, and negligent homicide, but were put on probation.
According to the detective, no staff member at the camp was trained to administer medication or basic medical treatment, including first aid. The mother filed a civil suit that was settled for an undisclosed amount of money. The program closed in 2001. The victim was a 16-year-old female. Because of defiant, violent behavior, her parents enrolled her in a Utah wilderness and boarding school program in 2001, which was a state-licensed program for youth 13 to 18 years old. The 5-month program cost around $29,000 (or about $193 per day) and operated on both private and federal land. The parents also hired a transport service at a cost of over $3,000 to take their daughter to the program. We found that the director and another executive of this wilderness program had both worked at the same program discussed in our second case, and that the executive owned the program discussed in our first case. According to program documents and the statements of staff members, a group hiking in this program would normally require three staff—one in front leading the hike, one in the middle of the group, and one at the end of the group. However, this standard structure had been relaxed on the day the victim fell. It was Christmas Day, and only one staff member accompanied four youth. While hiking in a steep and dangerous area that staff had not previously scouted out, the victim ran ahead of the group with two others, slipped on a steep rock face, and fell more than 50 feet into a crevasse, according to statements of the other two youth—one of whom ran back to inform the program staff of the accident. The staff radioed the base camp to report the accident, then called 9-1-1. One of the staff members at the accident scene was an emergency medical technician (EMT) and administered first aid. However, in violation of the program licensing agreement, the first aid kit they were required to have with them had been left at the base camp. An ambulance arrived about 1 hour after the victim fell.
First responders decided to have the victim airlifted to a medical center, but the helicopter did not arrive until about 1-1/2 hours after they made the decision to call for an airlift. According to the coroner’s report, the victim died about 3 weeks later in a hospital without ever regaining consciousness. She had suffered massive head trauma, a broken arm, broken teeth, and a collapsed lung. As a result of the death, the state planned to revoke the program’s outdoor youth program license based on multiple violations. In addition to an inappropriate staff-to-child ratio (four youth for one staff member, rather than three to one), failure to prescreen the hiking area, and hiking without a first aid kit, the state identified the following additional license violations: Program management did not have an emergency or accident plan in place. Two of the four staff members who escorted the nine youth in the wilderness had little experience—one had 1 month of program experience and the other had 9 days. Neither of them had completed the required staff training. The two most senior staff members on the trip had less than 6 months of wilderness experience—but they remained at the camp while the other two inexperienced staff members led the hike. A lawsuit filed by the family in November 2002 claims that the program did not take reasonable measures to keep the youth in the program safe, especially given the “hiking inexperience” of the youth and the “insufficient number of staff.” Specifically, the suit claims that the program’s executive director waited for an hour after the victim fell before calling for assistance. Additionally, the suit claims that staff only had one radio and no medical equipment or emergency plan. The parents filed an initial lawsuit for $6 million but eventually settled in 2003 for $200,000 before attorneys’ fees and health insurance reimbursement were taken out. The program closed in May 2002 due to fiscal insolvency.
However, its parent program—a boarding school licensed by the state—is still in operation. We have not been able to determine whether the wilderness director at the time of the victim’s death is still in the industry. However, the other program executive remains in the industry, working as a referral agent for parents seeking assistance in identifying programs for troubled youth. The victim, who died in 2002, was a 15-year-old female. The parents of the victim told us that she suffered from depression, suicidal thoughts, and bipolar disorder. She also reportedly had a history of drug use, including methamphetamines, marijuana, and cocaine. Her parents explained that they selected a program after researching several programs and consulting with an educational advisor. Although the program was based in Oregon, it operated a 3-week wilderness program in Nevada, which was closer to the family home. The total cost of the program was over $9,200 (or about $438 per day), which included a nonrefundable deposit and over $300 for equipment. The parents of the victim drove their daughter several hundred miles to enroll her in the program. Because of the distance involved, they stayed overnight in a motel nearby. The next day, when the parents arrived home, they found a phone message waiting for them—it was from the program, saying that their daughter had been in an accident and that she was receiving CPR. According to documents we reviewed, three staff members led seven students on a hike on the first day of the program. The victim fell several times while hiking. The last time she fell, she lost muscle control and had difficulty breathing. The EMT on the expedition had recently completed classroom certification and had no practical field experience. While the staff called for help, the EMT and other staff began CPR and administered epinephrine doses to keep her heart beating during the 3 hours it took a rescue helicopter to arrive. 
The victim was airlifted to a nearby hospital where she was pronounced dead. The victim’s death was ruled an accident by the coroner—heat stroke complicated by drug-induced dehydration. According to other youth on the hike, they were aware the victim had taken methamphetamines prior to the hike. The victim had had a drug screening done 1 week before entering the program; she tested positive for methamphetamine, which the program director knew but the staff did not. However, the program did not make a determination whether detoxification was necessary, which was required by the state where the program was operating (Nevada), according to a court document. The victim was also taking prescribed psychotropic medications, which affected her body’s ability to regulate heat and remain hydrated. At the time the victim died, this private wilderness treatment program had been in operation for about 15 years in Oregon. Although it claimed to be accredited by the Joint Commission on Health Care Organizations, this accreditation covered only the base program—not the wilderness program or its drug and alcohol component in which the victim participated. Moreover, even though the wilderness program attended by the victim had been running for 2 years, it was not licensed to operate in Nevada. The district attorney’s office declined to file criminal child abuse and neglect charges against two program counselors, although those charges had been recommended by investigating officers. The parents of the victim were never told why criminal charges were never filed. They subsequently filed a civil lawsuit against the program and settled for an undisclosed sum. Two other deaths occurred in this program shortly after the first—one resulted from a previously unknown heart defect and the other from a fallen tree. Although the wilderness program had a federal permit to operate in Nevada, it was not licensed by that state. After the death, that state investigated and ordered the program closed.
The parent company had (and continues to maintain) state licenses in Oregon to operate as a drug and alcohol youth treatment center, an outpatient mental health facility, and an outdoor youth facility, as well as federal land permits from BLM and the U.S. Forest Service. According to program officials, the program has modified its procedures and policies—it no longer enrolls youth taking the medication that affected the victim’s ability to regulate her body temperature. The victim was a 14-year-old male who died in July 2002. According to documents we reviewed, the mother of the victim placed her son in this Utah wilderness program to correct behavioral problems. The victim kept a journal with him during his stay at the program. It stated that he had ADHD and bipolar disorder. His enrollment form indicates that he also had impulse control disorder and that he was taking three prescription medications. His physical examination, performed about 1 month before he entered the program, confirms that he was taking these medications. We could not determine how much the program cost at the time. According to documents we reviewed, the victim had been in the program for about 8 days when, on a morning hike on BLM land, he began to show signs of hyperthermia (excessively high body temperature). He sat down, breathing heavily and moaning. Two staff members, including one who was an EMT, initially attended to him, but they could not determine if he was truly ill or simply “faking” a problem to get out of hiking. When the victim became unresponsive and appeared to be unconscious, the staff radioed the program director to consult with him. The director advised the staff to move the victim into the shade. The director also suggested checking to see whether the victim was feigning unconsciousness by raising his hand and letting go to see whether it dropped onto his face. They followed the director’s instructions. 
Apparently, because the victim’s hand fell to his side rather than his face, the staff member who was an EMT concluded that the victim was only pretending to be ill. While the EMT left to check on other youth in the program, a staff member reportedly hid behind a tree to see whether the victim would get up— reasoning that if the victim were faking sickness, he would get up if he thought nobody was watching. As the victim lay dying, the staff member hid behind the tree for 10 minutes. He failed to see the victim move after this amount of time, so he returned to where the victim lay. He could not find a pulse on the victim. Finally realizing that he was dealing with a medical emergency, the staff member summoned the EMT and they began CPR. The program manager was contacted, and he called for emergency help. Due to difficult terrain and confusion about the exact location of the victim, it took over an hour for the first response team to reach the victim. An attempt to airlift the victim was canceled because a rescue team determined that the victim was already dead. According to the coroner’s report, the victim died of hyperthermia. State Department of Human Services officials initially found no indication that the program had violated its licensing requirements, and the medical examiner could not find any signs of abuse. Subsequently, the Department of Human Services ruled that there were, in fact, licensing violations, and the state charged the program manager and the program owner with child abuse homicide (a second degree felony charge). The program manager was found not guilty of the charges; additionally, it was found that he did not violate the program’s license regarding water, nutrition, health care, and other state licensing requirements. Moreover, the court concluded that the State did not prove that the program owner engaged in reckless behavior. 
Later that year, however, an administrative law judge affirmed the Department of Human Services’ decision to revoke the program’s license after the judge found that there was evidence of violations. The owner complied with the judge and closed the program in late 2003. About 16 months later, the owner applied for and received a new license to start a new program. According to the Utah director of licensing, as of September 2007, there have been “no problems” with the new program. We could not find conclusive information as to whether the parents of the victim filed a civil case and, if so, what the outcome was. The victim was a 15-year-old male. According to investigative reports compiled after his death, the victim’s grades dropped during the 2003–2004 school year and he was withdrawing from his parents. His parents threatened to send him to a boarding or juvenile detention facility if he did not improve during summer school in 2004. The victim ran away from home several times that summer, leading his frustrated parents to enroll him in a boot camp program. When they told him about the enrollment, he ran away again—the day before he was taken to the program in a remote area of Missouri. The 5-month program describes itself as a boot camp and boarding school. Because it is a private facility, the state in which it is located does not require a license. According to Internet documents, the program costs almost $23,000 (or about $164 per day). Investigative documents we reviewed indicate that at the time the parents enrolled the teenager, he did not have any issues in his medical history. Staff logs indicate that the victim was considered to be a continuous problem from the time he entered the program—he did not adhere to program rules and was otherwise noncompliant. By the second day of the boot camp phase of the program, staff noticed that the victim exhibited an oozing bump on his arm. 
School records and state investigation reports showed that the victim subsequently began to complain of muscle soreness, stumbled frequently, and vomited. As days passed, students noticed the victim was not acting normally, and reported that he defecated involuntarily on more than one occasion, including in the shower. Staff notes confirmed that the victim defecated and urinated on himself numerous times. Although he was reported to have fallen frequently and told staff he was feeling weak or ill, the staff interpreted this as being rebellious. The victim was “taken down”—forced to the floor and held there—on more than one occasion for misbehaving, according to documents we reviewed. Staff also tied a 20-pound sandbag around the victim’s neck when he was too sick to exercise, forcing him to carry it around with him and not permitting him to sit down. Staff finally placed him in the “sick bay” in the morning on the day that he died. By midafternoon of that day, a staff member checking on him intermittently found the victim without a pulse. He yelled for assistance from other staff members, calling the school medical officer and the program owners. A responding staff member began CPR. The program medical officer called 9-1-1 after she arrived in the sick bay. An ambulance arrived about 30 minutes after the 9-1-1 call and transported the victim to a nearby hospital, where he was pronounced dead. The victim died from complications of rhabdomyolysis due to a probable spider bite, according to the medical examiner’s report. A multiagency investigation was launched by state and local parties in the aftermath of the death. The state social services’ abuse investigation determined that staff did not recognize the victim’s medical distress or provide adequate treatment for the victim’s bite. 
Although the investigation found evidence of staff neglect and concluded that earlier medical treatment may have prevented the death of the victim, no criminal charges were filed against the program, its owners, or any staff. The state also found indications that documents submitted by the program during the investigation may have been altered. The family of the victim filed a civil suit against the program and several of its staff in 2005 and settled out of court for $1 million, according to the judge. This program is open and operating. The tuition is currently $4,500 per month plus a $2,500 “start-up fee.” The program owner claims to have 25 years of experience working with children and teenagers. Members of her family also operate a referral program and a transport service out of program offices located separately from the actual program facility. During the course of our review, we found that current and former employees with this program filed abuse complaints with the local law enforcement agency but that no criminal investigation has been undertaken. Mr. Chairman and Members of the Committee, this concludes my statement. We would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
Residential treatment programs provide a range of services, including drug and alcohol treatment, confidence building, military-style discipline, and psychological counseling for troubled boys and girls with a variety of addiction, behavioral, and emotional problems. This testimony concerns programs across the country referring to themselves as wilderness therapy programs, boot camps, and academies, among other names. Many cite positive outcomes associated with specific types of residential treatment. There are also allegations regarding the abuse and death of youth enrolled in residential treatment programs. Given concerns about these allegations, particularly in reference to private programs, the Committee asked the Government Accountability Office (GAO) to (1) verify whether allegations of abuse and death at residential treatment programs are widespread and (2) examine the facts and circumstances surrounding selected closed cases where a teenager died while enrolled in a private program. To achieve these objectives, GAO conducted numerous interviews and examined documents from closed cases dating as far back as 1990, including police reports, autopsy reports, and state agency oversight reviews and investigations. GAO did not attempt to evaluate the benefits of residential treatment programs or verify the facts regarding the thousands of allegations it reviewed. GAO found thousands of allegations of abuse, some of which involved death, at residential treatment programs across the country and in American-owned and American-operated facilities abroad between the years 1990 and 2007. Allegations included reports of abuse and death recorded by state agencies and the Department of Health and Human Services, allegations detailed in pending civil and criminal trials with hundreds of plaintiffs, and claims of abuse and death that were posted on the Internet.
For example, during 2005 alone, 33 states reported 1,619 staff members involved in incidents of abuse in residential programs. GAO could not identify a more concrete number of allegations because it could not locate a single Web site, federal agency, or other entity that collects comprehensive nationwide data. GAO also examined, in greater detail, 10 closed civil or criminal cases from 1990 through 2004 where a teenager died while enrolled in a private program. GAO found significant evidence of ineffective management in most of the 10 cases, with program leaders neglecting the needs of program participants and staff. This ineffective management compounded the negative consequences of (and sometimes directly resulted in) the hiring of untrained staff; a lack of adequate nourishment; and reckless or negligent operating practices, including a lack of adequate equipment. These factors played a significant role in the deaths GAO examined.
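The per-day figures quoted throughout the case studies above follow from simple total-cost-over-duration arithmetic. A minimal sketch reproducing three of them, assuming the approximate durations stated in the text (5 weeks = 35 days, 5 months ≈ 150 days, 3 weeks = 21 days); durations are rough conversions for illustration, not exact billing periods:

```python
# Reproduce the approximate per-day costs quoted in the case studies.
# (total cost in dollars, approximate duration in days)
programs = {
    "5-week summer camp (low end)": (4_600, 35),        # text quotes ~$131/day
    "5-month wilderness/boarding school": (29_000, 150),  # text quotes ~$193/day
    "3-week wilderness program": (9_200, 21),           # text quotes ~$438/day
}

for name, (total_cost, days) in programs.items():
    per_day = total_cost / days
    print(f"{name}: ${per_day:,.0f} per day")
```

The computed values round to the figures given in the testimony, confirming that the quoted per-day rates are derived directly from total tuition and program length.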
State and local governments generally have the principal responsibility for meeting mass care and other needs in responding to a disaster; however, governments largely carry out this responsibility by relying on the services provided by voluntary organizations. Voluntary organizations provide sheltering, feeding, and other services, such as case management, to disaster victims and have long supported local, state, and federal government responses to disasters. Voluntary organizations have historically played a critical role in providing services to disaster victims, both on a routine basis—in response to house fires and local flooding, for example—and in response to far rarer disasters such as devastating hurricanes or earthquakes. Their assistance can vary from providing immediate services to being involved in long-term recovery efforts, including fund-raising. Some are equipped to arrive at a disaster scene and provide immediate mass care, such as food, shelter, and clothing. Other charities address short-term needs, such as providing case management services to help disaster victims obtain unemployment or medical benefits. Other voluntary organizations provide long-term disaster assistance such as job training or temporary housing assistance for low-income families. In addition, local organizations that do not typically provide disaster services may step in to address specific needs, as occurred when churches and other community organizations began providing sheltering after the Gulf Coast hurricanes. The American Red Cross, a nongovernmental organization founded in 1881, is the largest of the nation’s mass care service providers.
Operating under a congressional charter since 1900, the Red Cross provides volunteer humanitarian assistance to the armed forces, serves as a medium of communication between the people of the United States and the armed forces, and provides direct services to disaster victims, including feeding, sheltering, financial assistance, and emergency first aid. An additional key player in the voluntary sector is NVOAD, an umbrella organization of nonprofits that are considered national in their scope. Established in 1970, NVOAD is not itself a service delivery organization but rather coordinates planning efforts by many voluntary organizations responding to disaster, including the five organizations in this review. In addition to its 49 member organizations, NVOAD also coordinates with chartered state Voluntary Organizations Active in Disaster (VOAD) and their local affiliates. (See app. II for NVOAD members.) In 2005, Hurricanes Katrina and Rita revealed many weaknesses in the federal disaster response that were subsequently enumerated by numerous public and private agencies—including GAO, the White House, and the American Red Cross. These weaknesses included a lack of clarity in roles and responsibilities among and between voluntary organizations and FEMA and a need for the government to include voluntary organizations in national and local disaster planning. According to several post-Katrina reports, the contributions of voluntary organizations, especially faith-based groups, had not been effectively integrated into the earlier federal plan for disaster response—the 2004 National Response Plan. These reports called for better coordination among government agencies and voluntary organizations through cooperative relationships and joint planning and exercises. (See bibliography.)
Under the Homeland Security Act, which President Bush signed in 2002, as amended by the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act), FEMA has been charged with responsibility for leading and supporting a national, risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. In support of this mission, FEMA is required to partner with the private sector and nongovernmental organizations, as well as state, local, tribal governments, emergency responders, and other federal agencies. Under the act, FEMA is specifically directed, among other things, to build a comprehensive national incident management system; consolidate existing federal government emergency response plans into a single, coordinated national response plan; administer and ensure the implementation of that plan, including coordinating and ensuring the readiness of each emergency support function under the plan; and update a national preparedness goal and develop a national preparedness system to enable the nation to meet that goal. As part of its preparedness responsibilities, FEMA is required to develop guidelines to define risk-based target capabilities for federal, state, local, and tribal preparedness and establish a comprehensive assessment system to assess, on an ongoing basis, the nation’s prevention capabilities and overall preparedness. FEMA is also required to submit annual reports which describe, among other things, the results of the comprehensive assessment and state and local catastrophic incident preparedness. FEMA may also use planning scenarios to reflect the relative risk requirements presented by all kinds of hazards. As we noted in previous reports and testimony, the preparation for a large-scale disaster requires an overall national preparedness effort designed to integrate what needs to be done (roles and responsibilities), how it should be done, and how well it should be done. 
The principal national documents designed to address each of these questions are the National Response Framework, the National Incident Management System, and the National Preparedness Guidelines. A core tenet of these documents is that governments at all levels, the private sector, and nongovernmental organizations, such as the Red Cross and other voluntary organizations, coordinate during disasters that require federal intervention. (See fig. 1.) DHS’s National Response Framework, which became effective in March 2008, delineates roles for federal, state, local, and tribal governments; the private sector; and voluntary organizations in responding to disasters. The new framework revises the National Response Plan, which was originally signed by major federal government agencies, the Red Cross, and NVOAD in 2004. Under the National Response Framework, voluntary organizations are expected to contribute to these response efforts through partnerships at each level of government. In addition, FEMA, in conjunction with its voluntary agency liaisons, acts as the interface between these organizations and the federal government. (See fig. 2.) The Framework also creates a flexible and scalable coordinating structure for mobilizing national resources in a large-scale disaster. Under the Framework, local jurisdictions and states have lead responsibility for responding to a disaster and can request additional support from the federal government as needed. In addition, for catastrophic incidents that almost immediately overwhelm local and state resources and result in extraordinary levels of mass casualties or damage, the Framework—through its Catastrophic Incident Supplement—specifies the conditions under which the federal government can proactively accelerate the national response to such disasters without waiting for formal requests from state governments. The Supplement was published in 2006 after Hurricane Katrina. 
The National Framework organizes the specific needs that arise in disaster response into 15 emergency support functions, or ESFs. Each ESF comprises a coordinator, a primary agency, and support agencies—usually governmental agencies—that plan and support response activities. Typically, support agencies have expertise in the respective function, such as in mass care, transportation, communication, or firefighting. In a disaster, FEMA is responsible for activating the ESF working groups of key federal agencies and other designated organizations that are needed. For the voluntary organizations in our review, Emergency Support Function 6 (ESF-6) is important because it outlines the organizational structure used to provide mass care and related services in a disaster. These services are mass care (e.g., sheltering, feeding, and bulk distribution of emergency supplies), emergency assistance (e.g., evacuation, safety and well-being of pets), disaster housing (e.g., roof repair, rental assistance), and human services (e.g., crisis counseling, individual case management). Under ESF-6, FEMA is designated as the primary federal agency responsible for coordinating and leading the federal response for mass care and related human services, in close coordination with states and others such as voluntary organizations—a role change made in 2008 in response to issues that arose during Katrina. FEMA carries out this responsibility by convening federal ESF-6 support agencies during disasters and coordinating with states to augment their mass care capabilities as needed. Under ESF-6, the Red Cross and NVOAD are each named as support agencies to FEMA, along with numerous federal departments, such as the Department of Health and Human Services. FEMA’s voluntary agency liaisons, located in FEMA regions, are largely responsible for carrying out these coordinating duties with voluntary organizations.
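The ESF structure described above—a primary agency plus named support agencies for each function—can be sketched as a simple data model. This is an illustrative sketch only, not an official schema; the field names are assumptions, and the agency lists reflect only what the text states about ESF-6:

```python
from dataclasses import dataclass, field

# Illustrative model of an Emergency Support Function (ESF) as described in
# the National Response Framework: each ESF has a primary agency and a set
# of support agencies. Field names are assumptions for illustration.
@dataclass
class EmergencySupportFunction:
    number: int
    name: str
    primary_agency: str
    support_agencies: list = field(default_factory=list)

# ESF-6 as described in the text: FEMA leads mass care and related services,
# with the Red Cross, NVOAD, and federal departments as support agencies.
esf6 = EmergencySupportFunction(
    number=6,
    name="Mass Care, Emergency Assistance, Disaster Housing, Human Services",
    primary_agency="FEMA",
    support_agencies=[
        "American Red Cross",
        "NVOAD",
        "Department of Health and Human Services",
    ],
)

print(f"ESF-{esf6.number} primary agency: {esf6.primary_agency}")
print("Support agencies:", ", ".join(esf6.support_agencies))
```

The model captures the key design point the text makes: the Red Cross was moved from primary to support agency in 2008 because, unlike FEMA, it cannot direct federal resources.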
As private service providers fulfilling their humanitarian missions, the voluntary organizations in our review have historically served as significant sources of mass care and other services in large-scale disasters and play key roles in national response—in coordination with local, state, and federal governments—under the National Response Framework. While their response structures differ in key ways—with some having more centralized operations than others, for example—these voluntary organizations coordinate their services through formal written agreements and through informal working relationships with other organizations. In recognition of their long-standing leadership in providing services to disaster victims, these organizations, especially the American Red Cross and NVOAD, have considerable roles in supporting FEMA under the nation’s National Response Framework. While this new Framework shifted the Red Cross from a primary agency for mass care to a support agency, largely because the Red Cross cannot direct federal resources, the 2006 Catastrophic Incident Supplement has not been updated to reflect this change. FEMA does not currently have a timetable for revising the Supplement, as required under the Post-Katrina Act, and while FEMA and Red Cross officials told us that they have a mutual understanding of the Red Cross’s role as a support agency in a catastrophic disaster, this understanding is not currently documented. While the major national voluntary organizations in our review differ in their types of services and response structures, they have all played important roles in providing mass care and other services, some for over a century. According to government officials and reports on the response to Katrina, the Red Cross and the other voluntary organizations we reviewed are a major source of mass care and other disaster services, as was evident in the response to Hurricane Katrina. 
The five voluntary organizations we reviewed differ in the extent to which they focus on providing disaster services and in the types of services they provide. Four of the five organizations directly provide a variety of mass care and other services, such as feeding and case management, while the fifth—the United Way—focuses on fund-raising for other organizations. As the nation’s largest disaster response organization, the Red Cross is the only one of the five in our review whose core mission is to provide disaster response services. In providing its services, the Red Cross typically coordinates with state and local governments to support their response and has formal agreements with state or local emergency management agencies to provide mass care and other disaster services. For example, the Red Cross serves as a support agency in the Washington, D.C., disaster response plan for mass care, feeding, and donations and volunteer management. In contrast to the Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities are faith-based organizations that provide varying types and degrees of disaster services—some for decades—as an extension of their social and community service missions. The United Way raises funds for other charities and provides resources to local United Way operations, but does not directly provide services to survivors in response to disasters. (See table 1.) While voluntary organizations have traditionally played an important role in large-scale disasters, their role in response to Hurricane Katrina, the largest natural disaster in U.S. history, was even more significant, especially for the three mass care service providers in our study—the Red Cross, The Salvation Army, and the Southern Baptist Convention. 
For example, after Katrina, the Red Cross provided more than 52.6 million meals and snacks and opened more than 1,300 shelters across 27 states, while the Southern Baptist Convention provided more than 14.6 million meals and The Salvation Army provided 3.8 million articles of clothing. While Catholic Charities USA and its affiliates do not generally provide mass care services, during Katrina they assisted with feeding by donating food. (See table 2.) The four direct service providers in our study—the Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities—each have distinct disaster response structures, with their national offices having different levels of authority over the organization’s affiliates and resources, reflecting a continuum from more centralized operations, such as the Red Cross, to more decentralized operations, such as Catholic Charities USA. For example, in a large-scale disaster, the national office of the Red Cross directly sends headquarters-based trained staff, volunteers, and equipment to the affected disaster site, while Catholic Charities USA’s disaster response office provides technical assistance to the affected member dioceses but does not direct resources. (See table 3.) Similarly, to facilitate its ability to direct a nationwide response from headquarters, the Red Cross has a national headquarters and service area staff of about 1,600 as of May 2008, maintains a 24/7 disaster operations center at its headquarters, and has a specially trained cadre of over 71,000 volunteers who are nationally deployable, according to the Red Cross. In contrast, the Southern Baptist Convention and Catholic Charities each have 1 or 2 staff at their national offices who are responsible for disaster response coordination for their organizations. 
These differences in the national offices’ roles within the voluntary organizations mean that when voluntary organizations respond to disasters of increasing magnitude by “ramping up”—a process similar to the scalable response described in the National Response Framework— they do so in different ways and to different extents. While the voluntary organizations in our review coordinate with one another and with the government, their disaster response structures are not necessarily congruent with the response structures of other voluntary organizations or aligned geographically or jurisdictionally with those of government. In essence, the voluntary organizations’ response structures do not necessarily correspond to the local, state, and federal structures of response—as described in the National Framework. For example, The Salvation Army and Catholic Charities are not aligned geographically with states, while the Southern Baptist Convention is organized roughly along state lines into units called state conventions, and the Red Cross’s organizational structure supports regional chapter groupings, which are also aligned generally by state. Furthermore, while the Red Cross and The Salvation Army have regional or larger territorial units, these are not necessarily congruent with FEMA’s 10 regions. (See table 4.) In a similar vein, these service providers do not necessarily follow the command and control structure typical of the federal incident command system set forth in the National Incident Management System (NIMS) for unifying disaster response. These organizations vary in the extent to which they have adopted this command system, according to officials we spoke with. For example, organization officials told us that the Red Cross, The Salvation Army, and the Southern Baptist Convention use this command system, while Catholic Charities does not. 
The voluntary organizations in our review coordinate and enhance their service delivery through formal written agreements at the national level. While not all of the voluntary organizations have such agreements with each other, the Red Cross maintains mutual aid agreements with the national offices of The Salvation Army, the Southern Baptist Convention, and Catholic Charities USA, as well as 39 other organizations with responsibilities under ESF-6. For example, under a 2000 agreement between the Red Cross and the Southern Baptist Convention, a feeding unit addendum describes operations and financial responsibilities when the two organizations provide mass feeding services cooperatively. According to Southern Baptist Convention officials, the general premise of this agreement is that the Convention will prepare meals in its mobile feeding units, while the Red Cross will distribute these meals using its emergency response vehicles. According to many of the voluntary organization officials we interviewed, another essential ingredient for response is to have active, informal working relationships with leaders of other organizations that are well established before disasters strike. These relationships are especially important when organizations do not have formal written agreements or when the agreements do not necessarily represent the current relationship between two organizations. Regular local VOAD meetings and joint training exercises with local and state governments facilitate these working relationships by providing an opportunity for relationship building and informal communication. For example, a Florida catastrophic planning exercise in 2006-2007 brought together 300 emergency management professionals and members of the Florida VOAD to develop plans for two types of catastrophic scenarios. According to disaster officials, relationships built through this type of interaction allow participants to establish connections that can be drawn upon during a disaster. 
The National Response Plan that was instituted after September 11, and the 2008 National Response Framework, which superseded it, both recognized the key role of the Red Cross and NVOAD member organizations in providing mass care and other services by giving the Red Cross and NVOAD responsibilities under the ESF-6 section of the Framework. The 2008 National Response Framework, which revised the National Response Plan, clarified some aspects of the Red Cross’s role that had been problematic during the Katrina response. Under the 2008 ESF-6 section of the Framework, the Red Cross has a unique federally designated role as a support agency to FEMA for mass care. As noted in our recent report, the Red Cross was previously designated as the primary agency for mass care under ESF-6 in the 2004 National Response Plan, but the Red Cross’s role was changed under the 2008 Framework to that of a support agency. This role change was made in large part because FEMA and the Red Cross agreed—in response to issues that arose during Katrina—that the primary agency responsible for coordinating mass care nationwide needs to be able to direct federal resources. As a support agency under ESF-6, the Red Cross helps FEMA and the states coordinate mass care activities in disasters. In particular, the Red Cross is charged with providing staff and specially trained liaisons to work at FEMA’s regional offices and other locations, and providing subject matter expertise on mass care planning, preparedness, and response. In addition, the Red Cross is expected to take the lead in promoting cooperation and coordination among government and national voluntary organizations that provide mass care during a disaster, although it does not direct other voluntary organizations in this role. (See fig. 3.) ESF-6 also acknowledges the Red Cross’s separate role as the nation’s largest mass care service provider, which is distinct from its role under the Framework. 
When providing mass care services, the Red Cross acts on its own behalf and not on behalf of the federal government, according to the ESF-6. In recent months, the Red Cross has reported a significant budget deficit that has led it to substantially reduce its staff, including those assigned to FEMA and its regional offices, and to seek federal funding for its ESF-6 responsibilities—a major policy shift for the organization. According to Red Cross officials, the Red Cross has experienced major declines in revenues in recent years, and the organization reported a projected operating budget deficit of about $150 million for fiscal year 2008. To address this shortfall, in early 2008 the Red Cross reduced the number of its staff by about 1,000, with most of these staffing cuts made at its national headquarters and in service areas, in departments that support all Red Cross functions, such as information technology, human resources, and communications. These cuts included eliminating its full-time staff at FEMA’s 10 regional offices and reducing staff that supported state emergency management agencies from 14 to 5. While it is too soon to tell the impact of these changes, Red Cross officials we spoke with told us these staffing cutbacks will not affect the organization’s ability to provide mass care services. For example, several positions were also added to its Disaster Services unit to support local chapters’ service delivery, according to Red Cross data, including area directors and state disaster officers—a new position at the Red Cross. However, with regard to its ESF-6 responsibilities, Red Cross officials also said that while the organization will continue to fulfill its ESF-6 responsibilities, it is changing the way it staffs FEMA’s regional offices during disasters by assigning these responsibilities, among others, to state disaster officers and using trained volunteers to assist in this role. 
According to the Red Cross, its cost for employing a full-time staff person in each FEMA regional office and for staffing its headquarters to support federal agencies during disasters is $7 million annually, for an operation that the Red Cross says is no longer sustainable. Consequently, in May 2008 testimony before the Senate Committee on Homeland Security and Governmental Affairs, the Red Cross requested that Congress authorize and appropriate funding to cover these positions and responsibilities under the ESF-6. In addition, the Red Cross requested $3 million to assist it in funding its role of integrating the mass care services provided by the nongovernmental sector, for a total of $10 million requested. In addition to the Red Cross, NVOAD is also designated as a support agency under the 2008 ESF-6 section of the Framework, as it was in the previous national plan. In its role as a support agency for mass care, NVOAD is expected to serve as a forum enabling its member organizations to share information, knowledge, and resources throughout a disaster; it is also expected to send representatives to FEMA’s national response center to represent the voluntary organizations and assist in disaster coordination. A new element in the 2008 ESF-6 is that voluntary organizations that are members of NVOAD are also specifically cited in ESF-6 under NVOAD, along with descriptions of their services or functions in disaster response. According to NVOAD and FEMA officials, listing the individual NVOAD members and their services in the ESF-6 does not change organizations’ expected roles or create any governmental obligations for these organizations to respond in disasters, but rather recognizes that NVOAD represents significant resources available through the membership of the voluntary organizations. 
While the Red Cross’s role for ESF-6 has been changed from that of a primary agency under the National Response Plan to that of a support agency under the new Framework, the Catastrophic Incident Supplement still reflects its earlier role, requiring the Red Cross to direct federal mass care resources. The Supplement provides the specific operational framework for responding to a catastrophic incident, in accordance with federal strategy. When the Supplement was issued, in 2006, the Red Cross was the primary agency for coordinating federal mass care assistance and support for the mass care section of ESF-6 under the National Response Plan. As previously mentioned, in January 2008 the Red Cross’s role under ESF-6 changed from that of a primary agency to that of a support agency, partly because the Red Cross lacks the authority to direct federal resources. The Supplement has not yet been updated to reflect this recent change in the Red Cross’s role. However, FEMA and Red Cross officials agreed that in a catastrophic incident, the Red Cross would serve as a support agency for mass care—not as the lead agency—and therefore would not be responsible for directing federal resources. According to FEMA, in a catastrophic incident, the management, control, dispensation, and coordination of federal resources will change, shifting this responsibility from the Red Cross to FEMA, so as to be consistent with the National Response Framework and the ESF-6. In addition to describing its ESF-6 support agency responsibilities in a catastrophic disaster, the Supplement lays out the mass care services the Red Cross would provide in a catastrophic disaster—acting as a private organization—and FEMA and Red Cross officials agreed that the Red Cross would continue to provide these services as part of its private mission, regardless of the change to its role in the ESF-6 or any future revisions to the Supplement. 
The Red Cross’s services and actions as a private service provider are integrated into the Supplement for responding to catastrophic disasters. In an event of catastrophic magnitude, the Red Cross is expected to directly provide mass care services to disaster victims, such as meals and immediate sheltering services to people who are denied access to their homes. The Supplement also includes the Red Cross in a schedule of actions that agencies are expected to automatically take in response to a no-notice disaster, such as a terrorist attack or devastating earthquake. For example, within 2 hours after the Supplement is implemented, the Red Cross is expected to inventory shelter space in a 250-mile radius of the disaster using the National Shelter System, dispatch specially trained staff to assess needs and initiate the Red Cross’s national response, coordinate with its national voluntary organization partners to provide personnel and equipment, and deploy Red Cross kitchens and other mobile feeding units. However, according to the ESF-6, in providing these mass care services, the Red Cross is acting on its own behalf and not on the behalf of the federal government or other governmental entity, and the Supplement similarly states that the Red Cross independently provides mass care services as part of its broad program of disaster relief. According to Red Cross officials, if the Supplement were implemented, the Red Cross would continue providing the same mass care services that it has always provided as a private organization. FEMA officials agreed that its expectations of the services the Red Cross would provide in a catastrophic event have not changed, and that its role as a service provider has not been affected by the changes to the ESF-6. 
According to FEMA, FEMA will augment the Red Cross’s resources in a catastrophic disaster, and the two organizations are working together to develop a memorandum of agreement to ensure that the Red Cross is provided with adequate federal support for logistics, human resources, and travel in a catastrophic event. Although FEMA is charged with revising the Supplement under the Post-Katrina Reform Act, agency officials told us that the agency does not currently have a time frame for updating the Supplement and does not have an interim agreement documenting FEMA’s and the Red Cross’s understanding of the Red Cross’s role as a support agency under the Supplement. FEMA officials told us that the agency was revising the 2004 Catastrophic Incident Annex—a brief document that establishes the overarching strategy for a national response to this type of incident—but that it does not yet have a time frame for updating the more detailed Supplement, which provides the framework for implementing this strategy, although the agency told us that it is in the process of establishing a review timeline. According to FEMA, future revisions to the Supplement will shift responsibility for directing federal mass care resources from the Red Cross to FEMA, in order to remain consistent with the National Response Framework and ESF-6. Furthermore, FEMA and the Red Cross told us that they have a mutual understanding of the Red Cross’s role as a support agency in a catastrophic disaster. However, this understanding is not currently documented. As the experience in responding to Hurricane Katrina demonstrated, it is important to have a clear agreement on roles and responsibilities. 
Crafting such agreements in writing ahead of time—before the need to respond to a catastrophic event—would help surface potential sources of misunderstanding and communicate this understanding not just to FEMA and the Red Cross, but also to FEMA’s many support agencies for ESF-6 and the Red Cross’s partner organizations in the voluntary sector. There is also precedent for having an interim agreement on changed roles: In 2007, while the National Response Plan was being revised, FEMA and the Red Cross developed an interim agreement on roles and responsibilities that set forth the Red Cross’s shift from primary to support agency. In response to weaknesses in service delivery that became evident during Hurricane Katrina, the American Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities have acted to expand their service coverage and strengthen key aspects of their structures. The Red Cross has reorganized its chapters and established new partnerships with local community and faith-based organizations, particularly in rural areas with hard-to-reach populations. While Red Cross officials did not expect these improvements to be undermined by the organization’s budget deficit, the effect of recent staff reductions at headquarters and elsewhere remains to be seen. Meanwhile, all four organizations, to varying degrees, have made changes to strengthen their ability to coordinate services by collaborating more on feeding and case management and improving their logistical and communications systems. In recognition of the fact that its service coverage had been inadequate during the 2005 Gulf Coast hurricanes, the Red Cross subsequently reorganized its service delivery structure and initiated or strengthened partnerships with local community organizations—a process that is still ongoing. 
During Katrina, when approximately 770,000 people were displaced, the Red Cross was widely viewed as not being prepared to meet the disaster’s unprecedented sheltering needs, in part because some areas—particularly rural areas—lacked local chapters or were not offering services; furthermore, the Red Cross had weak relationships with faith-based and other community groups that stepped in during this crisis to assist disaster victims. To address these problems, the Red Cross is implementing two main initiatives. First, to expand and strengthen its service delivery, including its capacity to respond to catastrophic disasters, the Red Cross is reorganizing its field structure in two ways. It is establishing a more flexible approach to service delivery to accommodate the varying needs of diverse communities within the same jurisdiction. According to the Red Cross, the jurisdiction of many chapters consisted of urban, suburban, and rural counties. Previously, chapter services were based on an urban model, but this one-size-fits-all approach, according to the Red Cross, did not well suit the needs and capacities of suburban and rural areas. The Red Cross now differentiates among three service levels, and each chapter can match service levels to the communities within its jurisdiction according to the community’s population density and vulnerability to disasters. As part of this differentiated approach, the chapters also use a mix of methods for providing services—from teams of disaster-trained volunteers to toll-free numbers and the Internet to formal partnerships—depending on the service level needed. The Red Cross is also realigning its regional chapter groupings—each consisting of three to eight local chapters—to cover larger geographic areas and additional populations and to better support local chapters. Regional chapters were established based on factors such as population density, total geographic area, and community economic indicators. 
According to the Red Cross, streamlining administrative back-office functions, such as human resources and financial reporting, through an organization-wide initiative to reduce duplication will free up chapter resources for service delivery. With this realignment, regional chapters now are expected to provide their local chapters with technical assistance, evaluate local chapters’ overall service delivery capacity, and identify strategies to maximize service delivery, according to the Red Cross. Second, the Red Cross is working to strengthen its local chapters’ relationships with local faith- and community-based organizations so as to help better serve diverse and hard-to-reach populations. During Katrina, the Red Cross lacked such relationships in certain parts of the country, including hurricane-prone areas, and did not consistently serve the needs of many elderly, African-American, Latino, and Asian-American disaster victims and people with disabilities. To remedy this, the Red Cross initiated a new community partnership strategy under which local chapters identify key community organizations as possible disaster response partners and enter into agreements with them on resources to be provided, including reimbursements for costs associated with sheltering disaster victims. The partnership strategy’s goals include improving service to specific communities by overcoming linguistic and cultural barriers; increasing the number of possible facilities for use as shelters, service centers, and warehouses; and enlisting the support of organizations that have relationships with the disabled community. According to Red Cross officials, local chapters around the country have initiated thousands of new partnerships with faith-based and local community organizations. However, because these partnerships are formed at the local chapter level, the national office does not track the exact number of new agreements signed, according to the Red Cross. 
In addition, the Red Cross has also taken some actions to better address the mass care needs of disaster victims with disabilities—a particular concern during Katrina—although concerns still remain about the nation’s overall preparations for mass care for people with disabilities. For example, the Red Cross developed a shelter intake form to help volunteers determine if a particular shelter can meet an individual’s needs, as well as new training programs for staff and volunteers that specifically focus on serving the disabled, as we previously reported. It has also prepositioned items such as cots that can be used in conjunction with wheelchairs in warehouses to improve accessibility to shelters. However, as we reported in February 2008, Red Cross headquarters officials told us that some local chapters were not fully prepared to serve people with disabilities and that it was difficult to encourage local chapters to implement accessibility policies. In the report we also noted that FEMA had hired a disability coordinator to improve mass care services for the disabled, but it had not yet coordinated with the National Council on Disability, as required under the Post-Katrina Act. More specifically, we recommended that FEMA develop a set of measurable action steps, in consultation with the disability council, for coordinating with the council. According to the National Council on Disability, while FEMA and the council have met on several occasions to discuss their joint responsibilities under the Post-Katrina Act, FEMA has not yet developed action steps for coordination in consultation with the council. FEMA officials told us they are preparing an update for us on their response to the recommendation. Although the Red Cross recently significantly reduced its staffing levels, the staffing cutbacks were designed to uphold the organization’s delivery of disaster services, according to the Red Cross. 
Red Cross national officials told us that overall, these and other staffing cuts were designed to leave service delivery intact and that the Red Cross plans to maintain the reorganization of its chapter and service level structure as well as its community partnership initiative. However, since these changes are so recent, it remains to be seen how or whether the cuts and realignment of responsibilities will affect the organization’s post-Katrina efforts to expand and strengthen its service delivery. On the basis of their experiences with large-scale disasters, including Katrina, the national offices, and to some extent the local offices, of the direct service providers in our study reported increasing their coordination with one another to varying degrees. In particular, they collaborated more on feeding operations and information sharing and made logistical and communications improvements to prevent future problems, according to organization officials. With regard to mass care services, officials from the national offices of the Red Cross, The Salvation Army, and the Southern Baptist Convention—the three mass care providers in our review—reported increasing their collaboration on delivering mass feeding services. During Katrina, mass care services were duplicated in some locations and lacking in others, partly because voluntary organizations were unable to communicate and coordinate effectively. One reason for this confusion, according to the Southern Baptist Convention, was that many locally based volunteers were unaware that the national offices of the Red Cross and the Southern Baptist Convention had a mutual aid agreement to work with each other on feeding operations and as a result did not coordinate effectively. Since Katrina, the Southern Baptist Convention and the Red Cross have developed a plan to cross-train their kitchen volunteers and combine their core curricula for kitchen training. 
Similarly, The Salvation Army and the Southern Baptist Convention—which also collaborate on mass feeding services—created a joint training module that cross-trains Southern Baptist Convention volunteers to work in Salvation Army canteens and large Salvation Army mobile kitchens. The two organizations also agreed to continue liaison development. In addition, the voluntary organizations in our study told us that they shared case management information on the services they provide to disaster survivors through the Coordinated Assistance Network (CAN)— which is a partnership among several national disaster relief nonprofit organizations. After September 11, CAN developed a Web-based case management database system that allows participating organizations to reduce duplication of benefits by sharing data about clients and resources with each other following disasters. This system was used in Katrina and subsequent disasters. The Red Cross, The Salvation Army, and the United Way were among the seven original partners that developed and implemented CAN. According to officials from the Red Cross’s national headquarters office, CAN has served as a tool for improving coordination and maintaining consistency across organizations and has also fostered collaboration at the national level among organization executives. An official from Catholic Charities USA told us it has seen a reduction in the duplication of services to clients since it began participating in CAN. Officials from some local voluntary organizations and VOADs in two of the local areas we visited—New York City and Washington, D.C.—said they participate in CAN. In New York City, Red Cross officials said CAN was used to support the Katrina victims who were evacuated to the area. 
Catholic Charities officials told us that following September 11, CAN helped ease the transition between the Red Cross’s initial case management services and longer-term services provided by other organizations. In addition, an official from the local VOAD said using CAN is a best practice for the sector. The three voluntary organizations that provide mass care services have taken steps to improve their supply chains by coordinating more with each other and FEMA to prevent the breakdown in logistics that had occurred during Hurricane Katrina, according to officials we spoke with. In responding to Hurricane Katrina, the Red Cross, FEMA, and others experienced difficulties determining what resources were needed, what was available, and where resources were at any point in time, as we and others reported. Since then, the Red Cross and FEMA’s logistics department have communicated and coordinated more on mass care capacity, such as the inventory and deployment of cots, blankets, and volunteers, according to national office Red Cross officials. The Red Cross also said the logistics departments of the Red Cross and FEMA meet regularly and that the two organizations are working on a formal agreement and systematically reviewing certain areas, such as sharing information on supplies and warehousing. In addition to the Red Cross, the Southern Baptist Convention and The Salvation Army made changes to improve their supply chain management systems. In Katrina, the Southern Baptist Convention experienced a breakdown in the system that prevented it from replenishing its depleted mobile kitchen stock, according to officials from the organization. 
While FEMA ultimately helped with supplies, the Southern Baptist Convention has since collaborated with the Red Cross and The Salvation Army to develop a supply chain management system to minimize logistical problems that could interfere with its ability to provide feeding services, according to national office officials from the Southern Baptist Convention. To ensure that disaster staff and volunteers can receive and share information during a disaster, the voluntary organizations in our review told us they had to varying degrees strengthened their communications systems since Katrina. Hurricane Katrina destroyed core communications systems throughout the Gulf Coast, leaving emergency responders and citizens without a reliable network needed for coordination. Since then, to prevent potential loss of communication during disasters, the Red Cross increased its stock of disaster response communications equipment and prepositioned emergency communications response vehicles equipped with Global Positioning Systems. According to organization officials, the Red Cross prepositioned communications equipment in 51 cities across the country, with special attention to hurricane-prone areas. The Red Cross also provided some communications equipment to the Southern Baptist Convention for its mobile kitchens and trucks. According to Red Cross national office officials, the organization’s long-term goal for communications is to achieve interoperability among different systems such as landline, cellular, and radio networks. Furthermore, the Red Cross reported that it can communicate with FEMA and other federal agencies during a disaster through its participation in the national warning system and its use of a high-frequency radio program also used by federal agencies; in contrast, communication with nonfederal organizations is through liaisons in a facility or by e-mail or telephone. 
In addition to these Red Cross efforts, the Southern Baptist Convention enabled its ham radio operators throughout the country to directly access its national disaster operations center through a licensed radio address, began including a communications officer in each of its incident command teams, and established a standard communications skill set for all of its local affiliates, among other improvements. Local Salvation Army units also reported upgrading their communications system since Katrina. In Washington, D.C., The Salvation Army began developing an in-house communications system in the event that cellular and satellite communications networks are down, and in Miami, The Salvation Army equipped its canteens with Global Positioning Systems to help disaster relief teams pinpoint locations if street signs are missing due to a disaster. In addition, Catholic Charities in Miami purchased new communications trailers with portable laptop computer stations, Internet access, a generator, and satellite access, according to a Catholic Charities official. Although initial assessments do not yet fully capture the collective capabilities of major voluntary organizations, the evidence suggests that without government and other assistance, a worst-case large-scale disaster would overwhelm voluntary organizations’ current mass care capabilities in the metropolitan areas we visited. The federal government and voluntary organizations have started to identify sheltering and feeding capabilities. However, at this point most existing assessments are locally or regionally based and do not provide a full picture of the nationwide capabilities of these organizations that could augment local capabilities. Furthermore, attempts to develop comprehensive assessments are hindered by the lack of standard terms and measures in the field of mass care. 
In the four metro areas we visited, the American Red Cross, The Salvation Army, and the Southern Baptist Convention were able to provide information on their local sheltering and feeding resources, and in large- scale disasters their substantial nationwide resources could be brought to bear in an affected area. Nevertheless, the estimated need for sheltering and feeding in a worst-case large-scale disaster—such as a Katrina-level event—would overwhelm these voluntary organizations. We also found, however, that many local and state governments in the areas we visited, as well as the federal government, are planning to use government employees and private sector resources to help address such extensive needs. Red Cross and FEMA officials also told us that in a catastrophic situation, assistance will likely be provided from many sources, including the general public, as well as the private and nonprofit sectors, that is not part of any prepared or planned response. Because the assessment of capabilities among multiple organizations nationwide is an emerging effort—largely post-Katrina—it does not yet allow for a systematic understanding of the mass care capabilities that voluntary organizations can bring to bear to address large-scale disasters in the four metropolitan areas in our review. Assessments help organizations identify the resources and capabilities they have as well as potential gaps. To assess capabilities in such disasters in any metro area, it is necessary to have information not only on an organization’s local capabilities but also its regional and nationwide capabilities. Under this scalable approach—which is a cornerstone of the Framework and the Catastrophic Supplement as well—local voluntary organizations generally ramp up their capabilities to respond to large-scale disasters, a process that is shown in figure 4.
Voluntary organizations are generally able to handle smaller disasters using locally or regionally based capabilities, but in a large-scale disaster their nationwide capabilities can be brought to bear in an affected area. While our focus in this review is on voluntary organizations’ resources and capabilities, governments at all levels also play a role in addressing mass care needs in large-scale disasters. In anticipation of potential disasters, the federal government and the Red Cross have separately started to assess sheltering and feeding capabilities, but these assessments involve data with different purposes, geographic scope, and disaster scenarios. Consequently they do not yet generate detailed information for a comprehensive picture of the capabilities of the voluntary organizations in our review. (See table 5.) FEMA is currently spearheading two initiatives that to some extent address the mass care capabilities of voluntary organizations in our review. FEMA’s Gap Analysis Program, which has so far looked at state capabilities in 21 hurricane-prone states and territories, has begun to take stock of some voluntary organizations’ capabilities. According to FEMA officials, states incorporated sheltering data from organizations with which they have formal agreements. In the four metro areas we visited, however, we found that—unlike the Red Cross—The Salvation Army and the Southern Baptist Convention did not generally have formal agreements with the state or local government. For this reason, it is unlikely that their resources have been included in this first phase, according to FEMA officials. Also, this initial phase of analysis did not assess feeding capabilities outside of those available in shelters, a key facet of mass care for which voluntary organizations have significant resources. 
Another form of assessment under way through FEMA and the Red Cross—the National Shelter System database—which collects information on shelter facilities and capacities nationwide—largely consists of shelters operated by the Red Cross, and states have recently entered new data on non-Red Cross shelters as well. While officials from The Salvation Army and other voluntary organizations told us they have shelters at recreation centers and other sites that are not listed in this database, FEMA officials told us the accuracy of the shelter data is contingent upon states reporting information into the system and updating it frequently. FEMA has offered to have its staff help states include non-Red Cross shelter data in the database and has also provided or facilitated National Shelter System training in 26 states and 3 territories. As of July 2008, shelters operated by the Red Cross account for about 90 percent of the shelters listed, and according to FEMA officials, 47 states and 3 territories have entered non-Red Cross shelter data into the database. In commenting on the draft report, FEMA noted that in addition to these assessments, the agency is conducting catastrophic planning efforts to help some states develop sheltering plans for responding to certain disaster scenarios. For example, the states involved in planning efforts for the New Madrid earthquake are developing plans to protect and assist their impacted populations and identifying ways to augment the resources provided by voluntary organizations and the federal government. Of the voluntary organizations in our review, the Red Cross is the only one that has, to date, undertaken self-assessments of its capabilities. First, its annual readiness assessments of individual local chapters provide an overview of locally based capabilities for disasters of various scales and identify shortfalls in equipment and personnel for each chapter.
Second, the Red Cross has also conducted comprehensive assessments of its sheltering and feeding capabilities in six high-risk areas of the country as part of its capacity-building initiative for those areas. Focusing on the most likely worst-case catastrophic disaster scenario for each area, this initiative reflects the Red Cross’s primary means of addressing its responsibilities under the federal Catastrophic Supplement. Red Cross officials said that while they incorporated data from The Salvation Army and the Southern Baptist Convention into this assessment, many of their other partner organizations were unable to provide the Red Cross with such information. The Salvation Army and Southern Baptist Convention officials with whom we spoke said they have not yet assessed their organizations’ nationwide feeding capabilities, although they were able to provide us with data on the total number of mobile kitchens and other types of equipment they have across the country. Also underlying the problem of limited data on voluntary organizations is the lack of standard terminology and measures for characterizing mass care resources. For example, voluntary organizations do not uniformly use standard classifications for their mobile kitchens. This makes it difficult to quickly determine total capacity when dozens of mobile kitchens from different organizations arrive at a disaster site, or to assess overall capabilities in advance. While DHS requires all federal departments and agencies to adopt standard descriptions and measures—a process defined in NIMS as resource typing—voluntary organizations are not generally required to inventory their assets according to these standards. Red Cross officials report that their organization does follow these standards, but The Salvation Army and Southern Baptist Convention officials said their organizations currently do not, although the latter has taken steps to do so.
Specifically, national Southern Baptist officials said they are working with the Red Cross and The Salvation Army to standardize their mobile kitchen classifications using NIMS resource definitions. We also found indications of change at the local level in California with regard to The Salvation Army. Officials there told us they used NIMS resource typing to categorize the organization’s mobile kitchens in the state and that they have provided these data to California state officials. Meanwhile, FEMA is also working with NVOAD to standardize more ESF-6 service terms, in accordance with its responsibilities under the Post-Katrina Reform Act. This initiative currently includes terms and definitions for some mass care services such as shelter management and mobile kitchens. However, FEMA officials said it may be several years before additional standard terms and measures are fully integrated into disaster operations. Although systematic assessments of mass care capabilities are limited, it is evident that in large-scale, especially worst-case, catastrophic disasters, the three mass care voluntary organizations would not likely be able to fulfill the need for sheltering and feeding in the four metropolitan areas in our review without government and other assistance, according to voluntary organization officials we interviewed as well as our review of federal and other data. Red Cross officials, as well as some officials from other organizations we visited, generally agreed that they do not have sufficient capabilities to single-handedly meet all of the potential sheltering and feeding needs in some catastrophic disasters. While the mass care resources of these voluntary organizations are substantial, both locally and nationally, our analysis indicates a likely shortage of both personnel and assets. Anticipating such shortages, the voluntary organizations we spoke with are making efforts to train additional personnel.
Local, state, and federal government officials we spoke with told us that their agencies—which play key roles in disaster response—were planning to use government employees and private sector resources in such disasters in addition to the resources of voluntary organizations. Red Cross and FEMA officials also told us that in a catastrophic situation, assistance will likely be provided from many sources, including the general public, as well as the private and nonprofit sectors, that are not part of any prepared or planned response. Within the past few years, DHS, the Red Cross, and others have developed estimates of the magnitude of mass care services that might be needed to respond to worst-case catastrophic disasters, such as various kinds of terrorist attacks or a hurricane on the scale of Katrina or greater. The estimates vary according to the type, magnitude, and location of such disasters and are necessarily characterized by uncertainties. (See table 6.) Although sheltering resources are substantial, in a worst-case large-scale disaster, the need for sheltering would likely exceed voluntary organizations’ current sheltering capabilities in most metro areas in our study, according to government and Red Cross estimates of needs. The preponderance of shelters for which data are available are operated by the Red Cross in schools, churches, community centers, and other facilities that meet structural standards, but The Salvation Army and other organizations also operate a small number of sheltering facilities as well. The Red Cross does not own these shelter facilities, but it either manages the shelters with its own personnel and supplies under agreement with the owners or works with its partner organizations and others to help them manage shelters. At the national level, the Red Cross has identified 50,000 potential shelter facilities across the country, as noted in the National Shelter System database.
In addition, the Red Cross has enough sheltering supplies, such as cots and blankets, to support up to 500,000 people in shelters nationwide. However, while disaster victims can be evacuated to shelters across the country if necessary, as happened after Katrina, Red Cross officials told us they prefer to shelter people locally. In the four metro areas we visited, the Red Cross has identified shelter facilities and their maximum or potential capacities, as shown in table 7. Despite local and nationally available resources, the kinds of large-scale disasters for which estimates of need exist would greatly tax and exceed the Red Cross’s ability to provide sheltering. For example, for a major earthquake in a metropolitan area, DHS estimates that 313,000 people would need shelter, but in Los Angeles—a city prone to earthquakes—Red Cross officials told us they are capable of sheltering 84,000 people locally under optimal conditions. The Red Cross’s own analyses of other types of worst-case disaster scenarios also identified shortages in sheltering capacity in New York and Washington, D.C. For example, for a nuclear terrorist attack in Washington, D.C., the Red Cross estimates that 150,000 people would need sheltering in the National Capital Region and identified a gap of over 100,000 shelter spaces after accounting for existing capabilities. The ability to build or strengthen sheltering capabilities depends on several elements, including the availability of trained personnel and supplies, the condition of shelter facilities, and the particular disaster scenario and location, among other things. Chief among these constraints, according to national and local Red Cross officials, is the shortage of trained volunteers.
Red Cross officials said that, as of May 2008, 17,000 volunteers and staff in the Red Cross’s national disaster services human resources program had received extensive training in sheltering, and an additional 16,000 Red Cross workers trained in mass care could be deployed across the country. However, local chapters are still expected to be self-sufficient for up to 5 days after a large-scale disaster occurs, while staff and volunteers are being mobilized nationwide. According to the Red Cross’s annual chapter assessments, personnel shortages limit the ability of all four chapters we visited to manage the local response beyond certain levels. In New York City, Red Cross officials noted that the chapter has identified enough shelters to optimally accommodate more than 300,000 people, but that it has only enough personnel locally to simultaneously operate 25 shelters, for a total sheltering capability of 12,500 people. The Red Cross is working with its local chapters to develop action plans to address personnel shortages. For example, in New York, the Red Cross has set a goal of recruiting 10,000 new volunteers to operate shelters—beyond the 2,000 it had as of December 2007—and plans to attract 850 new volunteers each quarter. In addition, supply chain and warehousing challenges affect the ability to maximize sheltering capabilities. According to Red Cross officials, it is not necessary to maintain large inventories of some supplies, such as blankets, if they can be quickly and easily purchased. However, obtaining other supplies such as cots requires a long lead time since they may need to be shipped from as far away as China, a fact that can be particularly problematic in no-notice events such as major earthquakes. While purchasing supplies as needed can reduce warehousing costs, this approach can also be affected by potential disruptions in the global supply chain, according to officials we spoke with.
In DHS’s Catastrophic Incident Supplement, an underlying assumption is that substantial numbers of trained mass care specialists and managers will be required for an extended period of time to sustain mass care sheltering and feeding activities after a catastrophic disaster. In recognition of the need to increase the number of trained personnel to staff existing shelters, state and local governments in the four metropolitan areas we visited told us they are planning to train and use government employees to staff shelters in such large-scale disasters. For example, in New York City, the Office of Emergency Management is preparing to use trained city government employees and supplies to provide basic sheltering care for up to 600,000 residents in evacuation shelters. The city-run evacuation shelters would be located at schools for the first few days before and after a catastrophic hurricane. After this initial emergency plan is implemented, the city expects the Red Cross to step in and provide more comprehensive sheltering services to people who cannot return to their homes. As Red Cross officials told us, the New York City government is the only local organization with the potential manpower to staff all the available shelters, but the Red Cross will also provide additional personnel to help operate some of the city’s evacuation shelters and special medical needs shelters. As of November 2007, 22,000 New York City employees had received shelter training through a local university, with some additional training from the Red Cross. Similarly, in Los Angeles, approximately 1,400 county employees had been trained in shelter management as of January 2008, and the Red Cross has set a goal to train 60,000 of the county’s 90,000 employees. In addition, state governments have resources, equipment, and trained personnel that can be mobilized to provide mass care, according to state and FEMA officials.
States can also request additional resources from neighboring states through their mutual aid agreements. According to Red Cross and FEMA officials, in a catastrophic disaster, sheltering assistance would likely be provided from many sources, such as churches and other community organizations, as occurred in the aftermath of Hurricane Katrina, and they also noted that such assistance was not part of any prepared or planned response. Although voluntary organizations’ feeding resources are also substantial, the feeding needs in a worst-case large-scale disaster would likely exceed the voluntary organizations’ current feeding capabilities for most metro areas in our review, according to government and Red Cross estimates of needs. In their feeding operations, voluntary organizations make use of mobile kitchens or canteens to offer hot meals and sandwiches, prepackaged meals known as meals-ready-to-eat (MRE), and hot and cold meals prepared by contracted private vendors. The Red Cross, The Salvation Army, and the Southern Baptist Convention have locally based resources for feeding disaster victims in the four metro areas we visited. For example, The Salvation Army and the Southern Baptist Convention have mobile kitchens stationed in close proximity to each of the four metro areas we visited. Some of these mobile kitchens are capable of producing up to 25,000 meals per day. The Red Cross also has feeding resources in these metro areas including prepackaged meals, vehicles equipped to deliver food, and contracts with local vendors to prepare meals. In addition, by mobilizing nationwide resources, such as mobile kitchens and prepackaged meals, the Red Cross reports that it currently has the capability, together with the Southern Baptist Convention, to provide about 1 million meals per day—about the maximum number of meals served per day during Katrina.
Across the country, The Salvation Army has 697 mobile kitchens and other specialized vehicles and the Southern Baptist Convention has 117 mobile kitchens that can be dispatched to disaster sites, according to organization officials. Furthermore, Red Cross officials said they have 6 million prepackaged meals stockpiled in warehouses across the country that can be quickly distributed in the first few days after a disaster, before mobile kitchens are fully deployed to the affected area. These officials also said that the Red Cross can tap into additional food sources, such as catering contracts with food service providers, during prolonged response efforts. Despite these substantial resources nationwide, in a worst-case large-scale disaster, feeding needs would still greatly exceed the current capabilities of these voluntary organizations, according to government and Red Cross estimates of needs under different scenarios. For example, DHS estimates that feeding victims of a major earthquake would require approximately 1.5 million meals per day, but this need is considerably greater than the 1 million meals per day currently possible, leaving a shortfall of about 500,000 meals per day. According to state government estimates, the gap is even larger for other types of disaster scenarios. For example, according to Florida state estimates, a category IV hurricane could produce the need for 3 million meals per day, which is considerably greater than the 1 million meals per day that the Red Cross can provide. In addition, a nuclear terrorist attack in Washington, D.C., would require 300,000 meals per day more than the Red Cross’s current capabilities allow, according to the Red Cross’s internal assessments. The ability to build or strengthen feeding capabilities depends on the availability of trained personnel, equipment, and supplies. As with sheltering, some voluntary organization officials told us that the key constraint is the limited availability of trained personnel.
Providing feeding services is a labor-intensive process. For example, Southern Baptist Convention officials said it takes a team of 50 trained people to operate a large mobile kitchen, and an additional 50 people are needed every 4 days because teams are rotated in and out of disaster sites. Southern Baptist Convention officials said that although they have 75,000 trained volunteers in their organization, there are still not enough trained volunteers, especially experienced team leaders. They said the shortage of experienced team leaders is particularly challenging because mobile kitchens cannot be deployed without a team leader. The voluntary organizations are addressing these personnel shortages by promoting training programs for new staff and volunteers and also utilizing additional unaffiliated, untrained volunteers who join during response efforts. For example, according to The Salvation Army, its national disaster training program has trained more than 16,000 personnel throughout the United States since 2005. In addition, supply disruptions are also a major concern in large-scale disasters because mobile kitchens and other feeding units need to be restocked with food and supplies in order to continue providing meals. Red Cross officials told us they are in the process of expanding their food supply by contracting with national vendors to provide additional meals during disasters. In addition, as previously mentioned, the Southern Baptist Convention faced problems resupplying its mobile kitchens during the response to Hurricane Katrina and has since taken steps to develop a supply chain management system with the Red Cross and The Salvation Army to minimize future logistical problems. In the four metro areas we visited, some state and local government officials we met with told us they are planning to fill these gaps in feeding services by contracting with private sector providers.
In Florida, the state is planning to use private sector contractors to fill gaps in feeding services in preparation for a catastrophic hurricane. A Florida state official said obtaining and distributing the estimated 3 million meals per day that would be needed is a huge logistical challenge that would require the state to use 20 to 40 private vendors. In Washington, D.C., emergency management officials said they are also establishing open contracts with private sector providers for additional prepackaged meals and other food supplies. As a result of FEMA’s new responsibilities under the Post-Katrina Act and its new role as the primary agency for mass care under the National Framework, FEMA officials told us that the agency was working to identify additional resources for situations in which the mass care capabilities of government and voluntary organizations are exceeded. FEMA officials said that FEMA has developed contracts with private companies for mass care resources for situations in which the needs exceed federal capabilities. After Katrina, FEMA made four noncompetitive awards to companies for housing services. Since then, contracts for housing services have been let through a competitive process and broadened in scope so that if a disaster struck now they could also include facility assessment for shelters, facility rehabilitation—including making facilities accessible—feeding, security, and staffing shelters. According to the FEMA official in charge of these contracts, the contracts gave the federal government the option of purchasing the resources it needs in response to disasters. FEMA officials said, however, that they prefer using federal resources whenever possible because private sector contract services are more expensive than federal resources. FEMA also has a mass care unit that is responsible for coordinating ESF-6 partner agency activities and assessing state and local government shelter shortfalls.
According to FEMA, the members of the mass care unit based in Washington, D.C., are subject matter experts trained in various mass care operations, including sheltering. Mass care teams have been deployed to assist with sheltering operations in disasters such as the California wildfires of 2007 and the Iowa floods of 2008. FEMA regional offices have also begun to hire staff dedicated to mass care. Shortages of trained personnel, the difficulty of identifying and dedicating financial resources for preparedness activities, and the need to strengthen connections with government agencies continue to challenge the voluntary organizations in our study. Voluntary organizations in our review continue to face shortages in trained staff to work on preparing for future disasters, among other things, and volunteers to help provide mass care services, even though voluntary organizations and government agencies we met with made efforts to train additional personnel. Identifying and dedicating financial resources for disaster planning and preparedness become increasingly difficult as voluntary organizations also strive to meet competing demands. In addition, the level of involvement and interaction of voluntary organizations in disaster planning and coordination with government agencies is an ongoing challenge, even for the American Red Cross, which has recently changed the way it works with FEMA and state governments. The most commonly cited concern that voluntary organizations have about their capabilities is the shortage of trained staff or volunteers, particularly for disaster planning and preparedness, according to voluntary organization officials. State and local governments are primarily responsible for preparing their communities to manage disasters locally—through planning and coordination with other government agencies, voluntary organizations, and the private sector.
However, voluntary organization officials we met with told us it was difficult for them to devote staff to disaster planning, preparedness activities, and coordination. At the national level, the Southern Baptist Convention and Catholic Charities USA maintained small staffs of one or two people that work on disaster preparedness and coordination, which they said made preparedness and coordination for large-scale disasters challenging. At the local level, we also heard that staff who were responsible for disaster planning for their organization had multiple roles and responsibilities, including coordinating with others involved in disaster response as well as daily responsibilities in other areas. This was particularly an issue for the faith-based organizations, such as The Salvation Army and the Southern Baptist Convention, for whom disaster response, while important, is generally ancillary to their primary mission. For example, in Florida the state Southern Baptist Convention has a designated staff member solely focused on disaster relief and recovery, but other state Southern Baptist Conventions expect disaster staff to split their time between disaster work and other responsibilities, such as managing the men’s ministry, and generally do not have the time or ability to interact with the state emergency management agency, according to an official from the Florida Southern Baptist Convention. Similarly, a Salvation Army official in Miami commented that The Salvation Army could do more if it had a dedicated liaison employee to help with its local government responsibilities, including coordinating the provision of mass care services, which the organization provides in agreement with the local government. According to a national official from Catholic Charities USA, local Catholic Charities that provide disaster services usually have one employee to handle the disaster training and response operation, in addition to other responsibilities.
While it would be ideal for all local Catholic Charities to have at least two or three employees trained in disaster response, she said, the organization currently does not have resources for this training. In New York and Los Angeles, officials from Catholic Charities confirmed that the lack of personnel capable of responding to disasters is an ongoing challenge for their organization. These shortages in trained staff affected the ability of some local voluntary organizations and VOADs we met with to develop and update business continuity and disaster response plans, according to officials from these organizations. In Los Angeles, an official from Catholic Charities told us that it does not have a disaster or continuity-of-operations plan tailored to the organization’s needs, because it does not have dedicated disaster staff to develop such plans. Voluntary organization officials in Miami emphasized the importance of having such continuity plans, because after Hurricanes Katrina and Wilma struck Florida in 2005, most of the local voluntary organizations in the area were unable to provide services due to damage from the storms. In addition, organizations and VOADs we visited said that they struggle to update their disaster response plans. For instance, in Los Angeles, an official from the local VOAD told us that the organization’s disaster response plan needed to be updated, but that the VOAD has not addressed this need because of staffing limitations. This official also told us the VOAD was planning to hire two full-time staff sometime in 2008 using federal pandemic influenza funds received through the county public health department. In addition, as mentioned earlier, voluntary organization officials both nationally and locally told us that they face a shortage of trained volunteers, which limits their ability to provide sheltering and feeding in large-scale, and especially catastrophic disasters.
This concern persists despite the efforts of voluntary organizations and government agencies to build a cadre of trained personnel. Identifying and dedicating funding for disaster preparedness is a challenge for voluntary organizations in light of competing priorities, such as meeting the immediate needs of disaster survivors. Officials from voluntary organizations in our review told us that they typically raised funds immediately following a disaster to directly provide services, rather than for disaster preparedness—or, for that matter, longer-term recovery efforts. Although the Red Cross raised more than $2 billion to shelter, feed, and provide aid to disaster survivors following Katrina, the Red Cross recently acknowledged that it is less realistic to expect public donations to fund its nationwide disaster capacity-building initiatives. Similarly, the biggest challenge for Catholic Charities USA is identifying funds for essential disaster training—a key aspect of preparedness, according to an official. At the local level, an official from Catholic Charities in New York also noted that incoming donations tend to focus on funding the initial disaster response. As we previously reported, vague language and narrowly focused definitions used by some voluntary organizations in their appeals for public donations following the September 11 attacks contributed to debates over how funds should be distributed, particularly between providing immediate cash assistance to survivors and providing services to meet short- and long-term needs. An indication of this continuing challenge is that officials from Catholic Charities in Washington, D.C., and New York reported that they are still working with September 11 disaster victims and communities, and that they struggle to raise funds for long-term recovery work in general.
Besides public donations, federal grant programs could provide another potential source of preparedness funding for voluntary organizations; however, local voluntary organization officials told us it was difficult to secure funding through these programs without support from the local government. Local voluntary organization officials we met with said that federal funding for disaster preparedness, such as the Urban Areas Security Initiative Grant Program, could be useful in helping their organizations strengthen their capabilities. For example, such grants could be used to coordinate preparedness activities with FEMA and other disaster responders, better enable voluntary organizations to develop continuity of operations plans, and train staff and volunteers. However, although voluntary organizations are among those that play a role in the National Response Framework—especially in relation to ESF-6—these organizations received little to no federal funding through programs such as the Homeland Security Grant Programs, according to some local voluntary organization and VOAD officials we visited. Under most of these grants, states or local governments are the grant recipients, and other organizations such as police and fire departments can receive funds through the state or local governments. Of the local voluntary organizations and VOADs in our study, two Red Cross chapters received DHS funding in recent years, according to the Red Cross. In Los Angeles, Red Cross officials told us that the chapter had to be sponsored and supported by the local government in order to receive DHS funding for shelter equipment and supplies. While the director of FEMA’s grant office told us that FEMA considered voluntary organizations as among the eligible subgrantees for several preparedness grants under the Homeland Security Grant Program, the grant guidance does not state this explicitly.
According to fiscal year 2008 grant guidance, a state-designated administrative agency is the only entity eligible to formally apply for these DHS funds. The state agency is required to obligate funds to local units of government and other designated recipients, but the grant guidance does not define what it means by “other designated recipient.” In addition, FEMA strongly encourages the timely obligation of funds from local units of government to other subgrantees, as appropriate, but possible subgrantees are not identified. State agencies have considerable latitude in determining how to spend funds received through the grant program and which organizations to provide funds to, according to the FEMA grant director. However, for fiscal year 2005, approximately two-thirds of Homeland Security Grant Program funds were dedicated to equipment—such as personal protective gear, chemical and biological detection kits, and satellite phones—according to DHS, while 18 percent were dedicated to planning activities. An official from FEMA’s grants office told us that following the September 11 attacks, the grant program focused on prevention and protection from terrorism incidents, but it has evolved since Katrina. According to this official, the fiscal year 2008 grant guidance encourages states to work with voluntary organizations, particularly for evacuations and catastrophic preparedness. Furthermore, this official said it is possible that DHS grant funding has not yet trickled down to local voluntary organizations. It is possible that the tendency of DHS funding programs to focus on equipment for prevention and protection rather than on preparedness and planning activities could also shift as states and localities put equipment and systems into place and turn to other aspects of preparedness.
Local VOADs can play a key role in disaster preparation and response through interactions with local governments’ emergency management agencies, although the local VOADs in the areas we visited varied in their ability and approach to working with local governments on disasters. Like NVOAD, local VOADs are not service providers; instead, they play an important role in coordinating response and facilitating relationship building in the voluntary sector at the local level, according to government officials. Generally, most of the voluntary organizations in the locations we visited were members of their local VOADs. Several local government emergency managers told us they relied on the local VOADs as a focal point to help them coordinate with many voluntary organizations during disasters. Some local VOADs in our review met regularly and were closely connected to the local governmental emergency management agency—including having seats at the local emergency operations centers. More specifically, the Red Cross was a member of the local VOADs in the areas we visited. It also directly coordinated with government agencies during a disaster and had a seat at the local emergency operations center in all four locations. In New York and Miami, The Salvation Army units were VOAD members and had seats as well. Other VOADs were less active and experienced and were not as closely linked to governmental response. In Washington, D.C., the local VOAD has struggled to maintain a network and continually convene since its inception, according to the current VOAD Chair. In Miami, a local VOAD member told us that the VOAD had little experience with large-scale disasters, because it re-formed after Hurricane Katrina and the area has not experienced major hurricanes since then.
In addition, one of the local VOADs was tied to a local ESF-6 mass care operating unit, while others were more closely connected to an emergency function that managed unaffiliated volunteers and donations. The local VOAD in Los Angeles worked with the local government on ESF-6 issues, while the VOADs in Miami and Washington, D.C., coordinated with government agencies through managing volunteers and donations during disasters. Currently, NVOAD has few resources to support state and local VOADs. NVOAD’s executive director told us that NVOAD plans to provide state and local VOADs with more support using Web-based tools and guidance, but these plans are hindered by a lack of funding to implement them. As we recently reported, NVOAD is limited in its ability to support its national voluntary organization members, and also lacks the staff or resources to support its affiliated state and local VOADs. Because of these limitations, we recommended that NVOAD assess members’ information needs, improve its communication strategies after disasters, and consider strategies for increasing staff support after disasters. NVOAD agreed with this recommendation and reported that the organization is looking to develop communications systems that take better advantage of current technologies. Since our previous report was issued, NVOAD has expanded its staff from two to four members, some of whom are working to build the collective capacity of state and local VOADs and to provide training and technical assistance to state VOADs. At the federal level, although FEMA plays a central role in coordinating with voluntary organizations on mass care and other human services, its difficulties in coordinating activities with the voluntary sector due to staffing limitations were also noted in this earlier report.
At the time of our report, FEMA had only one full-time employee in each FEMA region—a voluntary agency liaison—to coordinate activities between voluntary organizations and FEMA, and FEMA liaisons did not have training to assist them in fully preparing for their duties. In light of FEMA’s responsibilities for coordinating the activities of voluntary organizations in disasters under the National Response Framework, we recommended that FEMA take additional actions to enhance the capabilities of FEMA liaisons in order to fulfill this role. FEMA agreed with our recommendation; however, it is too early to assess the impact of any changes to enhance liaisons’ capabilities. Last, because of its current budget deficit, the Red Cross faces new challenges in fulfilling its ESF-6 role as a support agency. The Red Cross noted that it is working closely with its government partners in leadership positions to manage the transition, following its staffing reductions at FEMA’s regional offices and elsewhere and the subsequent realignment of staff responsibilities. The Red Cross reported that it will monitor the impact of these changes and make adjustments as needed. At the same time, as was previously mentioned, the Red Cross has also requested $10 million in federal funding to cover its staffing and other responsibilities under ESF-6. According to FEMA officials, FEMA funded 10 regional positions to replace the Red Cross mass care planner positions that were terminated. FEMA also said that while it is too early to assess the long-term impact of these Red Cross staffing changes, FEMA was experiencing some hindrance to effective communications and limits on the Red Cross’s participation in planning at FEMA headquarters, regional offices, and field offices.
Regarding the Red Cross strategy of relying on shared resources and volunteers instead of full-time dedicated staff in FEMA regional offices, FEMA officials noted that dedicated staff has proven to be a more reliable source for an ongoing relationship and interaction between agencies. They expressed concern that the lack of dedicated staff, frequent rotations, and inconsistent skill level of volunteers—used instead of full-time Red Cross staff—will hamper communications and may impede coordination efforts. These concerns are similar to the difficulties Red Cross ESF-6 staff faced during Katrina, as we noted in a previous review. Because the American Red Cross and other major voluntary organizations play such a vital role in providing mass care services during large-scale disasters, the importance of having a realistic understanding of their capabilities cannot be overstated. FEMA has taken initial steps by having states assess their own capabilities and gaps in several critical areas and has completed an initial phase of this analysis. However, this broad assessment effort has yet to fully include the sheltering capabilities of many voluntary organizations and has not yet begun to address feeding capabilities outside of shelters. We understand that when a large-scale disaster strikes, some portion of mass care services will be provided by local voluntary organizations that did not specifically plan or prepare to do so, and that their capabilities cannot be assessed in advance. However, without more comprehensive data from voluntary sector organizations that expect to play a role, the federal government will have an incomplete picture of the mass care resources it could draw upon as well as of the gaps that it must be prepared to fill in large-scale and catastrophic disasters.
Unless national assessments more fully capture the mass care capabilities of key providers, questions will remain about the nation’s ability to shelter and feed survivors, especially in another disaster on the scale of Katrina. To the extent that local, state, and federal governments rely on voluntary organizations to step in and care for massive numbers of affected people, the challenges these organizations face in preparing for and responding to rare—but potentially catastrophic—disasters are of national concern. Reliant on volunteers and donations, many of the organizations we visited said that federal grant funding could help them better prepare for and build capacity for large-scale disasters, because they struggle to raise private donations for this purpose. Federal grants, while finite, are available to assist in capacity building, and voluntary organizations can be among those that receive federal grant funds from states and localities, according to FEMA officials. However, most of the voluntary organizations in our review have not received such funding, although they told us it would be beneficial. While there are many competing demands and priorities for such funds, clearer grant guidance could at least ensure that those making grant decisions consider voluntary organizations and VOADs as among those eligible to be subgrantees under these grants. Unless voluntary organizations are able to strengthen their capabilities and address planning and coordination challenges, the nation as a whole will likely be less prepared to provide mass care services during a large-scale disaster. An additional area of concern is the expected role of the Red Cross in a catastrophic disaster of a scale that invokes the federal government’s Catastrophic Incident Supplement.
As the experience with responding to Katrina showed, it is important to agree on roles and responsibilities, as well as to have a clear understanding of operating procedures, in the event of a catastrophic disaster. However, FEMA officials said they have not yet revised or updated the Supplement, as required under the Post-Katrina Reform Act, with the result that the mass care section of the Supplement still reflects the Red Cross’s previous role as primary agency for mass care, and not its current role as a support agency under ESF-6. While both FEMA and the Red Cross told us they expected the Red Cross to play a support agency role in a catastrophic event—consistent with ESF-6—unless this understanding is confirmed in writing and incorporated into federal planning documents for responding to a catastrophic event, the nature of that understanding will not be transparent to the many parties involved in supporting mass care. Finally, while it is too early to assess the impact of the changes in how the American Red Cross expects to coordinate with FEMA in fulfilling its responsibilities under ESF-6, its capacity to coordinate with FEMA is critical to the nation’s mass care response in large-scale disasters. As a result, the continued implementation, evolution, and effect of these changes bear watching. To help ensure that the Catastrophic Incident Supplement reflects the American Red Cross’s current role under ESF-6 as a support agency for mass care, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to establish a time frame for updating the mass care section of the Supplement so that it is consistent with the changes in ESF-6 under the new Framework and no longer requires the Red Cross to direct federal government resources. In the meantime, FEMA should develop an interim agreement with the Red Cross to document the understanding they have on the Red Cross’s role and responsibilities in a catastrophic event.
To more fully capture the disaster capabilities of major voluntary organizations that provide mass care services, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take steps to better incorporate these organizations’ capabilities into assessments of mass care capabilities, such as FEMA’s GAP Analysis, and to broaden its assessment to include feeding capabilities outside of shelters. Such steps might include soliciting the input of voluntary organizations, such as through NVOAD; integrating voluntary organization data on capabilities into FEMA’s analyses; and encouraging state governments to include voluntary mass care organization data in studies. To help these voluntary organizations better prepare for providing mass care in major and catastrophic disasters, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to clarify the Homeland Security Grant Program funding guidance for states so it is clear that voluntary organizations and local VOADs are among those eligible to be subgrantees under the program. We provided a draft of this report to DHS for review and comment. Overall, in its response, FEMA acknowledged the importance of mass care planning among governmental and nongovernmental partners and stated that the report provides a broader understanding of the mass care support capabilities of several national voluntary organizations. FEMA also agreed with our recommendation on establishing a time frame for updating the Catastrophic Incident Supplement and with our recommendation on clarifying Homeland Security Grant Funding guidance. However, FEMA criticized certain aspects of the draft report, asserting, for example, that our methodology did not address the role of states in coordinating mass care. FEMA also disagreed with our recommendation to better incorporate voluntary organizations’ capabilities in assessments. 
FEMA provided technical clarifications and additional examples of its planning activities, which we have incorporated into the report as appropriate. FEMA’s written comments are provided in appendix III of this report. In its comments, FEMA stated that our understanding of mass care service delivery and the report’s scope and methodology were flawed partly because, in the agency’s view, the draft report did not adequately reflect the role of state governments in disaster response. As stated in our objectives, the primary focus of our report, by intention and design, is on voluntary organizations’ roles and capabilities in disaster response. Our report accordingly discusses selected voluntary organizations’ services and roles, their actions in response to past disasters such as Katrina, their mass care capabilities, and the main challenges they face. As noted in our report, the National Response Framework and the Catastrophic Incident Supplement recognize the importance of voluntary organizations’ role in disaster response, particularly with regard to mass care. While focusing on voluntary organizations, our draft report also acknowledges and discusses the disaster response roles and responsibilities of governments—local, state, and federal—under the Framework. Accordingly, we interviewed local, state, and federal government emergency management officials, as stated in the more detailed description of our report’s methodology. We have added clarifying language as appropriate to our report to make clear our consideration of state governments’ role. In its comments, FEMA also raised concerns about whether the voluntary organizations discussed in our report provided a comprehensive picture of mass care capabilities. However, our report does not attempt to address all the services and capabilities of the voluntary sector.
The report acknowledges that other voluntary organizations also provide mass care and other services and includes the caveat that we do not attempt to assess the total disaster response capabilities in any single location we visited. As described in our section on scope and methodology, we selected the five voluntary organizations in our report because of their contributions to the Hurricane Katrina response and congressional interest in these particular organizations. While mass care represents a significant focus—and is the only focus in discussing capabilities—other findings are not limited in this way. For example, we discuss all five organizations’ services and contributions during Katrina. However, for the purpose of discussing mass care capabilities, we specifically focused on the three voluntary organizations in our review that provide mass care— the American Red Cross, The Salvation Army, and the Southern Baptist Convention. We did not include the United Way or Catholic Charities in our finding on capabilities because we agree that they do not provide mass care services. The draft report explicitly states this narrower scope and our reasons for it in key places. FEMA also commented that two tables in the draft report were incomplete. FEMA stated that table 5, on assessments of capabilities, did not include FEMA’s catastrophic planning efforts. Since the table focuses on assessments rather than broader planning initiatives, we have incorporated this information into our report as an example of FEMA’s planning efforts. FEMA also stated that table 6, on estimated mass care needs in worst-case disasters, represents a limited view of mass care capabilities and coordination. However, the table’s purpose is to provide estimates of the magnitude of mass care services that would be needed to respond to worst-case disasters, and not to identify gaps in capabilities or coordination issues, which we address in later sections of our report. 
We have modified the table’s title to clarify the information it contains. Regarding FEMA’s response to our recommendations, FEMA agreed with our recommendation that it should establish a time frame for updating the mass care section of the Catastrophic Incident Supplement so that it is consistent with the changes in ESF-6 under the new Framework. In its comments, FEMA noted it is in the process of establishing a timeline for updating the Catastrophic Incident Supplement and will change the role of the Red Cross from that of a primary to a support agency when the document is updated. We have noted this in the report accordingly. FEMA also agreed with our recommendation to clarify the Homeland Security Grant Program funding guidance so that it is clear voluntary organizations and VOADs are considered eligible subgrantees. FEMA disagreed with our recommendation that it should take steps to better incorporate the capabilities of voluntary organizations into assessments of mass care capabilities. Specifically, FEMA said that federal, state, and local governments cannot command and control private sector resources. We understand the limitations of governmental authority over voluntary organizations’ resources; however, as we mention in the draft report, under the Post-Katrina Act, FEMA is required to establish a comprehensive assessment system to assess the nation’s prevention capabilities and overall preparedness, including its operational readiness. While the act does not specifically include voluntary organizations’ capabilities, a comprehensive assessment of the nation’s capabilities should account as fully as possible for voluntary organizations’ capabilities in mass care. Taking steps to assess capabilities more fully does not require controlling these resources but rather cooperatively obtaining and sharing information.
This could be done as an extension of FEMA’s ongoing partnerships with voluntary organizations to help strengthen coordination and collaboration in preparation for future disasters. We continue to believe that it is important for FEMA to assess the significant capabilities that voluntary organizations can bring to bear in support of governmental efforts to prepare for and respond to disasters. As we noted in our report, without such an assessment, the government will have an incomplete picture of the mass care resources it can draw upon in large-scale disasters. In its comments, FEMA also asserted that our report incorrectly assumes that if funding were made available, it would enable voluntary organizations to shelter and care for people in catastrophic events. However, our report discusses potential federal funding in relation to voluntary organizations’ preparedness and planning activities, not direct services. As noted in the report, such funding could be used to strengthen voluntary organizations’ disaster preparedness, such as coordinating with FEMA, training personnel, and developing continuity-of-operations plans. If FEMA provided clearer guidance to states on voluntary organizations’ potential eligibility for these grants as subgrantees, voluntary organizations might be able to strengthen planning and coordination for future disasters and better utilize their substantial capabilities. A key premise of the National Response Framework is that voluntary organizations play a vital role in times of need, and recent experience has demonstrated that voluntary organizations already play significant roles in providing shelter and care in large-scale disasters. We also provided a copy of our draft report to the American Red Cross and excerpts from the draft report as appropriate to The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and National Voluntary Organizations Active in Disaster.
In its comments, the American Red Cross further explained its role in New York City’s coastal storm plan. We have added this information as appropriate to further clarify the Red Cross’s role under this scenario. In addition, the Red Cross, The Salvation Army, and NVOAD provided us with technical comments, which we have incorporated as appropriate. The American Red Cross’s written comments are provided in appendix IV. We are sending copies of this report to the Secretary of the Department of Homeland Security, the American Red Cross, The Salvation Army, the Southern Baptist Convention, Catholic Charities, the United Way, and NVOAD. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix V. We designed our study to provide information on (1) what the roles of major national voluntary organizations are in providing mass care and other human services in response to large-scale disasters requiring federal assistance, (2) what steps these organizations have taken since Katrina to strengthen their capacity for service delivery, (3) what is known about these organizations’ current capabilities for responding to mass care needs in such a large-scale disaster, and (4) what the remaining challenges are that confront voluntary organizations in preparing for such large-scale disasters. We focused our review on the following five major voluntary organizations based on their contributions during Hurricane Katrina and congressional interest: the American Red Cross, The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and the United Way of America. 
Since the United Way of America does not provide direct services in disasters, we did not include it in our analysis of recent improvements to service delivery, response capabilities, and remaining challenges. For our review of voluntary organizations’ response capabilities, we limited our focus to the three organizations in our study that provide mass care services: the Red Cross, The Salvation Army, and the Southern Baptist Convention. To obtain information for all of the objectives, we used several methodologies: we reviewed federal and voluntary organization documents; reviewed relevant laws; interviewed local, state, and federal government and voluntary agency officials; conducted site visits to four selected metropolitan areas; and collected data on the voluntary organizations’ capabilities. We reviewed governmental and voluntary organization documents to obtain information on the role of voluntary organizations, recent improvements to service delivery, response capabilities, and remaining challenges. To obtain an understanding of the federal disaster management framework, we reviewed key documents, such as the 2008 National Response Framework; the Emergency Support Function 6—Mass Care, Emergency Assistance, Housing, and Human Services Annex (ESF-6); the 2006 Catastrophic Incident Supplement; and the 2007 National Preparedness Guidelines, which collectively describe the federal coordination of mass care and other human services. We also reviewed pertinent laws, including the Post-Katrina Emergency Management Reform Act of October 2006. In addition, we reviewed documents for each of the five voluntary organizations in our review, which described their roles in disasters and explained their organizational response structures. These documents included mission statements, disaster response plans, and statements of understanding with government agencies and other voluntary organizations.
We also reviewed key reports written by federal agencies, Congress, voluntary organizations, policy institutes, and GAO to identify lessons learned from the response to Hurricane Katrina and steps voluntary organizations have taken since then to improve service delivery. We interviewed federal government and national voluntary organization officials to obtain information on the role of voluntary organizations, recent improvements to service delivery, response capabilities, and remaining challenges. At the federal level, we interviewed officials from the Federal Emergency Management Agency (FEMA) in the ESF-6 Mass Care Unit, the FEMA Grants Office, and the Disaster Operations Directorate. We also interviewed the executive director of the National Voluntary Organizations Active in Disaster (NVOAD). We interviewed these officials regarding the role of the voluntary organizations in disaster response, grants and funding offered to voluntary organizations, voluntary organization and government logistics in disasters, assessments of capabilities, and the types of interactions each of them has with the organizations from our review. We also interviewed national voluntary organization officials from the five organizations in our review about the roles of their organizations in disaster response, improvements the organizations had made to coordination and service delivery since Hurricane Katrina, their organizations’ capabilities to respond to disasters, and what remaining challenges exist for the organizations in disaster response. We visited four metropolitan areas—Washington, D.C.; New York, New York; Miami, Florida; and Los Angeles, California—to review the roles, response structures, improvements to service delivery, response capabilities, and challenges that remain for the selected voluntary organizations in these local areas.
We selected these metropolitan areas based on their recent experiences with disaster, such as September 11; their potential risk for large-scale disasters; and the size of their allotments through the federal Urban Areas Security Initiative grant program. The metropolitan areas that we selected also represent four of the six urban areas of the country considered most at risk for terrorism under the 2007 Urban Areas Security Initiative. During our visits to the four metropolitan areas, we interviewed officials from the five voluntary organizations, local and state government emergency management agency officials, the heads of the local Voluntary Organizations Active in Disaster (VOAD), and FEMA’s regionally based liaisons to the voluntary sector, known as voluntary agency liaisons (VALs). During our interviews, we asked about the roles and response structures of voluntary organizations in disaster response, improvements the organizations had made to coordination and service delivery since Hurricane Katrina, the organizations’ capabilities to respond to disasters, and what challenges exist for the organizations in disaster response. To review voluntary organizations’ sheltering and feeding capabilities, we collected data through interviews and written responses from the three organizations in our study that provide mass care: the Red Cross, The Salvation Army, and the Southern Baptist Convention. By capabilities we mean the means to accomplish a mission or function under specified conditions to target levels of performance, as defined in the federal government’s National Preparedness Guidelines. We collected data on both their nationwide capabilities and their locally based capabilities in each of the four metropolitan areas we visited.
To obtain capabilities data in a uniform manner, we requested written responses to questions about sheltering and feeding capabilities from these organizations in the localities we visited, and in many of these responses, voluntary organizations described how they derived their data. For example, to collect data on feeding capabilities, we asked voluntary organization officials how many mobile kitchens they have and how many meals per day they are capable of providing. To assess the reliability of the capability data provided by the voluntary organizations, we reviewed relevant documents and interviewed officials knowledgeable about the data. However, we did not directly test the reliability of these data because the gaps between capabilities and estimated needs were so large that greater precision would not change this underlying finding. It was also not within the scope of our work to review the voluntary organizations’ systems of internal controls for data on their resources and capabilities. To identify potential needs for mass care services, we used available estimates for catastrophic disaster scenarios in each of the selected metropolitan areas: Washington, D.C.—terrorism; New York, New York—hurricane; Miami, Florida—hurricane; and Los Angeles, California—earthquake. We reviewed federal, state, and Red Cross estimates of sheltering and feeding needs resulting from these potential catastrophic disasters:

Federal catastrophic estimates—We reviewed the earthquake estimates from the Target Capabilities List that were developed by the Department of Homeland Security (DHS) after an in-depth analysis of the Major Earthquake scenario in the National Planning Scenarios. The National Planning Scenarios were developed by the Homeland Security Council—in partnership with the Department of Homeland Security, other federal departments and agencies, and state and local homeland security agencies. The scenario assumes that a 7.2 magnitude earthquake, with a subsequent 8.0 earthquake, occurs along a fault zone in a major metropolitan area with a population of approximately 10 million people, which is approximately the population of Los Angeles County.

State catastrophic estimates—We reviewed catastrophic hurricane estimates from the Florida Division of Emergency Management’s Hurricane Ono planning project. The project assumes a Category V hurricane making landfall in South Florida, which has a population of nearly 7 million people.

Red Cross catastrophic estimates—We reviewed catastrophic estimates from the Red Cross’s risk-based capacity building initiative. To develop these estimates, the Red Cross worked with state and local officials and other disaster experts to develop “worst case” disaster scenarios in six high-risk areas of the country, including the four metropolitan areas in our study. The scenarios for these four metropolitan areas were: a 7.2 to 7.5 magnitude earthquake in Southern California; a chemical, biological, radiological, nuclear, or major explosion terrorist attack in the Washington, D.C. region; a Category III/IV hurricane in the New York metropolitan area; and a Category V hurricane in the Gulf Coast.

To identify general findings about nationwide preparedness, we compared the capabilities data provided by the voluntary organizations to these catastrophic disaster estimates. We did not attempt to assess the total disaster response capabilities in any single location that we visited or the efficacy of any responses to particular scenarios, such as major earthquakes versus hurricanes. We conducted this performance audit from August 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Adventist Community Services
America’s Second Harvest
American Baptist Men/USA
American Radio Relay League, Inc. (ARRL)
American Red Cross
Ananda Marga Universal Relief Team (AMURT)
Catholic Charities USA
Christian Disaster Response
Christian Reformed World Relief Committee (CRWRC)
Church of the Brethren—Brethren Disaster Ministries
Church of Scientology Disaster Response
Church World Service
City Team Ministries
Convoy of Hope
Disaster Psychiatry Outreach
Episcopal Relief and Development
Feed the Children
Friends Disaster Service, Inc.
Habitat for Humanity
HOPE Coalition America
Humane Society of the United States
International Aid
International Critical Incident Stress Foundation
International Relief and Development (IRD)
International Relief Friendship Foundation (IRFF)

In addition to the contact named above, Gale C. Harris, Assistant Director; Deborah A. Signer, Analyst-in-Charge; William W. Colvin; Amanda M. Leissoo; and Lacy Vong made significant contributions to this report. In addition, Susan Bernstein provided writing assistance, Jessica Botsford and Doreen Feldman provided legal assistance, Walter Vance provided technical assistance, and Mimi Nguyen assisted with the graphics.

The American Red Cross, From Challenge to Action: American Red Cross Actions to Improve and Enhance Its Disaster Response and Related Capabilities for the 2006 Hurricane Season and Beyond (Washington, D.C.: June 2006).
The Aspen Institute, Weathering the Storm: The Role of Local Nonprofits in the Hurricane Katrina Relief Effort (Washington, D.C.: 2006).
Department of Homeland Security, Office of Inspector General, FEMA’s Preparedness for the Next Catastrophic Disaster, OIG-08-34 (Washington, D.C.: March 2008).
Department of Homeland Security, Office of Inspector General, A Performance Review of FEMA’s Disaster Management Activities in Response to Hurricane Katrina, OIG-06-32 (Washington, D.C.: March 2006).
National Council on Disability, The Impact of Hurricanes Katrina and Rita on People with Disabilities: A Look Back and Remaining Challenges (Washington, D.C.: Aug. 3, 2006).
The Nelson A. Rockefeller Institute of Government and the Public Affairs Research Council of Louisiana, GulfGov Reports: Response, Recovery, and the Role of the Nonprofit Community in the Two Years Since Katrina and Rita (Albany, N.Y.: 2007).
United States House of Representatives, Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, A Failure of Initiative (Washington, D.C.: Feb. 15, 2006).
United States Senate Committee on Homeland Security and Governmental Affairs, Hurricane Katrina: A Nation Still Unprepared (Washington, D.C.: 2006).
The Urban Institute, After Katrina: Public Expectation and Charities’ Response (Washington, D.C.: May 2006).
The White House, The Federal Response to Hurricane Katrina: Lessons Learned (Washington, D.C.: February 2006).

Emergency Management: Observations on DHS’s Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: July 11, 2008.
Homeland Security: DHS Improved Its Risk-Based Grant Programs’ Allocation and Management Methods, but Measuring Programs’ Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008.
National Disaster Response: FEMA Should Take Action to Improve Capacity and Coordination between Government and Voluntary Sectors. GAO-08-369. Washington, D.C.: February 27, 2008.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
Emergency Management: Most School Districts Have Developed Emergency Management Plans, but Would Benefit from Additional Federal Guidance. GAO-07-609. Washington, D.C.: June 12, 2007.
Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007.
Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. February 2007.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery Systems. GAO-06-618. Washington, D.C.: September 2006.
Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. June 8, 2006.
Homeland Security Assistance for Nonprofits: Department of Homeland Security Delegated Selection of Nonprofits to Selected States and States Used a Variety of Approaches to Determine Awards. GAO-06-663R. Washington, D.C.: May 23, 2006.
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005.
September 11: More Effective Collaboration Could Enhance Charitable Organizations’ Contributions in Disasters. GAO-03-259. Washington, D.C.: December 19, 2002.
Voluntary organizations have traditionally played a major role in the nation's response to disasters, but the response to Hurricane Katrina raised concerns about their ability to handle large-scale disasters. This report examines (1) the roles of five voluntary organizations in providing mass care and other services, (2) the steps they have taken to improve service delivery, (3) their current capabilities for responding to mass care needs, and (4) the challenges they face in preparing for large-scale disasters. To address these questions, GAO reviewed the American Red Cross, The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and United Way of America; interviewed officials from these organizations and the Federal Emergency Management Agency (FEMA); reviewed data and laws; and visited four high-risk metro areas--Los Angeles, Miami, New York, and Washington, D.C. The five voluntary organizations we reviewed are highly diverse in their focus and response structures. They also constitute a major source of the nation's mass care and related disaster services and are integrated into the 2008 National Response Framework. The Red Cross in particular--the only one whose core mission is disaster response--has a federally designated support role to government under the mass care provision of this Framework. While the Red Cross no longer serves as the primary agency for coordinating government mass care services--as under the earlier 2004 National Plan--it is expected to support FEMA by providing staff and expertise, among other things. FEMA and the Red Cross agree on the Red Cross's role in a catastrophic disaster, but it is not clearly documented. While FEMA recognized the need to update the 2006 Catastrophic Incident Supplement to conform with the Framework, it does not yet have a time frame for doing so.
Since Katrina, the organizations we studied have taken steps to strengthen their service delivery by expanding coverage and upgrading their logistical and communications systems. The Red Cross, in particular, is realigning its regional chapters to better support its local chapters and improve efficiency, and establishing new partnerships with local community-based organizations. Most recently, however, a budget shortfall has prompted the organization to reduce staff and alter its approach to supporting FEMA and state emergency management agencies. While Red Cross officials maintain that these changes will not affect improvements to its mass care service infrastructure, it has also recently requested federal funding for its governmental responsibilities. Capabilities assessments are preliminary, but current evidence suggests that in a worst-case large-scale disaster, the projected need for mass care services would far exceed the capabilities of these voluntary organizations without government and other assistance--despite voluntary organizations' substantial resources locally and nationally. Voluntary organizations also face shortages in trained volunteers, as well as other limitations that affect their mass care capabilities. Meanwhile, FEMA's initial assessment does not necessarily include the sheltering capabilities of many voluntary organizations and does not yet address feeding capabilities outside of shelters. In addition, the ability to assess mass care capabilities and coordinate in disasters is currently hindered by a lack of standard terminology and measures for mass care resources, and efforts are under way to develop such standards. Finding and training more personnel, dedicating more resources to preparedness, and working more closely with local governments are ongoing challenges for voluntary organizations.
A shortage of staff and volunteers was most commonly cited, but we also found they had difficulty seeking and dedicating funds for preparedness, in part because of competing priorities. However, the guidance for FEMA preparedness grants to states and localities was also not sufficiently explicit with regard to using such funds to support the efforts of voluntary organizations.
The Results Act is the centerpiece of a statutory framework provided by recent legislation to bring needed improvements to federal agencies’ management activities. (Other parts of the framework include the 1990 Chief Financial Officers Act, the 1995 Paperwork Reduction Act, and the 1996 Clinger-Cohen Act.) Under the Results Act, every major federal agency must now ask itself some basic questions: What is our mission? What are our goals and how will we achieve them? How can we measure our performance? How will we use that information to make improvements? The act forces federal agencies to shift their focus away from such traditional concerns as staffing and activity levels and toward the results of those activities. VBA’s performance plan is included in VBA’s business plan, which is also included in VA’s fiscal year 1999 budget submission. In previous testimony before this Subcommittee, we noted that VBA’s planning process has been evolving. VBA first developed a strategic plan in December 1994, which covered fiscal years 1996 through 2001. The plan laid out VBA’s mission, strategic vision, and goals. For example, the vocational rehabilitation and counseling (VR) goal was to enable veterans with service-connected disabilities to become employable and to obtain and maintain suitable employment. In addition, a program goal was to treat beneficiaries in a courteous, responsive, and timely manner. However, as VA’s Inspector General noted, VBA’s plan did not include specific program objectives and performance measures that could be used to measure VBA’s progress in achieving its goals. In fiscal year 1995, VBA established a new Results Act strategic planning process that included business process reengineering (BPR). VBA began developing five “business-line” plans that corresponded with its major program areas: compensation and pension, educational assistance, loan guaranty, vocational rehabilitation and counseling, and insurance.
Each business-line plan supplemented the overall VBA strategic plan—which VBA refers to as its business plan—by specifying program goals that are tied to VBA’s overall goals. Also, each business-line plan identified performance measures that VBA intended to use to track its progress in meeting each plan’s goals. In VBA’s fiscal year 1998 budget submission, VBA set forth its business goals and measures, most of which were focused on the process of providing benefits and services, such as timeliness and accuracy in processing benefit claims. As with last year’s business plan, VBA’s fiscal year 1999 business plan continues to focus primarily on process-oriented goals and performance measures. VBA is, however, developing more results-oriented goals and measures for its five benefit programs. VBA officials consider this initial effort, which VBA hopes to complete by this summer, to be an interim step; final results-oriented goals and measures will be developed following program evaluations and other analyses, which VBA plans to conduct over the next 3 to 5 years. To help achieve its program goals, VBA has efforts under way to coordinate with other agencies that support veterans’ benefit programs; these efforts will need to be sustained to ensure quality service to veterans. VBA also faces significant challenges in setting clear strategies for achieving the goals it has established and in measuring program performance. For example, VBA considers its BPR efforts to be essential to the success of key performance goals, such as reducing the number of days it takes VBA to process a veteran’s disability compensation claim. VBA is, however, in the process of reexamining BPR implementation; at this point, it is unclear exactly how VBA expects reengineered processes to improve claims processing timeliness. VBA is also in the process of identifying and developing key data it needs to measure its progress in achieving specific goals.
At the same time, VBA recognizes, and is working to correct, data accuracy and reliability problems with its existing management reporting systems. In its fiscal year 1999 business plan, VBA has realigned its goals and measures to better link with VA’s departmentwide strategic and performance plans. In keeping with the overall structure of VA’s strategic and performance plans, each business-line plan has been organized into two sections. The first section—entitled “Honor, Care, and Compensate Veterans in Recognition of Their Sacrifices for America”—is intended to incorporate VBA’s results-oriented goals in support of VA’s efforts to do just that. The second section, entitled “Management Strategies,” incorporates goals related to customer satisfaction, timeliness, accuracy, costs, and employee development and satisfaction. This structure more clearly highlights the need to focus on program results as well as on process-oriented goals. VBA has also made some progress in developing results-oriented goals and measures for two of its five programs—VR and housing. In our assessments of VA’s strategic planning efforts, we determined that perhaps the most significant challenge for VA is to develop results-oriented goals for its major programs, particularly for benefit programs. As VBA notes in its business plan, the objective of the VR program is to increase the number of disabled veterans who acquire and maintain suitable employment and are considered to be rehabilitated. To measure the effectiveness of vocational rehabilitation program efforts to help veterans find and maintain suitable jobs, VBA has developed an “outcome success rate,” which it defines as the percentage of veterans who have terminated their program and who have met accepted criteria for program success. One major goal of VBA’s loan guaranty—or housing—program is to improve the abilities of veterans to obtain financing for purchasing a home.
The outcome measure VBA established for this goal is the percentage of veterans who say they would not have been able to purchase any home, or would have had to purchase a less expensive home, without a VA-guaranteed loan. While the results-oriented goals and measures VBA has developed to date are a positive first step, they do not allow VBA to fully assess these programs’ results. The VR outcome success rate, for example, focuses only on those veterans who have left the program, rather than on all applicants who are eligible for program services. This success rate also does not consider how long it takes program participants to complete the program. In addition, by relying on self-reported data from beneficiaries, the housing outcome measure does not provide objective, verifiable information on the extent to which veterans are able to obtain housing as a result of VBA’s housing program. VBA’s goals for its education program, by contrast, focus on the extent to which veterans are using their earned education benefit, rather than on program results. One of the purposes of this program is to extend the benefits of a higher education to qualifying men and women who might not otherwise be able to afford such an education. A results-oriented goal would focus on issues such as whether the program indeed provided the education that the veteran could not otherwise have obtained. One measure VBA could use to assess its progress in achieving this goal would be the extent to which veterans have obtained a college degree or otherwise completed their education. In the past, VA has cited the lack of formal program evaluations as a reason for not providing results-oriented goals for many of its programs. Evaluations can be an important source of information for helping the Congress and others ensure that agency goals are valid and reasonable, providing baselines for agencies to use in developing performance goals and measures, and identifying factors likely to affect agency performance.
VBA officials told us they now plan to develop results-oriented goals and measures for its three other programs—disability compensation and pensions, education benefits, and insurance coverage—by this summer. They consider these goals and measures—as well as those already developed for the VR and housing programs—to be interim, with final goals and measures to be developed following the completion of evaluations and analyses, which they plan to conduct over the next 3 to 5 years. In focusing on program results, VBA will need to tackle difficult questions in consultation with the Congress. For example, the purpose of the disability compensation program is to compensate veterans for the average loss in earning capacity in civilian occupations that results from injuries or conditions incurred or aggravated during military service. Given this program purpose, results-oriented goals would focus on issues such as whether disabled veterans are indeed being compensated for average loss in earning capacity and whether VBA is providing compensation to all those who should be compensated. However, we have reported that the disability rating schedule, which has served as a basis for distributing compensation among disabled veterans since 1945, does not reflect the effects that changes in medical and socioeconomic conditions may have had on veterans’ earning capacity over the last 53 years. Thus, the ratings may not accurately reflect the levels of economic loss that veterans currently experience as a result of their disabilities. Issues such as whether veterans are being compensated to an extent commensurate with their economic losses are particularly sensitive, according to VBA officials, and for that reason, they plan to consult with key stakeholders—including the Congress and veterans’ service organizations—over the next few months about the interim goals and measures VBA is developing.
This will continue the consultative process, which VA officials, including those from VBA, began last year as part of VA’s efforts to develop a departmentwide strategic plan. As VBA develops more results-oriented goals and measures, it also needs to ensure that it is coordinating efforts with other parts of VA as well as federal and state agencies that support veterans’ benefits programs. For example, our work has shown that

state vocational rehabilitation agencies, the Department of Labor, and private employment agencies also help veterans find employment once they have acquired all of the skills to become employable;
VA has contracted for quality reviews of higher education and training institutions that have already been reviewed by the Department of Education;
VBA relies on the Department of Defense for information about veterans’ military service, including their medical conditions, to help determine eligibility for disability compensation, vocational rehabilitation, and educational assistance programs; and
in determining the eligibility of a veteran for disability compensation, VBA usually requires the veteran to undergo a medical examination, which is generally performed by a VHA physician.

VBA also is working with VHA to improve the quality of the disability exams VHA physicians conduct; the lack of adequate exams has been the primary reason why appealed disability decisions are remanded to VBA. VBA will need to continue to coordinate with the organizations that are critical to veterans’ benefits programs to ensure overall high-quality service to veterans. In addition to requiring an agency to identify performance goals and measures, the Results Act also requires that an agency highlight in its annual performance plan the strategies needed to achieve its performance goals.
Without a clear description of the strategies an agency plans to use, it will be difficult to assess the likelihood of the agency’s success in achieving its intended results. A clear strategy would identify specific actions, including implementation schedules, that the agency was taking or planned to take and how these actions would achieve intended results. VBA is in the early stages of developing clear and specific strategies. While it has identified numerous functions and activities as its strategies, VBA has not clearly demonstrated how these efforts will lead to intended results. For example, in its current business plan, VBA consistently refers to BPR as the key to achieving its performance goals. VBA states that with the implementation of BPR, it will reduce the time it takes to complete an original claim for compensation to an average of 53 days from the current estimate of 106 days. However, VBA does not describe the specific actions needed, set a timetable for implementing needed changes, or show a clear link between BPR initiatives and reduced processing times. According to VBA officials, efforts to implement BPR are still under way and are now being reassessed. A major challenge VBA faces in developing clear and specific strategies for achieving performance goals will be effectively using BPR to identify what actions are needed to achieve performance goals and explain how these actions will lead to the intended results. Under the Results Act, agencies are expected to use the performance and cost data they collect to continuously improve their operations, identify gaps between their performance and their performance goals, and develop plans for closing performance gaps. However, in developing its performance measures, VBA has identified numerous data gaps and problems that, if not addressed, will hinder VBA and others’ ability to assess VBA’s performance and determine the extent to which it is achieving its stated goals.
For example, one goal is to ensure that VBA is providing the best value for the taxpayers’ dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans. VBA’s ability to develop complete cost information for its program activities hinges on the successful implementation of its new cost accounting system, Activity Based Costing, currently under development. In addition, VBA plans to measure and assess veterans’ satisfaction with the programs and services VBA provides. The data VBA needs to make this assessment, however, will not be available until VBA implements planned customer satisfaction surveys for two of its five programs—VR and educational assistance. In addition, VBA’s recently appointed Under Secretary for Benefits has raised concerns about the accuracy of data contained in VBA’s existing management reporting systems. Moreover, completed and ongoing IG audits have identified data system internal control weaknesses and data integrity problems, which if not corrected will undermine VBA’s ability to reliably measure its performance. In its fiscal year 1996 audit of VA’s financial statements, for example, the Inspector General reported that the accounting system supporting the housing program does not efficiently and reliably accumulate financial information. The Inspector General believes the system’s deficiencies have the potential to adversely affect VBA’s ability to accurately and completely produce reliable financial information and to effectively audit system data. Also, an ongoing IG audit appears to have identified data integrity problems with certain performance data, according to VBA officials. Specifically, in assessing whether key claims processing timeliness data are valid, reliable, and accurate, IG auditors found instances where VBA regional office staff were manipulating data to make their performance appear better than it in fact was. 
VBA officials told us they are in the process of assessing the data system’s vulnerabilities so they can take steps to correct the problems identified. Mr. Chairman, this completes my testimony this morning. I would be pleased to respond to any questions you or Members of the Subcommittee may have.

Agencies’ Annual Performance Plans Under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18, Feb. 1998).
Vocational Rehabilitation: Opportunities to Improve Program Effectiveness (GAO/T-HEHS-98-87, Feb. 4, 1998).
Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998).
The Results Act: Observations on VA’s August 1997 Draft Strategic Plan (GAO/T-HEHS-97-215, Sept. 18, 1997).
The Results Act: Observations on VA’s June 1997 Draft Strategic Plan (GAO/HEHS-97-174R, July 11, 1997).
Veterans Benefits Administration: Focusing on Results in Vocational Rehabilitation and Education Programs (GAO/T-HEHS-97-148, June 5, 1997).
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997).
Veterans’ Affairs: Veterans Benefits Administration’s Progress and Challenges in Implementing GPRA (GAO/T-HEHS-97-131, May 14, 1997).
Veterans’ Employment and Training Service: Focusing on Program Results to Improve Agency Performance (GAO/T-HEHS-97-129, May 7, 1997).
Agencies’ Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, ver. 1, May 1997).
Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997).
VA Disability Compensation: Disability Ratings May Not Reflect Veterans’ Economic Losses (GAO/HEHS-97-9, Jan. 7, 1997).
GAO discussed the Veterans Benefits Administration's (VBA) implementation of the Government Performance and Results Act of 1993. GAO noted that: (1) VBA continues to make progress in setting goals and measuring its programs' performance but faces significant challenges in its efforts to successfully implement the Results Act; (2) VBA has efforts under way to address these challenges, which if continued will help ensure success; (3) for example, VBA is in the process of developing results-oriented goals and measures for each of its programs in response to concerns that GAO and others have raised; (4) developing more results-oriented goals and measures will require VBA to address difficult and sensitive questions regarding specific benefit programs, such as whether disabled veterans are being compensated appropriately under the existing disability program structure; (5) to address these questions, VBA is continuing its consultations with Congress, begun last year in conjunction with the Department of Veterans Affairs (VA) strategic planning efforts; (6) VBA also has efforts under way to coordinate with agencies that support veterans'
benefits programs, such as the Department of Defense, in achieving specific goals; (7) to successfully implement the Results Act, VBA must also develop effective strategies for achieving its performance goals and ensure that it has accurate, reliable data to measure its progress in achieving these goals; (8) VBA is in the early stages of developing clear and specific strategies but has not yet clearly demonstrated how these strategies will help it achieve the intended results; (9) moreover, VBA does not yet have the data needed to effectively measure its performance in several key areas; (10) for example, one goal is to ensure that VBA is providing the best value for the taxpayer dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans; (11) in addition, VBA officials and VA's Inspector General (IG) have raised concerns about the accuracy of data VBA is currently collecting; (12) for example, completed and ongoing IG audits have identified data integrity problems with VBA's claims processing timeliness data; and (13) VBA is currently determining how best to address these concerns.
In 1992, the U.S. government initiated a program known as the Lisbon Initiative on Multilateral Nuclear Safety. This program is designed to help improve the safety of Soviet-designed reactors. The U.S. program is part of a larger international effort to improve the safety of these reactors. As of February 1996, 22 donors, including the United States, had pledged or contributed more than $1.4 billion in assistance to this effort. The 60 operational Soviet-designed nuclear power reactors pose significant risks because of deficiencies in their design and operation. Of greatest concern are 26 of these reactors that western safety experts generally agree fall well below accepted international safety standards and cannot be economically upgraded. These include 15 reactors known as RBMKs and 11 reactors known as VVER 440 Model 230s. The RBMK reactors—considered the least safe by western safety experts—and VVER 440 Model 230 reactors are believed to present the greatest safety risk because of inherent design deficiencies, including the lack of a containment structure, inadequate fire protection systems, unreliable instrumentation and control systems, and deficient systems for cooling the reactor core in an emergency. Most of these reactors are located in countries that do not have independent or effective nuclear regulatory bodies to oversee plant safety. Figure 1 shows the type and location of the 60 Soviet-designed reactors operating in the Newly Independent States and in central and eastern Europe. Several federal agencies share responsibility for the U.S. nuclear safety assistance program. The Department of State provides overall policy guidance, with assistance from the U.S. Agency for International Development (USAID). DOE is responsible for implementing projects involving training, operational safety, and safety-related equipment. Three of DOE’s national laboratories support the program. 
The Pacific Northwest National Laboratory provides the primary management support, and along with the Brookhaven National Laboratory and the Argonne National Laboratory, manages specific safety projects. NRC is responsible for assisting the recipient countries’ nuclear regulatory organizations. The goals of the U.S. safety assistance program have remained the same since its inception—encouraging the shutdown of the highest-risk Soviet-designed nuclear power reactors and reducing the risk of accidents. However, none of these reactors have been closed, and one has been restarted. In addition, DOE’s portion of the program has evolved and expanded to cover a broader range of safety projects. Although the United States remains committed to securing the closure of the highest-risk nuclear power reactors, none have been closed. Furthermore, Armenia recently restarted one of its VVER 440 Model 230 reactors. Department of State officials told us that progress has been made in getting closure agreements for some reactors in Bulgaria, Lithuania, and Ukraine, but these officials also recognize that it may be difficult for these countries to meet these agreements on a timely basis. For example, Bulgaria has agreed to a phased shutdown of its highest-risk reactors by 1998 as adequate replacement energy, such as hydroelectric power, becomes available. Department of State officials said the closure date will probably be delayed by 2 or 3 years because, among other things, the pace of economic reform in Bulgaria has been slow. In 1995, the G-7 nations (major industrialized nations) and Ukraine signed an agreement that includes Ukraine’s commitment to close the Chornobyl nuclear power reactor by 2000. State Department and USAID officials said that successful implementation of the agreement hinges on Ukraine’s progress in reforming the energy sector. Such progress is key to continuing the international financial assistance that may ease the impact of closure. 
DOE, national laboratory, and Department of State officials acknowledge that many of the highest-risk reactors may continue to operate for several more years. Many factors complicate U.S. and western nations’ efforts to obtain early closure of the highest-risk reactors, including (1) a lack of consensus, particularly among Russian nuclear safety experts, about the safety of their reactors; (2) concerns about the social and economic well-being of workers who would be displaced if reactors were closed; (3) a commitment to expanding nuclear power, particularly in Russia, to meet future energy needs; and (4) the need to obtain financing to support the development of replacement energy. DOE’s Director of the Office of Nuclear Energy, Science, and Technology said that Russia does not intend to close its highest-risk reactors for many years, and he believes that the United States should continue to provide assistance so that these reactors can operate as safely as possible until they are closed. Department of State and USAID officials also noted that it is sound policy to continue to reduce the risks of accidents at the highest-risk reactors. In keeping with this policy, DOE plans to increase technical assistance to RBMK plants, including Chornobyl. The Chornobyl initiative is part of a multinational effort to provide safety upgrades that can be completed quickly. DOE is planning to spend about $13.8 million at the Chornobyl nuclear power plant, including upgrades for fire safety at one of the two operating reactors and instituting operational safety and training programs for both reactors. In total, DOE plans to spend about $33 million for safety parameter display systems for plants with RBMK reactors. This system provides information, which is displayed on a monitor, to operators about plant conditions that are important for safety. 
In addition, DOE plans to spend about $8.5 million on a project to transfer western maintenance practices, training methods, and technology to staff at RBMK reactors. DOE officials noted that the project will not extend the life of the RBMK reactors but will improve safety. Our review of the RBMK maintenance initiative indicates that the repair or replacement of any component that a plant relies on would support the plant's continued operations. While these efforts will not by themselves extend the lifetime of the plant, they will serve to keep the plant's components in service longer. In 1993, the then-Chairman of NRC said that it is difficult to draw a fine line between short-term safety improvements and upgrades that could encourage a plant's operator to think in terms of long-term operations. DOE's program was initially viewed as a short- to mid-term effort totaling between $25 million and $40 million, with about $100 million in additional funding planned. However, DOE's portion of the program has grown because of the complexities and challenges involved in improving the safety of Soviet-designed plants. As of March 1996, DOE had initiated more than 150 projects in this program, and DOE's Director of the Office of Nuclear Energy, Science, and Technology told us that approximately $500 million would be required over the next 10 years to address the program's long-term safety and training needs. (According to Department of State and USAID officials, this estimate has not been agreed upon by other U.S. government agencies participating in the program.) In contrast, NRC views its regulatory assistance program, totaling about $28 million, as limited in terms of its size and scope. DOE and Pacific Northwest National Laboratory officials said that while it remains important to address short-term safety problems at the plant level, it is equally important to approach safety at a systemic level to help bring about sustainable improvements.
As a result, the program has placed increased emphasis on transferring U.S. technology and encouraging the recipient countries to analyze and fix their own safety problems. According to DOE officials, the technology transfer aspect of the program is having a positive impact. DOE has developed a short- to mid-term plan that provides an overview of the program's objectives, performance measurements, and ongoing projects. However, DOE officials have not yet established a long-term plan linking the program's objectives to measurable goals or providing a date for meeting those goals, although they plan to do so. As a result, it is unclear how DOE will demonstrate when and how it has achieved the program's goals. It is also unclear, without such a plan, when the program will end. According to DOE's Deputy Associate Director for International Nuclear Safety, the 10-year approach is based on an intuitive view of the time needed to complete the program's overall objectives. DOE officials told us in September 1996 that the agency has begun to develop a plan that will link objectives to goals and set a date for achieving these goals. DOE and NRC have received $208 million for their programs in the Newly Independent States and countries of central and eastern Europe. USAID has provided about 80 percent of the total funds received through various interagency agreements with DOE and NRC. The remainder of the funds has come from DOE ($30 million) and the Department of Defense ($11 million). In fiscal year 1996, DOE began receiving direct appropriations for the program but is still obtaining some funds from USAID for special projects, such as Chornobyl. (App. I provides greater detail on DOE's and NRC's costs for the safety assistance program, and fig. I.1 provides information about the Chornobyl project.) The U.S.
nuclear safety program has focused on several types of assistance, including management and operational safety, engineering and technology (including fire safety and other plant-specific improvements), plant safety evaluations, and regulatory enhancements. Operational improvements can be implemented at all plants regardless of reactor type. Plant-specific measures are generally directed toward reactors such as the oldest RBMKs and VVER 440 Model 230s. As figure 2 shows, the greatest percentage of the funds—36 percent—has been or will be spent on management and operational safety, which includes training and safety procedures. As of March 31, 1996, DOE had obligated $119.2 million and had spent $78.3 million of the $180.1 million it had received. NRC had obligated $16.5 million and spent $11 million of the $27.9 million received. (See table I.1.) Of the combined agencies' expenditures, $42.2 million was for nuclear safety equipment and other products. More than half of the $42.2 million was for training or training-related items, such as simulators. Less than one-third of this amount was for safety-related hardware, such as fire safety or other plant-specific equipment. (See fig. I.2.) Other program-related expenditures were for labor, travel, overhead, and other costs. (See tables I.2 and I.3.) DOE, NRC, and Pacific Northwest National Laboratory officials recognize that their obligation and expenditure rates for their respective programs—particularly in Russia and Ukraine—have lagged over time.
They stated, however, that several factors have contributed to the delays, including (1) concerns about nuclear liability in the United States that led to a change in DOE's program management and stalled many projects; (2) logistical problems with establishing assistance programs in Russia and Ukraine; (3) the need to develop working relationships with Russian and Ukrainian organizations, some of which have experienced significant turnover and/or attrition of key personnel; and (4) procurement delays in the United States. DOE, national laboratory, and NRC officials noted concerns about their programs' unobligated balances. As of March 31, 1996, DOE's unobligated balance was about $61 million and NRC's was about $11 million. DOE intended to obligate all currently unobligated funds by September 30, 1996. While NRC has obligated 88 percent of its funds for central and eastern Europe, its obligation rates for Russia and Ukraine are significantly lower. NRC expects to obligate its available funds, primarily for Russia and Ukraine, over the next few years. Of the 13 safety projects we reviewed, 11 have been delayed in their implementation and one has been completed. Projects have been delayed largely because of difficulties in getting U.S.-supplied equipment cleared by Russian and Ukrainian customs officials. While some of these difficulties continue, several projects are moving forward. Despite the recent progress, it is too early to assess the impact of these projects on safety. Eleven of the 13 safety projects implemented by DOE and NRC have been delayed, and 3 are more than 2 years behind schedule. At the time of our review, one project, a study of nuclear energy options for Russia, had been completed. The study concluded that, among other things, it was in Russia's economic interest to upgrade some of its operating nuclear power reactors and to close and decommission some of its higher-risk reactors. (See app.
II for a summary of the projects and the reasons for the delays). A number of factors have delayed the implementation of these projects, including (1) problems with customs, (2) foreign officials’ imposition of unanticipated and/or burdensome requirements, and (3) the inability of Russia and Ukraine to provide adequate financial support for some projects. Despite these impediments, several projects are now progressing more quickly. (DOE’s projects are discussed in detail in app. III, and NRC’s are discussed in app. IV.) For the 13 projects we reviewed, DOE has been requested to pay at least $505,000 in unanticipated costs. These costs include $442,000 to replace or refurbish unusable simulator parts in Ukraine, $34,000 to store U.S. equipment in European warehouses pending the resolution of customs problems in Russia, $26,000 for airfare to enable Ukrainians working on a year-long simulator project in the United States to return to Ukraine or to have their spouses visit them in the United States, and $3,000 to the Ukrainian customs organization for fees to authorize the release of equipment. In 6 of the 13 safety projects we reviewed, Russian and/or Ukrainian customs officials did not release U.S. equipment to the nuclear power plants in a timely manner. Under the terms of agreements that the United States entered into with Russia and Ukraine, this equipment is to enter into these countries duty-free. Local customs officials in Russia and Ukraine have not consistently recognized the duty-free status of this equipment. Department of State and USAID officials told us that other U.S. assistance programs in the Newly Independent States have experienced similar problems. Customs problems have included the following: Russian customs officials impounded 100 fire suits and related fire safety equipment, valued at $110,000, until customs duties were paid. This equipment was destined for the Smolensk nuclear power plant. 
Since no duties were paid, customs officials turned the equipment over to a Russian court, which donated the equipment to a local fire company. Realizing it could not use the equipment, the fire company eventually sent the gear to the nuclear power plant, about 1 year after it had been shipped to Russia. A U.S. contractor placed emergency batteries and related equipment in a storage facility in the Netherlands for several months pending the resolution of customs problems in Russia. The shipper requested the reimbursement of about $11,300 for storage costs, but DOE had not paid these costs at the time of our review. Russian customs officials have been holding a sample high-temperature suit and related equipment, valued at about $26,000, for over 2 years. The equipment, which was examined by Smolensk nuclear power plant officials in May 1994, is no longer considered useful by DOE except for demonstration purposes. As a result, DOE has not pressed Russian authorities to release it. No customs duties or storage fees have been imposed. Fire-retardant material, valued at $23,650, was stored in Finland for several months pending the resolution of customs problems. The shipper has claimed about $23,000 in storage costs, and this claim is being reviewed by the Pacific Northwest National Laboratory. The Department of State’s Senior Coordinator for Nuclear Safety Assistance said that customs problems have been raised with senior-level U.S. and Russian officials. Customs difficulties have been assigned to the Science and Technology Committee of the Gore-Chernomyrdin Commission for resolution. The Vice President has repeatedly mentioned his concerns to Russian Prime Minister Chernomyrdin, and both DOE and the Department of State have attempted to find a generic solution to the customs issue. However, pending such a resolution, case-by-case arrangements will still be required. 
Recently, DOE and Pacific Northwest National Laboratory officials have focused greater attention on resolving customs problems. In February 1996, DOE and Ukrainian authorities agreed to a standardized process under which nuclear safety-related equipment would be cleared duty-free by customs in Ukraine. Laboratory officials said that some equipment had recently been shipped successfully to Russian plants using the U.S. Embassy in Moscow to facilitate the process, although these officials do not consider this approach to be a long-term solution. Some U.S. industry officials also noted that customs problems have decreased in recent months. Because standardized customs procedures do not yet exist in either Russia or Ukraine, Pacific Northwest National Laboratory officials recognize that problems may continue to occur. These officials noted that their representative in Russia does, among other things, help resolve customs problems by working with Russian officials. Although the Pacific Northwest National Laboratory has a representative in Ukraine, his responsibilities are narrowly defined, and he does not routinely monitor customs issues. A Laboratory official said that the Ukraine representative’s position may be expanded to more closely resemble the responsibilities of the program’s representative in Russia. In addition to customs problems, other factors have contributed to delays affecting 5 of the 13 projects we reviewed. For example, DOE’s project to assist Russia in developing emergency operating instructions at a pilot nuclear power plant was delayed, in part, because the Russian organizations responsible for approving the instructions have been slow to act and have not given the project priority status. Although the instructions were drafted in 1992, a lengthy process of verification, validation, training, and regulatory approval delayed the implementation of a partial set of instructions until mid-1996. 
In another case, Russian authorities insisted that some U.S. fire safety equipment planned for shipment to the Smolensk plant had to be tested and certified in Russia. This equipment had already been approved for U.S. nuclear power plants. Testing of the equipment was delayed for several months because of disagreements over funding. During this period, Russian authorities refused to allow the U.S. contractor to visit the plant until the matter was resolved. A few U.S. contractors told us that DOE and the Department of State have not always been aggressive enough in helping resolve problems. In the cases of the emergency operating instructions and the fire safety equipment for the Smolensk plant, these contractors believed that DOE should have been more active in working with the appropriate Russian organizations to help resolve project delays sooner. One contractor noted that higher-level DOE officials needed to work more closely with key Russian officials to demonstrate the U.S. government’s commitment to the projects’ success. He noted that a key DOE nuclear safety official helped move the emergency operating instruction project forward after he had discussed the project with Russian officials from Rosenergoatom, the organization responsible for most nuclear power plants’ operations in Russia. In 3 of the 13 projects, Ukraine and Russia have been unable to adequately finance or support their share of the project. For example, a full-scope simulator for the Khmelnytskyy nuclear power plant in Ukraine, valued at $12.7 million, has been delayed partly by the plant officials’ inability to fulfill their commitments. As part of the project, the plant had agreed to provide certain components to be integrated into the simulator. However, the parts the plant provided were corroded. DOE paid for the replacements. In addition, the plant-supplied control room panels did not match the panels needed for the simulator; DOE paid for their modification as well. 
The simulator project—as well as other projects for which the United States has agreed to cover additional costs—raises questions about the ability of host countries to meet commitments in other ongoing and planned cost-sharing projects for the safety program. For example, other simulator projects for Russia and Ukraine are being developed on a cost-sharing basis with DOE. The total estimated DOE contribution to these simulator projects is about $24 million; Ukraine's contribution is about $12 million; and Russia's contribution is about $7.5 million. DOE's simulator project manager said that some of the recipient countries' contributions will be "in-kind" contributions of labor, rather than financial outlays. However, he said that DOE recognizes that these projects are risky and that DOE may have some additional costs associated with them. In October 1994, a USAID Inspector General's report raised similar concerns about cost-sharing ventures. The report recommended that USAID, in coordination with DOE, ensure the development of procedures defining and documenting the role and use of U.S. government funding vis-à-vis host countries' contributions. According to DOE, the work plans for each project now include a description of the host country's expected contribution, which in many cases covers labor costs. Despite delays, some projects have shown results, and the pace of implementing a number of these safety projects has accelerated in recent months. For example, DOE and Brookhaven officials told us that the cadre of trainers at the Balakovo training center in Russia has grown from fewer than 10 to about 70 since the project began and that the plant's management is committed to the training program. In another project, NRC has worked closely with Russia's nuclear regulatory body, Gosatomnadzor, to develop a legislative basis for nuclear regulation and legal enforcement.
NRC and Gosatomnadzor officials view this initiative as a significant first step toward the creation of an independent nuclear regulatory body in Russia. Furthermore, we were told that a significant amount of fire safety equipment has recently been delivered and installed at the Zaporizhzhya nuclear power plant in Ukraine and emergency power equipment has recently been installed at the Kola plant in Russia. It is too soon to assess the extent to which the projects we reviewed are improving safety in Russia and Ukraine because most of these projects have not been completed. However, DOE, NRC, and national laboratory officials—as well as Russian and Ukrainian officials we met with—believe that the projects are beneficial. For example, DOE officials believe the fire safety equipment will reduce the likelihood of fires and improve detection and fire-fighting capabilities. A Russian official from the Smolensk nuclear power plant told us that the U.S. fire suits have increased fire fighters’ confidence. DOE officials said that they are attempting to measure safety improvements and to establish meaningful performance measures for the program. However, they said that the impact on plant safety of training, procedures, and changes in the safety culture is not clearly measurable. DOE’s Director, Office of Nuclear Energy, Science and Technology and officials at the Pacific Northwest National Laboratory said that the lack of reliable baseline safety data makes it impossible for DOE to quantify the extent to which safety has been improved. Laboratory officials believe that measurable safety improvements may take 2 to 5 years. DOE has established performance measures that primarily gauge performance in the technical work areas of the program by accounting for the number of plants or plant operators carrying out various tasks or projects. However, DOE has not yet reported on the results of these specific measurements. 
The Pacific Northwest National Laboratory is also attempting to gauge the impact of its program by gathering anecdotal evidence of improvements in nuclear safety. NRC has established results-based measurements that will be used to evaluate its regulatory assistance program. The U.S. nuclear safety assistance program has evolved into a longer-term effort than initially envisioned, but DOE has not yet developed a plan that reflects this effort. Although DOE has estimated that it will need $500 million over the next 10 years, it has not articulated how it will achieve its objectives over this period. DOE’s development of such a plan—which would link the program’s goals with anticipated costs, outcomes, and time frames—would go a long way towards gauging how the Department’s assistance is contributing to the improved safety of Soviet-designed reactors. It would also serve to provide a better estimate of how much assistance is required to meet the program’s objectives. The U.S. nuclear safety program has faced many challenges and impediments, such as the lack of a standardized customs process in Russia and Ukraine. While DOE has taken a more active role in resolving customs matters, this problem persists, contributes to delays, and increases the program’s costs. The Pacific Northwest National Laboratory has placed a program representative in Russia who helps resolve customs problems. Because numerous customs problems have occurred in Ukraine, we believe that a Laboratory representative in Ukraine could provide similar program assistance. To improve the management of the nuclear safety assistance program, we recommend that the Secretary of Energy take the following actions: Develop a strategic plan that (1) clearly links the program’s goals and objectives to performance measurements, (2) provides well-defined time frames for completing the program, and (3) projects the anticipated funds required to meet the program’s specific objectives, including the estimated U.S. 
contributions to cost-sharing arrangements that take into account the recipient countries' ability to realistically meet resource commitments. Facilitate the timely and duty-free delivery of U.S. safety equipment to nuclear power plants in Ukraine. Specifically, should the position assume broader responsibilities in the future, the Pacific Northwest National Laboratory's in-country representative in Ukraine should work with the appropriate government authorities to resolve customs problems as part of the position's assigned duties. Part of this monitoring responsibility could include periodic visits to the nuclear power plants in Ukraine. We provided copies of a draft of this report to the Departments of Energy and State, USAID, and NRC. The Department of State and NRC generally agreed with the report's findings and provided clarifying information that we have incorporated into our report, as appropriate. DOE and USAID provided written comments. DOE disagreed that the U.S. assistance program posed a dilemma because it may encourage the continued operation of the same reactors that the United States wants to see closed as soon as possible. DOE noted that the U.S. equipment being provided does not extend the life of the reactors. We did not assert that the equipment would extend the operating life of the reactors, but we believe that certain types of equipment could be used to support the continued operation of higher-risk nuclear power plants. For example, DOE's RBMK maintenance initiative provides the equipment, training, and technology that enable the plant's components to remain in service longer, thus supporting the plant's continued operations while improving plant safety. (See app. VI for DOE's comments and our response.) USAID noted that our report (1) understated the overall level of progress being made by the U.S.
assistance program and (2) gave the impression that no progress is being made toward obtaining the closure of the highest-risk Soviet-designed reactors. Regarding the first point, our report noted that several projects in our sample had made progress, resulting in, for example, the installation of safety-related hardware. However, we also noted that it was premature to assess the impact of these projects because only one had been completed. Regarding the second point, our report provided several examples of closure commitments that have been made but also stated that it will be difficult for the countries to meet specific closure dates. We also noted that, to date, no reactors have been closed, and one was recently restarted. (See app. VII for USAID’s comments and our response.) To address our objectives, we interviewed officials and obtained documentation from the Department of State, USAID, DOE, NRC, and the Brookhaven and Pacific Northwest National Laboratories. We also met with some government officials and nuclear power plant personnel from Russia and Ukraine. We reviewed 13 of approximately 196 ongoing DOE and NRC nuclear safety projects to determine how they are being implemented and are contributing to improved nuclear safety. Agency officials agreed that our selection included projects that represent the safety program’s highest priorities. Our scope and methodology are discussed in detail in appendix V. We performed our review from January 1996 through August 1996 in accordance with generally accepted government auditing standards. We plan no further distribution of this report until 15 days from the date of this letter unless you publicly announce its contents earlier. At that time, we will send copies of this report to other interested congressional committees, the Secretaries of State and Energy, the Chairman of NRC, the Administrator of USAID, the Director of the Office of Management and Budget, and other interested parties. 
We will also make copies available to others on request. Please contact me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix VIII. This appendix provides detailed information on the Department of Energy's (DOE) and the Nuclear Regulatory Commission's (NRC) planned and actual costs for the U.S. nuclear safety assistance program to improve the safety of Soviet-designed nuclear reactors. The costs are shown, by funding source and cost category, as a percentage of the funds received that has been spent. Note: The figures do not include U.S. contributions to the Nuclear Safety Account, totaling $25 million, which is administered by the European Bank for Reconstruction and Development, or $300,000 for NRC projects in Kazakhstan and Armenia, and they cover both the Newly Independent States and the central and eastern European countries. Abbreviations: DOE - DOE Headquarters Activity; PNNL - Pacific Northwest National Laboratory; BNL - Brookhaven National Laboratory; ANL - Argonne National Laboratory; CH - Chicago Area Office; BAO/ORNL - Brookhaven Area Office and Oak Ridge National Laboratory. Labor includes salaries, wages, and pensions that are directly chargeable to the international nuclear safety program; DOE headquarters employees' salaries are not charged directly to the program. Travel includes travel and per diem costs, foreign or domestic, of DOE and laboratory officials but does not include travel and per diem costs of foreign nationals under the program; those costs are included in "materials/subcontracts." Materials/subcontracts includes directly applicable purchase orders, subcontracts, and consulting services; contractor labor, travel, and overhead charges are included in this category, as is $38.8 million in safety-related equipment and products. Other costs include certain centralized services, such as translation of documents, and charges for organizational overhead, general and administrative expenses, and service assessments. Of the projects we reviewed, valued at between $0.6 million and $13.5 million each, most were delayed, with planned completion dates ranging from September 1996 to June 1999; one was completed in June 1995. This appendix discusses nine nuclear safety projects we reviewed that DOE has completed or is implementing in Russia and Ukraine. These projects, which have a total value of about $56 million, include a training center in Russia, a full-scope simulator in Ukraine, emergency power systems in Russia, fire safety equipment in Russia and Ukraine, spent fuel storage in Ukraine, emergency operating instructions for Soviet-designed nuclear reactors, and a study of nuclear energy options for Russia. The Balakovo training center in Russia is one of the two regional training centers established by the 1992 Lisbon Initiative on Multilateral Nuclear Safety. The purpose of the training center is to teach a systematic approach to training—the measurable, performance-based training program used for U.S. nuclear power plant personnel—to Balakovo plant personnel. The systematic approach to training is used as a method to develop or improve training programs for operations, maintenance, and other technical support personnel. DOE expects that this training approach will eventually be transferred from Balakovo to the other VVER-1000 designed plants in Russia. As of March 31, 1996, DOE had allocated $9.4 million for the Balakovo training center and had obligated $9.7 million. About $4.3 million had been spent, as seen in table III.1. The Balakovo training center project was started in April 1993 and is expected to be completed by June 1997. However, training for other VVER 1000 plant operators in Russia may continue beyond 1997. According to DOE and Brookhaven officials and representatives from Sonalysts, Inc.
(the U.S. contractor), no major delays have occurred in implementing the project because of the high level of cooperation from Balakovo plant management and Russian authorities. By working cooperatively with Balakovo plant personnel, DOE, Brookhaven, and Sonalysts, Inc. developed 12 job-specific training courses and 6 general training courses for the plant. As of May 1996, 6 of the 12 job-specific training courses and 3 of the 6 general courses had been completed. Along with the training, DOE is providing both course-specific equipment and equipment for the training center, such as circuit boards, soldering work stations, laser alignment equipment, computers, printers, and audiovisual equipment. Equipment shipped from the United States to the Balakovo training center has experienced some customs problems. According to the Brookhaven project manager, in one instance the plant paid a substantial amount to get a shipment of equipment out of customs storage. Two shipments of spare parts for a full-scope simulator for the training center, valued at $45,500, have been held in U.S. airport storage since August and September 1995 awaiting approval from Rosenergoatom (REA) for shipment. However, this Brookhaven official told us that, more recently, some equipment had been shipped successfully to the plant by using the U.S. Embassy in Moscow to facilitate clearance. For example, about $22,000 in soldering equipment that had been held in storage in Helsinki, Finland, since July 1995, pending resolution of the customs issue, was delivered to the plant in April 1996. DOE considers the Balakovo training center to be a success because of the plant management’s commitment to the training. In a March 1996 letter to DOE, the Balakovo plant manager stated that the training program was a success because of the effective interaction between the Russian and U.S. sides and the fact that the training approach had been applied to specific conditions at the plant.
According to the Brookhaven project manager, the size of the Balakovo training center staff has grown from fewer than 10 in 1992-93 to about 70 in 1996, and instructors’ salaries are the same as those of plant operators. As of March 1996, about 600 personnel at Balakovo had been trained on material developed with U.S. assistance. Plant personnel have also begun to develop training material for additional duty areas and to train other personnel. Approximately 5,000 plant personnel require job-specific training. In a November 1995 meeting with DOE, officials from Russia’s Ministry of Atomic Energy (Minatom) said that they were pleased with the training program at Balakovo and wanted to apply the training method to other VVER-designed plants in Russia. In April 1996, Brookhaven’s project manager met with Russian officials and agreed to a plan to transfer the training approach to other Russian plants. Because Balakovo does not have the resources to transfer the training approach to other plants, DOE agreed to continue to assist Russian organizations and Balakovo with additional training. The other regional training center in Khmelnytskyy, Ukraine, established by the 1992 Lisbon Initiative, will feature a computer-based simulator for the VVER-1000 reactor, as seen in figure III.1. DOE is purchasing a full-scope simulator for the Khmelnytskyy training center so that plant personnel can upgrade and maintain their operational skills in dealing with routine and abnormal events at the plant. DOE officials and representatives from S-3 Technologies, Inc. (the U.S. contractor) believe that the transfer of simulator technology to Ukrainian personnel will have long-term benefits. The simulator technology is being transferred to a Ukrainian team that will maintain and modify the Khmelnytskyy simulator and build and maintain simulators for the other nuclear power plants in Ukraine.
As part of this technology transfer, a Ukrainian company has learned how to manufacture control room panels, a key component of the simulator, to U.S. standards. The United States and Ukraine are working together to develop the simulator. S-3 Technologies, Inc. is designing, developing, testing, and installing the simulator in Ukraine and is providing training and support to the technology transfer team. As part of the simulator development activities, the Ukrainians agreed to provide plant data, control room panels, and instruments, and to host a U.S. team in Ukraine for about a year. Brookhaven National Laboratory subcontracted with a Ukrainian company to modify the control room panels for the simulator. The Ukrainians also agreed to construct a building at the Khmelnytskyy training center to house the simulator. As of March 31, 1996, DOE had allocated $11.7 million for the simulator project against a total estimated cost of $12.7 million. It had obligated $11.7 million and had spent $8.5 million of this amount. Table III.2 shows these expenditures in greater detail. Most of the funding has come from the Department of Defense, which transferred $11 million to Brookhaven to build the simulator in November 1994. DOE allocated an additional $500,000 for development of the project specifications. Because of project delays and unanticipated costs, DOE has since added $1.2 million to the project. Included in these expenditures is $596,724 that DOE paid for travel, lodging, and other expenses for 22 Ukrainian technology team members who lived in the United States for approximately one year. (See table III.3.) Of that amount, $26,298 was for airfare that DOE or S-3 Technologies, Inc. paid for 12 Ukrainian spouses who traveled to the United States, and for 4 team members who traveled home, for visits of about one month. According to a representative of S-3 Technologies, Inc., one member of the project team, a Ukrainian computer specialist, left the program to work for a U.S.
computer firm and was replaced. PNNL officials, who authorized the payments, were not aware that the Ukrainian team member had left the project under these circumstances until we brought it to their attention. They noted, however, that they were led to believe that the team member had personal problems that caused his absence from the program. Furthermore, these officials stated that no inappropriate payments were made to the team member. The simulator project was originally expected to be completed by December 1996, and, according to DOE and Brookhaven project officials, is now expected to be completed by November 1997. Several events have contributed to project delays, including (1) a 9-month delay in receiving Department of Defense funds, (2) the reluctance of Khmelnytskyy plant management to provide complete and accurate plant data in a timely manner, (3) the unanticipated modification of Ukrainian-supplied control room panels, (4) the need to replace defective simulator instruments supplied by Ukraine, and (5) customs problems. The Ukrainians have had difficulties in fulfilling their agreements in a timely manner. This has led to project delays and unanticipated costs. An S-3 Technologies representative told us that the Khmelnytskyy plant management was hesitant to release plant data to his company and had asked for payment for the data collection effort. S-3 Technologies eventually received the plant data without paying for it. Furthermore, the plant did not provide complete and timely delivery of the control room panels and instruments that are required for assembling the simulator. DOE paid $389,000 to modify the control panels. In addition, because the crates containing the instruments and switches had been stored outside for 3 years, the switches were rusted and needed to be replaced, at a cost of about $52,600 to DOE. DOE has also agreed to pay about $30,000 for lodging costs for the U.S. contractor personnel who will be living in Ukraine for one year. 
Ukraine had originally agreed to cover these costs as well. Difficulties with customs have also contributed to delays in assembling the control room panels for the simulator. In a 1995 letter to DOE, an official with Ukraine’s state-owned nuclear power utility said that clearance of certain equipment had been held up because representatives of the U.S. Agency for International Development (USAID) mission in Kiev were unaware that the deliveries were part of DOE’s technical assistance program. In February 1996, the Khmelnytskyy plant’s general director noted in a letter to Brookhaven National Laboratory that the simulator project would be jeopardized if the customs problems were not resolved. In February 1996, DOE signed a protocol with Goscomatom, Ukraine’s nuclear power utility, to facilitate future deliveries of nuclear safety equipment from the United States to Ukraine. In January 1996, two shipments of simulator parts were delivered to a customs warehouse in Ukraine, but they were not released until U.S. officials visited the plant in April 1996. DOE’s project manager told us that, during this visit, he agreed to have S-3 Technologies pay customs fees of between $2,000 and $3,000 to release a third shipment of simulator equipment. DOE and Brookhaven officials and representatives of S-3 Technologies are concerned that the training center building that will house the simulator will not be completed on time. During U.S. officials’ July 1996 visit to the plant, Pacific Northwest National Laboratory representatives reported that the training center building would be completed in August 1996. The control panels are also expected to be completed and installed in August 1996. About 16 to 18 S-3 Technologies personnel will travel to the Khmelnytskyy plant in the fall of 1996 to integrate the simulator hardware with the software and to test and verify the simulator. The Khmelnytskyy team members will assist the U.S. contractors in testing the simulator in Ukraine.
In addition, members of the technology transfer team representing Goscomatom will work at the Ukraine simulator support center in Kiev developing other simulators for Ukraine, such as one for the Rivne nuclear power plant. DOE is providing batteries and related equipment to improve the emergency power supply systems of the Kola and Kursk nuclear power plants in Russia. Because the performance of most nuclear power plant safety systems depends on the availability of emergency power, the batteries provide electricity so that safety systems can function during a power outage. The equipment will replace existing Russian-manufactured batteries that are not enclosed and present potential safety hazards. In the case of Kola, the new batteries are seismically qualified and meet U.S. safety standards. At Kursk, batteries will be of two types: seismically qualified and commercial. In both projects, the U.S. contractor, Burns and Roe, is responsible for developing specifications, purchasing and delivering the equipment, and monitoring equipment installation. The major pieces of equipment are being purchased in the United States. Figures III.2 and III.3 show existing batteries and U.S.-provided replacements. DOE has allocated approximately $3.5 million for the Kola project and about $2.1 million for Kursk. As of March 31, 1996, DOE had obligated $3 million and spent about $2.9 million for Kola and had obligated $2.1 million and spent about $607,000 for Kursk. Tables III.4 and III.5 provide an analysis of expenditures for the two projects. DOE’s project manager told us that both the Kola and Kursk battery upgrade projects were initially delayed for about one year because of Brookhaven’s and U.S. contractors’ concerns about nuclear liability. These concerns resulted in the transfer of overall program management from Brookhaven to Pacific Northwest National Laboratory. The Kola project, started in 1994, was originally expected to be completed by July 1995.
DOE currently estimates that the project should be completed by September 1996. After the initial delay, DOE has proceeded according to schedule, and the manufacture of the batteries is to be completed by July 1996. Installation will be completed when the reactor goes off line temporarily for routine maintenance. A representative from the U.S. contractor, Burns and Roe, noted that delays in the Kola project were also due to the plant management’s decision to perform routine maintenance, which forced the contractor to reschedule battery installation. The Kola project, like other projects we reviewed, also experienced customs problems, but these problems did not affect the project schedule, which had already been delayed. For example, in November 1995, equipment for the Kola project was shipped from the United States to Russia. As instructed by DOE, Burns and Roe held the equipment at a storage warehouse in the Netherlands pending resolution of customs issues. The equipment remained at the warehouse from November 1995 through early January 1996, when it was cleared for release and shipped to the plant. The warehouse charged the U.S. shipping agent $11,292 for storage fees, and DOE will be asked to reimburse the shipper for the fees. Despite these customs problems, according to the Burns and Roe representative, all of the Kola-related equipment has been delivered and the batteries for one of the units have been installed. This includes approximately $1.6 million for batteries, battery trays, and switchboards. Approximately $100,000 in similar equipment has been delivered to Kursk. The remainder of the equipment is scheduled for shipment in August 1996. DOE estimates that the Kursk project will be completed in December 1996. DOE is providing fire safety equipment to the Smolensk nuclear power plant in Russia to reduce the risk of fires and minimize their effect should they occur.
Smolensk was selected as a pilot project by DOE in 1993, with the expectation that equipment would be delivered quickly to demonstrate the effectiveness of the U.S. safety program. In 1993, Brookhaven, which was initially assigned responsibility for project management by DOE, selected Bechtel Power Corporation to implement the project. Major items of equipment to be provided include fire suits, smoke detectors, fire doors, fireproofing materials, and radios. Figure III.4 shows an existing fire door at Smolensk, and figure III.5 shows a sample replacement door. Fire-fighting suits, already delivered to the plant, are shown in figure III.6. The estimated value of the project is $4.5 million. As of March 31, 1996, $2.7 million had been obligated, and about $1.3 million had been spent, as shown in table III.6. Numerous factors have delayed implementation of the Smolensk fire safety project, including (1) nuclear liability concerns that led to changes in U.S. project management, (2) customs problems, and (3) the lack of cooperation from key Russian organizations. According to a Bechtel representative, nuclear liability concerns—and the resulting change in DOE’s project management—contributed to an 8-month delay because additional contracts were not awarded to continue the work. As a result of these problems, the project has missed performance-related milestones. The project, which began in August 1993, was initially expected to be completed by December 1994. DOE now estimates that the project should be completed by November 1996. A Smolensk plant official told us that he appreciated the U.S. assistance but was disappointed that the fire safety equipment had not been delivered sooner. He said that the Russian bureaucracy has created many difficulties for the plant and that the equipment is urgently needed.
With respect to customs problems, in October 1994, a shipment of fire safety equipment, including 100 fire suits and fire hose nozzles valued at about $110,000, was impounded by Russian customs officials pending payment of the applicable customs taxes. When the Smolensk plant refused to pay, the customs office brought the matter before a local court. The customs office noted that as a private enterprise, it was responsible for generating income. The court agreed and upheld the assessment. Because no duty was paid, the court subsequently declared the items abandoned and the property of the state. In January 1995, the court released the shipment to the fire department of a regional capital located approximately 100 miles from the nuclear power plant. After discovering that the fire suits had the name of the power plant’s fire safety brigade labeled across the back in English, the city fire department notified the power plant. In September 1995, almost one year after the shipment had arrived at Smolensk, the equipment was sent to the nuclear power plant’s fire brigade. Because of the uncertainties associated with the customs clearance process, DOE decided to hold a $23,650 shipment of fire protection equipment destined for Smolensk in Helsinki, Finland. This equipment, primarily fire-retardant material, was stored at Helsinki in March 1995 and delivered to the plant in early January 1996. According to the Pacific Northwest National Laboratory, the shipper has reported storage costs of about $23,000, and the Laboratory is reviewing the reasonableness of this claim. In another instance, approximately $5,000 in fire safety equipment—including one sample fire door—was lost on its way to the Smolensk nuclear power plant. Also included in the missing items were power transformers, cable wire, and a power distribution panel for a fire detection system. According to DOE, the items were fully insured and were replaced with no operational or financial impact on the project. 
DOE and Pacific Northwest National Laboratory representatives and Bechtel officials also cited difficulty in working with Rosenergoatom to obtain Russian certification for U.S.-supplied equipment. In April 1994, Rosenergoatom informed Bechtel that a considerable portion of U.S.-supplied equipment would have to be tested in Russia before it could be installed. The equipment identified for testing included fire brigade clothing, floor coating material, and fire doors. In April 1994, Rosenergoatom officials informed U.S. officials that none of the sample equipment previously left at the plant had been officially approved or certified because of the lack of funding. Rosenergoatom requested that the United States pay for the testing. Brookhaven National Laboratory agreed, but payment was delayed because of nuclear liability issues and the eventual change in DOE’s program management. The Pacific Northwest National Laboratory subsequently agreed to pay about $60,000 to Rosenergoatom for testing, but the contract was never signed because Rosenergoatom and the testing facility were unable to agree on how to share the anticipated U.S. funds. During this period, Rosenergoatom refused to allow the U.S. contractor to visit the plant. In December 1995, after an absence of more than 2 years, Bechtel representatives were allowed to visit. According to DOE officials, Rosenergoatom and the plant are now accepting most of the materials without additional testing. Both U.S. officials and Russian officials at the Smolensk nuclear power plant believed that the fire safety equipment will ultimately have a beneficial effect, even though only a portion of the fire safety equipment has been delivered or installed. A Smolensk plant official told us that the fire suits have already had an impact: with improved equipment, the firefighters are less hesitant to perform their jobs. He also noted that the plant needs more U.S.
equipment than is currently planned in order to make comprehensive safety improvements. DOE is assisting Ukraine in developing on-site spent-fuel storage capacity at the Zaporizhzhya nuclear power plant so that the plant does not have to continue to pay for spent-fuel reprocessing in Russia. According to DOE officials, without adequate spent-fuel storage capacity at Zaporizhzhya, Ukraine might be forced to shut down some of its plants and continue to operate Chornobyl to compensate for lost power production. A key objective of the project is to transfer technology so that Ukraine can eventually manufacture the entire storage unit. This unit consists primarily of a concrete outer shell, a steel inner cask liner, and an inner basket, which holds the spent-fuel rods. For the DOE-funded portion of the project, U.S. vendors are supplying most of the metal materials and manufacturing major internal cask components for three spent-fuel storage cask systems. The components will be shipped to Ukraine for final assembly. Ukraine will be responsible for constructing the three concrete outer shells that will hold the spent fuel and will perform welding tasks. Ukraine is also providing certain materials and hardware. DOE officials believe that once the technology is fully transferred, Ukraine will be able to manufacture about 12 casks per year, eventually allowing the plant to become self-sufficient in managing spent fuel. Additionally, DOE officials believe that, if successful, Ukraine could export the casks. While DOE officials noted that the project is not directly related to improving the safety of Soviet-designed reactors, it nonetheless is a high-priority program for Ukraine and the United States. DOE believes that the technology transfer element of this project is essential to the program’s success.
Additionally, the role of the Ukrainian regulatory body in monitoring the design and construction of the casks is central to DOE’s emphasis on including the regulator in all aspects of nuclear safety. While these benefits may be achieved, Ukraine’s limited resources raise questions about the number of casks it can ultimately manufacture. According to the contractor, hundreds of casks will be required to have a significant impact on improving capacity for on-site waste storage. In December 1993, Ukraine’s nuclear utility entered into an agreement with Duke Engineering and Services to supply Zaporizhzhya with 14 spent-fuel storage units valued at approximately $14 million. Subsequently, Ukraine requested that DOE help fund the project. DOE agreed to fund $6.6 million of the total cost for the production of the first three casks. As of March 31, 1996, DOE had obligated $6.5 million and had spent about $5 million on the project. Table III.7 shows the expenditures in greater detail. The overall pace of the project has been slow, although training has been provided to the Ukrainian regulator and plant staff and some equipment has been purchased and delivered. The project has faced significant impediments and is more than 6 months behind schedule; it was initially set for completion by the end of March 1996. DOE estimates that the project will now be completed after September 1996. At the time of our review, none of the casks had been built. The primary reasons for the delay are (1) a series of unanticipated design changes required by the Zaporizhzhya power plant and (2) greater-than-anticipated time to obtain a Ukrainian construction license. A U.S. contractor’s representative told us that the plant’s management has changed, causing interruptions in program continuity and responsibility for decision making. 
In his view, Ukraine’s lack of experience in this type of technology—coupled with the desire to demonstrate independence—has contributed to difficulties with project implementation. For example, officials at a Ukrainian design institute would not initially approve the use of a standard U.S. welding technique because they were unfamiliar with the process, delaying the manufacture of part of the cask for about 5 months. Ukraine’s nuclear regulatory body, which is responsible for approving the project design and construction, has been slow to issue a construction license. Pacific Northwest National Laboratory officials and a representative of Duke Engineering and Services noted that the regulator has failed to respond in a timely fashion to a safety analysis report that is needed before construction can begin. These officials noted, however, that the delay is somewhat understandable because this is the first time that the regulator has been requested to license such a system in Ukraine. A Duke Engineering and Services representative said that customs problems did not delay project implementation. The project was already behind schedule because of problems with the Ukrainian regulator and plant management. However, he noted that several contractor personnel have spent time trying to resolve the customs issues, taking time away from other responsibilities. For example, some equipment, including a $400,000 cask transporter, was impounded by Ukrainian customs officials. The transporter and some ancillary equipment were delivered in early January 1996 but were not cleared for release to the plant until mid-April 1996. DOE is also providing fire safety equipment to the Zaporizhzhya nuclear power plant in Ukraine. DOE selected Zaporizhzhya—along with Smolensk—to be a pilot plant for the program to upgrade fire safety, expecting that this project would be implemented quickly. Burns and Roe is the primary U.S. contractor for the Zaporizhzhya project.
DOE is purchasing fire protection suits, fire hose nozzles, smoke detectors, fireproofing materials, fire alarms, and fire doors, and is assisting in installing the equipment. A key DOE objective is to transfer technology so that Ukraine can manufacture fire doors. As of March 31, 1996, DOE had obligated $1.8 million and had spent about $1.7 million for the project. Table III.8 shows these expenditures in greater detail. The project did not meet its initial expected completion date of December 1994. DOE currently estimates that the project will be completed in December 1996. According to Burns and Roe, the U.S. contractor, the delays have been caused primarily by (1) the plant management’s changes in work scope, (2) liability concerns, and (3) the inability of the Ukrainian company responsible for manufacturing the fire doors to meet milestones. Originally, Burns and Roe had planned to provide significant amounts of fireproofing materials for the reactor’s walls. However, the plant’s management subsequently requested that the materials be used for the floors, which required extensive reconfiguration of material requirements and rebidding of contracts. DOE’s project manager estimated that nuclear liability problems—and the resulting change in project management from Brookhaven to DOE—created a delay of 1 year. A Burns and Roe representative said the acquisition of Ukrainian-manufactured fire doors has delayed the project by about 7 months. As part of its program to transfer technology to support recipient countries’ infrastructure, DOE identified a Ukrainian company to produce fire doors for Zaporizhzhya. This company is expected to manufacture 122 fire doors that will be installed at one unit of the plant. Initial efforts to produce the doors were delayed because prototype doors manufactured by the company failed certification tests. According to the U.S. contractor, the Ukrainian company then took about 5 months to redesign the doors.
The doors passed inspection in June 1995. This company will be paid about $70,000 to produce the doors. At the time of our review, the production of the doors had recently begun. DOE estimates that all of the doors will be installed by September 1996. Overall, the project now appears to be progressing more smoothly. According to a Burns and Roe representative, the project is approximately 90 percent complete. According to DOE officials, with the exception of the Ukrainian fire doors, most of the equipment has been delivered, including 50 fire suits, 1,242 sprinkler heads, 160 smoke detectors, and fireproof sealant material. Other equipment, including fire extinguishers and face masks, will be provided by Ukrainian vendors. DOE and Pacific Northwest National Laboratory officials, as well as a Burns and Roe representative, believe that the project has met most of its safety-related objectives. In their view, the one operating unit at Zaporizhzhya that is receiving fire safety equipment is now more capable of reducing the incidence of fire and has an increased capacity to mitigate a fire’s consequences. DOE officials are confident that the Ukrainian company will continue to manufacture fire doors for Soviet-designed reactors, at least in the near term. For example, the Ukrainian company was negotiating a contract with the Laboratory to provide between 300 and 500 fire doors for Chornobyl. In addition, these officials believe that the company will provide more doors to Zaporizhzhya over the next few years. They expressed some concern, however, about whether funds would be sufficient to provide fire safety upgrades at more than one unit of the plant. DOE is assisting plant operators in developing symptom-based emergency operating instructions for Soviet-designed reactors.
In the event of an emergency, symptom-based operating instructions are designed to (1) specify operator actions in response to changing plant conditions, (2) allow the operator to stabilize the reactor without having to first determine the cause of the changing reactor conditions, and (3) contribute to faster decision making. DOE and national laboratory officials believe that the development and implementation of these instructions is one of the more significant components of the U.S. program to provide nuclear safety assistance. In their view, these instructions focus on the human element of safety and will contribute to a self-sustaining safety culture. DOE is also assisting in the development of operator training for the instructions and in the development of operational control procedures. The Institute of Nuclear Power Operations (INPO) has been responsible for transferring the U.S. methodology for developing and implementing the symptom-based emergency operating instructions. INPO was established by the U.S. nuclear industry in 1979, following the Three Mile Island accident, to enhance the safety and reliability of commercial nuclear power plants. INPO has developed a series of operational procedures and guidelines that have been adopted by power plant operators throughout the United States and is considered a leader in the field of operational safety in general and in the field of symptom-based emergency operating instructions in particular. Since February 1991, DOE has awarded four sole-source contracts to INPO for $13.5 million to transfer documents and expertise to help develop emergency operating instructions for the Newly Independent States. The contract prices were based on a fee system used for membership in INPO.
Under the terms of these fixed-price contracts, DOE accepted the fact that INPO’s accounting system did not meet government cost accounting standards and did not break out costs by such categories as labor, travel, and overhead. In addition to the INPO contracts, DOE has spent about $1.8 million to support the development of the instructions. (See table III.9.) As part of its overall support for the project, DOE is providing funds directly to several Russian nuclear power plants. As of March 1996, the Pacific Northwest National Laboratory had awarded contracts totaling $1.1 million to nine nuclear power plants that are developing the instructions in Russia, Ukraine, Bulgaria, and Lithuania. DOE’s project manager said the funds are needed to accelerate the pace of the program. The development and implementation of the emergency operating instructions has faced considerable impediments and delays. The experience at the Novovoronezh nuclear power plant is a case in point. In 1992, DOE’s former Assistant Secretary for Nuclear Energy said in congressional testimony that by mid-1993, 35 emergency operating instructions were to be implemented at Novovoronezh. By March 1996, only 22 of the instructions had been approved for implementation at one of the plant’s operating units. DOE and INPO officials noted that while the plant had drafted all of the procedures in 1992, numerous factors had delayed approval and implementation. Russian organizations that are responsible for approving the procedures have been slow to act and have not given the project priority status. A Pacific Northwest National Laboratory official told us that the Russian regulator and the VVER design institute were not included in the early part of the project. While this approach was dictated by the Ministry of Energy in the former Soviet Union, it became increasingly apparent that this approach did not ensure adequate coordination. 
A DOE official said that it was difficult to obtain a consistent story about which Russian organization was responsible for the delays. In August 1995, a Rosenergoatom official told a DOE official that the emergency operating instructions, in general, had been “headaches and a drain on resources.” In a March 1995 meeting between DOE and Minatom officials, DOE noted that it was very difficult to defend the U.S. nuclear safety assistance program when the Novovoronezh instructions were taking so long to be approved. DOE officials responsible for the project noted that although coordination among the key Russian organizations has improved over the years, it still could be better. DOE and Pacific Northwest National Laboratory officials noted that Russian officials have grown more supportive of the project. For example, the same Rosenergoatom official who had earlier criticized the project has made supportive statements since that time, according to DOE. In addition, in late September 1995, senior managers of key Russian organizations told a DOE representative that the project was a priority and that project management would be improved. Although disappointed by the pace of the instructions’ development, U.S. officials believe that progress is being made because Russia and Ukraine are displaying a greater commitment to implementing the instructions. For example, Goscomatom decreed that all reactor sites must develop emergency operating instructions by December 1996. In January 1996, Rosenergoatom directed all Russian plants to implement the instructions. Nevertheless, impediments remain for the project. For example, according to INPO, the development of the VVER instructions had stalled because the design institute had not provided engineering analysis. Furthermore, the design institute was generally unwilling to perform the analysis without compensation, which the nuclear power plants are unable to provide without funding assistance from the United States. 
Table III.10 shows the status of the emergency operating instructions by reactor type as of March 1996, including the planned drafting completion and planned implementation dates (month/year) for each reactor type. [Table III.10 is not reproduced here; its status notes include partial approval in March 1996, with an additional 10 instructions planned for June 1997; needed analysis identified, although the performing organization has not received any funds to perform it; and awaiting receipt of final documents from the nuclear power plants.] U.S. officials cited a case in which operators’ exposure to the process of developing the instructions provided them with the knowledge to respond more effectively and efficiently to emergency conditions. In response to leakage of cooling water from a Ukrainian reactor, operators took specific actions to control the water leakage and prevented the reactor from overheating. In December 1993, the U.S. Secretary of Energy and the Russian Minister of Fuels and Energy agreed to conduct a joint study to examine options for electric power in Russia, as recommended by the Gore-Chernomyrdin Commission. This study was (1) intended to build on a 1993 study on electricity options for Russia prepared for the G-7 by the World Bank, the International Energy Agency, and the Russian government and (2) expected to provide a framework for investment by international financial institutions in Russia’s electricity sector. U.S.-Russian working groups were established on energy efficiency, thermal power plants, nuclear power, and hydroelectric power, along with a joint steering committee with representation from the Department of State, USAID, DOE, and NRC. Because of Minatom’s initial resistance to participating in the joint study, DOE agreed to produce a second, parallel study that focused exclusively on the nuclear sector in Russia. DOE expected that its study would provide Russia with a cost-based analysis of nuclear energy options.
The options evaluated were (1) enhancing the safety of operating plants, (2) closure and decommissioning of operating plants, (3) conversion of a partially built power plant to gas or coal, (4) completing a partially built plant, and (5) building a new generation of plants. The conclusions and findings of the nuclear study were integrated into the broader study. DOE spent about $2 million to prepare the Joint Parallel Nuclear Alternatives Study for Russia. The study was funded under DOE’s international nuclear safety program. (See table III.11 for greater detail.) The broader study, the Joint Electric Power Alternatives Study, prepared by USAID and its contractors, cost about $8 million. DOE’s nuclear study was completed in May 1995 and the broader electric power study was presented at a meeting of the Gore-Chernomyrdin Commission in June 1995. Both studies were intended to be completed on a fast track in order to be available for the July 1994 G-7 meeting, but completion of the studies was delayed for about 1 year. This delay increased DOE’s project costs by about $1 million. Department of State and Brookhaven officials told us that project delays were due to Russian difficulties in developing the data and models, and disagreements over cost assumptions. In addition, a large number of Russian energy ministries, institutes, and organizations jointly prepared the report with the United States, and it took more time than anticipated to get their agreement on the report. Although there were initial difficulties in gaining the cooperation of Minatom, a Department of State official believes that one of the long-term benefits of the DOE-funded study is that Minatom worked cooperatively with other Russian energy ministries and organizations. The study found that nuclear power was cost-competitive with other sources of electricity in Russia and concluded that it was in Russia’s economic interest to upgrade some plants and to close four to six of its older plants.
The study also recommended that Russia develop a decommissioning program for a specific RBMK type of reactor. DOE and Brookhaven are now working with Minatom, the Kurchatov Institute in Moscow, and the Leningrad nuclear power plant in Russia to initiate a decommissioning study for Unit 1 of the plant. This appendix discusses four nuclear safety projects that NRC is implementing in Russia and Ukraine. These projects, with a total value of about $10.5 million, focus on (1) providing analytical simulators for Russia and Ukraine, (2) developing an emergency response center in Russia, (3) helping develop legal authority for Russia’s nuclear regulatory body, and (4) supporting efforts to perform a probabilistic risk assessment at a Russian nuclear power plant. NRC plans to provide four analytical simulators for the nuclear regulatory bodies in Russia and Ukraine to use for training purposes. Simulators are planned for Ukraine’s regulatory center in Kiev and Russian regional and headquarters sites. The regulators will receive software needed to model VVER-1000 plants at Zaporizhzhya in Ukraine and Balakovo in Russia, a VVER-440/213 plant at Rivne in Ukraine, and an RBMK plant at Kursk in Russia. In addition, the regulators will be trained to perform software modifications so that nuclear power plants at Chornobyl and Kola can also be simulated. The analytical simulators will enable the regulators to familiarize themselves with plant operations. NRC believes that the regulators’ ability to monitor plant safety will be improved significantly by providing dedicated training simulators. Currently, only a handful of regulators obtain a few hours of training on existing plant simulators in Russia and Ukraine. When fully implemented, the regulators are expected to have an integrated system of computer hardware and software and to train designated personnel in the use and maintenance of the analytical simulators.
NRC has allocated about $1.5 million for Russia and $2 million for Ukraine. As of March 31, 1996, NRC had obligated and expended $12,839 of this amount on personnel travel but had not yet spent any of its funds for Ukraine. Equipment and other deliverables have not yet been provided to Russia and Ukraine because of procurement problems at NRC. Table IV.1 shows the expenditures for this project in greater detail. The analytical simulator project was originally planned for completion by December 1996. NRC now projects that it will be completed about mid-1999. Project implementation has been slow primarily because of procurement delays. NRC has twice requested proposals for the analytical simulators in an attempt to obtain what it views as a reasonable price for the work envisioned. In May 1995, NRC requested proposals for simulators to be supplied to Russia. NRC considered the one proposal it received to be unreasonably high when compared with the government’s estimate of about $3 million. In December 1995, NRC solicited a proposal for both Russia and Ukraine and awarded a contract in June 1996 for $2.6 million. NRC plans to complete the project in several phases. The first phase, which started in December 1993 and is to be completed around December 1997, involves training personnel and delivery of simulator hardware to Russia and Ukraine. The second phase, which is projected to start in January 1997 and end in August 1998, involves the delivery of additional hardware to Russia. The third phase, which NRC plans to begin in September 1997 and complete by mid-1999, focuses on additional training and improved capabilities for the simulators. NRC is assisting Russia’s regulatory organization, Gosatomnadzor (GAN), in establishing a basic communications system and essential support capabilities for responding to emergencies at nuclear power plants. 
NRC’s effort is intended to reduce the severity of an emergency, should one occur, by reducing the incidence of radiological exposure to the public and the environment. The project provides for the purchase, installation, and in-place testing of prototype equipment at three locations—GAN Headquarters in Moscow, the Leningrad nuclear power plant near St. Petersburg, and the Kalinin nuclear power plant located between these two cities. After the prototype phase is completed, NRC plans to provide equipment to 11 other nuclear power plants and regional regulatory offices in Russia. Figure IV.1 shows the emergency response center in Moscow. NRC plans to complete a fully functional emergency support center in Moscow with communication links to each Russian nuclear power plant. NRC is taking a phased approach because of, among other things, operating uncertainties in Russia. NRC has contracted with Science Applications International Corporation (SAIC) for almost all of the work on the project, in accordance with plans approved by both NRC and GAN. NRC expects that a minimum amount of assembled equipment will be tested and operated in Russia during the prototype phase. The purpose is to determine if the equipment is fully suitable before making a U.S. investment in the entire Russian response system. NRC officials believe that once the transmission links are fully functional, the project will begin to show tangible, measurable progress. By improving communications among the plants and GAN, NRC believes the regulatory body’s role will be enhanced significantly because it will play a major role in coordinating activities in case of a nuclear power plant emergency. NRC has budgeted $1.5 million for the project. As of March 31, 1996, NRC had obligated about $1.3 million and had spent $524,738 for the project. Table IV.2 shows how the funds have been spent.
The project, started in October 1992, was originally expected to be completed by February 1996 but is now projected to be completed by September 1997. NRC has provided some prototype equipment but has not yet expanded the project to provide basic communications and support equipment to all of the Russian nuclear power plants and to GAN headquarters. The prototype equipment associated with the project includes three compatible computers with fax modems and software, a dot-matrix printer, a facsimile machine, and three high-frequency radio base stations with fax modems. NRC officials and SAIC representatives said the prototype communications system, which is partially functioning, faced several impediments that delayed operations. For example, permits obtained from Russia’s Ministry of Communications to test radio communications were temporary and good for initial tests only. Under the prototype phase, the initial results from testing were poor, but because the temporary permits had expired, further testing had to await better, permanent frequency assignments and permits from the Ministry. NRC finished the prototype work in the spring of 1996 rather than by January 1995, as originally estimated. In addition, project-related equipment has not always been delivered in a timely manner because of customs problems. For example, three antennas costing about $2,000 were held by Russian customs for several weeks in mid-1995. The U.S. contractor had shipped the antennas, but they were impounded by customs officials, who demanded payment of import duties to release them. NRC brought this matter to the attention of a GAN official to get the equipment released. In another case, 10 modems and related cables that GAN officials had hand-carried to Russia were impounded by customs officials. As a result of that customs action, other shipments were halted and the installation of the equipment was put on hold.
The matter was resolved in September 1995, but a SAIC representative told us that he has spent large amounts of time assisting GAN with customs problems. NRC is helping GAN to (1) develop a legal framework and create a system of enforcement and economic sanctions and (2) improve its ability to license civilian nuclear power plants and other civilian nuclear installations. NRC and GAN consider these objectives to be among their highest priorities. Without a legal foundation for its operations—and the ability to impose fines for improper operations—GAN’s long-term effectiveness and viability remain questionable. GAN’s First Deputy Chairman told us that without the appropriate legislative backing, his organization will not be able to function effectively within the Russian nuclear bureaucracy. For example, the Russian official noted that although GAN can impose fines on nuclear installations, the fines are of little value. In the fall of 1994, NRC officials provided comments to GAN on the draft Russian law pertaining to the use of nuclear energy. The comments primarily related to the need to clarify GAN’s regulatory independence. According to NRC officials, NRC is not attempting to impose its own regulatory system on GAN. Rather, it seeks to work collaboratively and tailor its support to meet the needs of GAN. As a result, GAN has acquired detailed information about NRC regulatory practices and legal responsibilities through this project. NRC has allocated $577,934 for the project as follows: $502,934 for licensing activities and $75,000 for legislative and enforcement initiatives. As of March 31, 1996, NRC had obligated $479,773 and had expended $328,638 for licensing activities, and had obligated and spent $39,010 for legislative initiatives. Table IV.3 shows these expenditures in greater detail. Both the legislative and licensing initiatives have been delayed.
The licensing project was originally scheduled to be completed in mid-1995 and is now scheduled for completion in mid-1997. The legislative initiative, which began in May 1994, was originally expected to be completed by December 1995. NRC now anticipates that the initiative will conclude by December 1996. NRC officials cited a number of factors that have contributed to schedule slippages, including (1) changes in the scope of the project, (2) longer-than-anticipated time to prepare and translate critical documents, and (3) delays in Russian completion of legislation and drafting of an enforcement policy. NRC officials emphasized that the completion of the project depends to a great degree on the maturation of GAN and its acceptance within the Russian bureaucracy. Despite these delays, NRC officials view the progress made under these initiatives as an important first step toward increased regulatory independence and effectiveness. In November 1995, Russian President Yeltsin signed legislation that established, in part, a legal framework for the regulation of nuclear safety. NRC assisted GAN in drafting this legislation. Both NRC and GAN officials noted that the law is a significant step toward “legitimizing” GAN. GAN has also been designated as the lead agency to develop a supplemental law about the roles and responsibilities of the regulatory organization in Russia. NRC officials also believe that the licensing initiative is now moving ahead. For example, GAN submitted a document to NRC that addresses the process for submitting and approving licenses for various nuclear installations. NRC is supporting Russia’s efforts to involve six Russian organizations, including GAN, in performing a Probabilistic Risk Assessment for a VVER-1000 reactor at unit 1 of the Kalinin nuclear power plant. A Probabilistic Risk Assessment is used to evaluate the potential for significant accidents occurring at a plant during different power operations.
NRC has entered into agreements with six Russian organizations, including the plant designer, plant operator, the utility, and the regulator, to facilitate the assessment. GAN is responsible for coordinating and managing the project with the various Russian organizations participating in the development of the risk assessment. NRC believes that by performing the risk assessment, GAN and the other Russian participants will (1) obtain Probabilistic Risk Assessment training and develop expertise to perform and/or evaluate risk assessments conducted for other nuclear power plants, (2) achieve an improved understanding of the value of risk assessments and their uses for improving safety, and (3) increase their stature. Additionally, NRC anticipates that as the Russian organizations collaborate on the project, they will become more open and willing to cooperate among themselves in conducting risk assessments on other plants in Russia. NRC plans to spend $2.5 million for the project. As of March 31, 1996, NRC had obligated $1.4 million and had spent about $1.1 million. Table IV.4 provides detailed information on these expenditures. The project began in mid-1994 but initially experienced difficulty getting under way. According to NRC officials, a number of factors contributed to delays. NRC had some difficulty obtaining agreement among the various Russian participants about their respective roles, responsibilities, and the extent to which a final report would be distributed to other VVER-1000 plants. NRC and GAN signed a memorandum of understanding for the project in December 1994, but the final implementing agreements for all the other Russian participants were not approved until August 1995. Development of the project guidelines was also delayed. To date, the project has focused primarily on defining the scope of the project, developing specific procedure guides for each project task, and formalizing the amount and type of training needed.
In March 1996, NRC held a 2-month risk assessment workshop at which Russian technical staff representing the participating organizations met with NRC and U.S. experts. The purpose of the workshop was to begin the practical integration of the organizations and to focus on specific probabilistic risk assessment tasks. NRC is attempting to structure the training so that the Russian organizations will be able to perform the probabilistic risk assessments on their own with periodic NRC assistance and oversight. As part of this process, the Kalinin nuclear power plant staff is expected to provide specific information on plant design, operating history, and operating procedures. To identify the goals and objectives of the U.S. nuclear safety assistance program, we interviewed and obtained pertinent documents from officials at the Department of State, USAID, DOE, and NRC. We also met with officials at the Brookhaven National Laboratory in Upton, New York, and at the Pacific Northwest National Laboratory in Richland, Washington. We also met with representatives of the Nuclear Energy Institute and the Natural Resources Defense Council in Washington, D.C., to obtain their views about the priorities, objectives, and implementation of the U.S. program. To provide information on the amount and type of U.S. assistance being planned or provided, we obtained cost and program funding data from U.S. government agencies that provided the assistance. Specifically, we obtained these data from DOE, the Brookhaven and Pacific Northwest National Laboratories, and NRC. We did not independently verify the accuracy of the data they provided. To determine how the U.S. safety assistance program was being implemented, we judgmentally selected 13 DOE and NRC safety projects to review. These projects are valued at about $67 million. We limited our selection of projects to Russia and Ukraine because those countries are the primary recipients of U.S.
assistance to improve the safety of Soviet-designed reactors. We based our selection on a number of factors: (1) the maturity of the project; (2) dollar value; and (3) diversity—equipment-related projects, training-related projects, legislative initiatives, and a study of nuclear energy options for Russia. We discussed our selection of projects with DOE and NRC officials. DOE officials requested that we add some additional training and equipment projects to our sample, which we did. NRC officials said the projects we chose represented a fair sample of the type of assistance NRC is providing. To assess the status of the selected projects and how they are improving safety, we met with appropriate DOE, NRC, and national laboratory officials. We also met with U.S. contractor representatives responsible for implementing the projects for DOE and NRC. Specifically, we met with officials from the following U.S. firms: Burns and Roe Company (Oradell, New Jersey); Science Applications International Corporation (Germantown, Maryland); S-3 Technologies, Inc. (Columbia, Maryland); Duke Engineering and Services (Charlotte, North Carolina); Sonalysts, Inc. (Waterford, Connecticut); and Bechtel Power Corporation (Gaithersburg, Maryland). We also met with a representative of the Institute of Nuclear Power Operations (Atlanta, Georgia). We met with officials from Russia and Ukraine to obtain their views on U.S. nuclear safety assistance. Specifically, we met with Russian representatives from the Smolensk nuclear power plant and Russia’s Ministry of Atomic Energy, Minatom. We also observed a week-long safety assistance planning meeting between NRC and GAN officials. We met with several GAN officials, including the First Deputy Chairman, to discuss their views about the U.S. assistance program. We discussed the implementation of the Khmelnytskyy simulator project with several Ukrainian representatives from the plant and from Ukraine’s nuclear utility, Goscomatom. 
These representatives were part of the technology transfer team temporarily residing in the United States. The following are GAO’s comments on DOE’s letter dated September 17, 1996. 1. DOE disagreed with our position that the U.S. safety assistance program poses a dilemma because it may encourage the continued operation of the same reactors that the United States wants to see closed as soon as possible. DOE said that the equipment it is providing is targeted to specific safety deficiencies or to prevent the failure of critical safety equipment and does not extend the operating life of the Soviet-designed nuclear power plants. We have not asserted that the equipment will extend the life of the Soviet-designed plants but believe that some of this equipment may be used to justify the continued operation of the plants. As we noted in the report, the repair or replacement of any component that the plant relies on would support continued plant operations. In our view, DOE’s RBMK maintenance initiative provides the equipment, training, and transfer of technology that enables plant components to remain in service longer—thereby supporting continued plant operations while improving plant safety. For this reason, we maintain that the U.S. program poses a dilemma for U.S. policymakers. While the United States remains committed to the goal of shutting down the highest-risk plants, the assistance has the potential to keep the plants operating longer than they otherwise might have. The following are GAO’s comments on USAID’s letter dated September 18, 1996. 1. Regarding USAID’s comment that progress has been made in the U.S. assistance program, our report noted that several projects we reviewed are progressing and have, for example, resulted in installing fire safety equipment and other safety-related hardware. However, it is too early to assess the progress these projects have made in safety because only one of the projects we reviewed had been completed. 
Furthermore, most of the 13 projects we reviewed had been delayed, and progress was slow in many cases. 2. USAID commented that the report gives the impression that no progress is being made toward obtaining the closure of the highest-risk Soviet-designed reactors. Our report cites several instances in which closure commitments have been made but also notes that it will be difficult for the countries to meet specific closure dates because of the slow pace of economic reform and the need for financing to help develop alternative energy sources. We also note, however, that to date no reactors have been closed, and one was recently restarted. Jackie A. Goff, Senior Attorney

Pursuant to a congressional request, GAO provided information on: (1) changes in the U.S.
nuclear safety assistance program's goals since its inception; (2) the costs associated with the program; (3) the status of 13 safety projects implemented by the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC); and (4) the way in which the agencies assess the effect of the projects on improving safety. GAO found that: (1) the U.S. nuclear safety assistance program's goals to reduce the risk of accidents and encourage the shutdown of the highest-risk Soviet-designed nuclear power reactors have not changed; (2) despite U.S. efforts to close these reactors, none of the highest-risk reactors have been closed and one in Armenia has been restarted; (3) DOE plans to increase its assistance to RBMK reactors to improve their safety while they continue operations; (4) reasons for not shutting down these reactors include the slow pace of the operating countries' economic reforms, concerns about displaced workers' social and economic well-being, and the need for financing for developing replacement energy sources; (5) DOE believes the nuclear safety assistance program should continue another 10 years and is developing a long-term plan that addresses how additional funds should be spent; (6) as of March 1996, DOE and NRC had received $208 million for their programs and had spent $89 million on nuclear safety equipment and products and other expenditures including program-related labor, travel, and overhead; (7) 11 of the 13 DOE and NRC safety projects reviewed have experienced delays, including untimely equipment deliveries due to customs problems and required equipment testing in Russia; (8) some projects have resulted in the installation of fire safety equipment and other safety-related hardware at nuclear powerplants and the development of safety-related training programs in Ukraine; and (9) it is too early to assess the extent to which these projects have improved nuclear reactor safety, and it is difficult to quantify the impact of the assistance 
provided.
FERS consists of three parts: a DB component, a DC component with employer contribution, and Social Security. CSRS consists of two parts: a DB component and a DC component with no employer contribution. The DC component differs between FERS and CSRS. Under FERS, federal agencies automatically contribute an amount equal to 1 percent of salary to the Thrift Savings Plan (TSP) for each covered employee whether or not the employee contributes. In addition, the employer will contribute $1.00 for each $1.00 the employee contributes up to 3 percent of salary; and $.50 for each $1.00 the employee contributes on the next 2 percent of salary, for a maximum total employer contribution of 5 percent. FERS employees may contribute an additional 5 percent of salary with no employer contribution. The DC component in CSRS has no employer contribution; however, employees may contribute up to a total of 5 percent of their salary to the TSP and up to 10 percent of their salary to a separate voluntary contribution account. Except for certain employees who were rehired after December 31, 1983, employees in CSRS are not covered by Social Security through their federal employment. State retirement programs vary widely—not only in their details but also because most jurisdictions have separate programs for special categories of employees, such as law enforcement officers, firefighters, teachers, and elected or judicial officials. Employees who do not fall into one of these categories are usually covered by a retirement system for general employees. In 1911, Massachusetts became the first state to develop a retirement program for general service state employees; and by 1947, every state provided retirement benefits. In addition to Social Security benefits, employers may provide retirement benefits to their employees using two basic types of design components— DB and DC. 
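The FERS matching schedule described above can be expressed as a short calculation. The sketch below is for illustration only; the function name and the choice to express contributions as percentages of salary are ours, not part of the plan documents.

```python
def fers_employer_tsp_pct(employee_pct: float) -> float:
    """Total employer TSP contribution, as a percent of salary, under the
    FERS schedule described in the text (illustrative helper, not official)."""
    automatic = 1.0  # 1 percent of salary, paid whether or not the employee contributes
    # Dollar-for-dollar match on the first 3 percent the employee contributes
    dollar_for_dollar = min(employee_pct, 3.0)
    # 50 cents per dollar on the next 2 percent (i.e., contributions above 3, up to 5)
    fifty_cent_match = 0.5 * max(0.0, min(employee_pct, 5.0) - 3.0)
    return automatic + dollar_for_dollar + fifty_cent_match
```

For example, an employee contributing 5 percent of salary receives the maximum 5 percent employer contribution (1 percent automatic, 3 percent matched dollar for dollar, and 1 percent from the 50-cent match on the next 2 percent); contributions beyond 5 percent draw no additional match.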
Under a DB plan, employers bear the full responsibility for providing sufficient funding to guarantee that the benefits promised by the formulas will be paid. A DC plan differs in that there is no guaranteed annual benefit amount at retirement. Therefore, the employee bears the risk of whether the funds available at retirement will provide a desired level of benefits. Because employers may choose to offer a combination of these design components, that is—DB, DC with employer contribution, DC with no employer contribution, and Social Security—retirement programs can differ not only by the benefits they provide, but also by the design components they include. Among the arrangements used by state governments to augment regular employee retirement plans are three DC plans authorized under the Internal Revenue Code at sections 401(k), 403(b), and 457(b). These DC plans are voluntary, supplemental, long-term retirement programs that give employees an opportunity to defer receipt of income until retirement or termination of employment. The key attraction of these plans can be the potential tax savings for employees. Income tax is generally deferred on contributions made to these plans and the associated earnings during an employee’s career. Taxes are due when the individual receives benefits from the plan, usually after retirement. To determine the numbers of state retirement programs open and available to general employees as of July 1, 1998, that included the same design components as FERS and CSRS, and what design components were included in the remaining state retirement programs, we obtained preliminary information from a database developed by the Public Pension Coordinating Council (PPCC) for its report, 1997 Survey of State and Local Government Employee Retirement Systems. We also obtained summary plan documents (e.g., employee handbooks and information brochures) for each of the 50 states and interviewed state retirement officials knowledgeable about the programs. 
We did not review the underlying statutes that set forth the specific provisions of state retirement plans. We relied on our prior work to identify the design components of FERS and CSRS, which we then used to categorize the state retirement plans. On the basis of our prior work, we defined federal general employees as employees who were covered by FERS and CSRS, excluding those who were covered by special retirement provisions—notably law enforcement officers, firefighters, air traffic controllers, Members of Congress, and congressional staff. We defined state general employees as employees who were not classified as law enforcement officers, firefighters, legislative staff, or elected or judicial officials. We also excluded teachers in those states in which teachers were covered by different retirement plans than the ones that covered general employees. As shown in table 2, we categorized state retirement plans according to the major design components found in FERS and CSRS—specifically DB, DC with employer contribution, DC with no employer contribution, and Social Security. On the basis of the four design components shown above, we classified the states’ design components into six category types as shown in table 3. Our approach was designed to provide the most up-to-date information (as of July 1, 1998) on the current design components and features of state retirement plans; however, it was not designed to provide information on some tiers of multitiered and/or closed plans that states also may be operating. We offset this limitation to some extent by describing the changes that have occurred in the design components of state programs since the programs were established. We also asked retirement officials in each state to confirm the accuracy of the information about their programs used in our report. 
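The categorization described above amounts to a lookup on the set of design components a plan includes. The sketch below labels only the three combinations this excerpt spells out (Type I is FERS-like, Type II is CSRS-like, Type III is the most common mix); the report's table 3 defines six types in all, so any other mix is left unlabeled here rather than guessed at:

```python
# Design components, as defined in the report's table 2.
DB = "defined benefit"
DC_MATCH = "DC with employer contribution"
DC_NO_MATCH = "DC with no employer contribution"
SS = "Social Security"

def classify(components: frozenset) -> str:
    """Map a plan's design-component mix to the report's type categories.

    Types I-III follow the appendix definitions; the remaining type
    labels are not reproduced in this excerpt, so other mixes are
    returned as 'other combination'.
    """
    if components == frozenset({DB, DC_MATCH, SS}):
        return "Type I (FERS-like)"
    if components == frozenset({DB, DC_NO_MATCH}):
        return "Type II (CSRS-like)"
    if components == frozenset({DB, DC_NO_MATCH, SS}):
        return "Type III"
    return "other combination"
```

Under this scheme, Minnesota's mix (DB, DC with employer contribution, Social Security) classifies as Type I and Massachusetts' (DB, DC with no employer contribution) as Type II.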
To describe the changes, if any, that have been made to the design of state retirement programs open and available to state general employees since the programs were established, what design changes states were considering, and the reasons for these changes, we interviewed cognizant state retirement officials. We used this information to determine what design components were previously available to state general employees, as well as those components that would be available if states made the changes they were considering. To provide detailed information on the features of state retirement programs, we also used the database developed by PPCC to obtain specific data regarding eligibility, benefits, and contributions. As a supplement to the database, we reviewed state retirement program summary plan documents and financial reports, and/or we contacted state retirement officials. Plan profiles for each state are presented in appendixes I through VI. The profiles are grouped by type according to their design components and are preceded by summary data we selected to facilitate comparisons with FERS and CSRS. We requested comments on a draft of this report from the Director, Office of Personnel Management (OPM). OPM responded that it had no comments. We did our review from February 1998 to March 1999 in accordance with generally accepted government auditing standards.

All of the 50 state retirement programs had two or more of the four design components, but few had all of the same design components as FERS or CSRS. All states had a DC component, but only nine states contributed to the plan. Forty-eight states included a DB component, and 43 states included Social Security coverage. A majority of states included three of the design components, but these were not the same components included in either FERS or CSRS. A total of nine states had exactly the same component mix as FERS or CSRS.
Of the nine states like FERS or CSRS, three—Minnesota, Missouri, and Oklahoma—had programs similar in design to FERS. These programs included a DB component, a DC component with an employer contribution, and Social Security. The other six states—Colorado, Louisiana, Maine, Massachusetts, Nevada, and Ohio—offered retirement programs with the same design components as CSRS to their general employees. That is, these programs included a DB component and a DC component with no employer contribution. The vast majority—35—of the states had retirement programs that included a DB component, a DC component with no employer contribution, and Social Security. The lack of employer contributions distinguished these programs from FERS, and the inclusion of Social Security coverage distinguished them from CSRS. The remaining six states offered retirement programs that included other combinations of the FERS and CSRS design components. For example, Indiana, Tennessee, and Utah had programs with a DB component, a DC component with employer contribution, a DC component with no employer contribution, and Social Security. Michigan and Nebraska had programs that included a DC component with employer contribution, a DC component with no employer contribution, and Social Security. Alaska had a retirement program with a DB component, a DC component with employer contribution, and a DC component with no employer contribution.

All states have in some way changed the design components of their retirement programs since the programs were established. For the most part, developments in federal law that might enhance employee benefits prompted these changes—notably, the extension of Social Security eligibility to state employees in the early 1950s and the adoption of tax provisions beginning in the 1970s allowing state employees to contribute on a pretax basis to a DC plan. The vast majority, or 44, of the states provided Social Security coverage after it became available to state employees.
Of these states, 43 had provided Social Security coverage by the late 1950s; the last state to add Social Security did so in 1969. Alaska discontinued its Social Security coverage in 1980 after state employees elected to add a DC component with employer contribution to the state’s retirement program. Thus, 43 states currently provide Social Security coverage to their employees. Six states have never added Social Security coverage to their retirement programs. States began establishing deferred compensation plans—voluntary DC plans—to augment their retirement programs in 1972. By 1978, when Congress passed Public Law 95-600, which created a statutory basis for Section 457 of the Internal Revenue Code, almost half of the states had a voluntary DC plan. By 1988, all states had voluntary DC plans available to state employees. Although most state employers did not contribute to these plans, as of July 1998, five states did provide an employer contribution. Table 4 shows the design component changes states have made since their retirement programs were established. According to state retirement officials, design component changes under discussion by one or more states during their last legislative sessions included (1) discontinuing the DB plan and establishing a DC plan, (2) adding a DC component to the DB plan, (3) adding an employer contribution to the employee’s voluntary DC plan, and/or (4) adding Social Security coverage. Of the 48 state retirement programs with a DB component, officials representing 21 states told us that they had considered dropping their DB component in favor of a program consisting of a DC component with an employer contribution and Social Security. As shown in table 5, reducing government costs, enhancing portability, and/or lobbying by special interests or group(s) were among the reasons cited for considering such a change. However, no such change was made by these states during their last legislative sessions. 
Only one state had made this change in recent years. In 1997, Michigan modified its retirement program in this way, which Michigan officials said was done largely as a means of reducing government cost. The most common reasons officials cited for their states not making the change to a DC plan during their past legislative sessions were that (1) studies done by the states showed there was no need to change, (2) further study was needed, (3) the state’s labor unions opposed the change, and/or (4) there was a lack of interest or support. Table 6 shows the reasons state retirement officials gave for not adopting a DC plan. Officials representing the remaining 27 state retirement programs with a DB component told us that their states had never considered dropping the DB component. The most common reasons state officials gave for not considering the change from a DB to a DC plan were that (1) the DB component provided greater benefits, including survivor and disability benefits; (2) they regarded the DB plan as a better way to retain employees; and/or (3) there was little or no support for the change. Table 7 shows the states that had not considered changing their current retirement plans to a DC plan and the reasons why a DC plan had not been considered. We are sending copies of this report to Senator Fred Thompson, Chairman, and Senator Joseph I. Lieberman, Ranking Minority Member, Senate Committee on Governmental Affairs; Representative Dan Burton, Chairman, and Representative Henry A. Waxman, Ranking Minority Member, House Committee on Government Reform; the Honorable Janice R. Lachance, Director, OPM; and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VII. Please contact me on (202) 512-8676 if you have any questions concerning this report. Type I plans include three design components: DB, DC with employer contribution, and Social Security. 
These components are similar to the components found in the Federal Employees' Retirement System (FERS). The appendix consists of a summary of key features that we selected to facilitate comparisons with FERS and an individual profile for each state plan we categorized as Type I.

[The Type I summary and profile tables survived only as fragments. Recoverable entries from the summary of selected features include normal retirement at age 65 with 3 years of service if born before 1943, or age 66 with 3 years of service if born in 1943 or later; a minimum age of 50 under a Rule of 80; and no minimum age under a Rule of 90. Table I.2, "FERS: Provisions of the Federal Employees' Retirement System"; Table I.3, "Minnesota: Provisions of the Minnesota State Retirement System – General Employees' Plan"; Table I.4, "Missouri: Provisions of the Missouri State Employees' Retirement Plan"; and Table I.5, "Oklahoma: Provisions of the Oklahoma Public Employees' Retirement System" retained only their row labels, which covered design components, the year Social Security coverage was provided, vesting periods, age and service requirements for normal unreduced benefits, the final average salary (FAS) calculation, cost of living adjustments (COLAs), survivor benefits (including "pop-up" provisions), purchase of service credits, and 1996 employer contributions.]

Type II plans include two design components: DB and DC with no employer contribution. These components are similar to the components found in the Civil Service Retirement System (CSRS). The appendix consists of a summary of key features that we selected to facilitate comparisons with CSRS and an individual profile for each state plan we categorized as Type II.

[The summary of selected Type II features survived only in fragments. Recoverable entries include a tiered benefit formula of 1.5% X FAS X years for the first 5 years of service, 1.75% X FAS X years for the next 5 years, and 2.0% X FAS X years for years over 10, along with row labels asking whether survivor benefits are provided after retirement, whether disability benefits are provided, and whether members are required to contribute.]
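The tiered benefit formula quoted above (1.5% of FAS per year for the first 5 years of service, 1.75% for the next 5, 2.0% beyond 10) can be sketched as follows. This is an illustration only; the function name and the sample figures are ours:

```python
def tiered_db_benefit(fas: float, years: float) -> float:
    """Annual DB benefit under the tiered formula quoted for one Type II plan:
    1.5% of final average salary (FAS) per year for the first 5 years of
    service, 1.75% per year for the next 5, and 2.0% per year over 10.
    """
    first_tier = 0.015 * min(years, 5)
    second_tier = 0.0175 * min(max(years - 5, 0), 5)
    third_tier = 0.020 * max(years - 10, 0)
    return fas * (first_tier + second_tier + third_tier)
```

For example, 30 years of service replaces 56.25 percent of FAS (7.5 + 8.75 + 40 percent across the three tiers).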
[Table II.2, "CSRS: Provisions of the Civil Service Retirement System," survived only in part. Its recoverable entries are:
Number of years to vest in the DB component: 5
Vesting in the employer's DC contribution: not applicable; no employer contribution
Age and service requirements for normal unreduced benefits: age 55 with 30 years of service, age 60 with 20 years of service, or age 65 with 5 years of service
Final average salary (FAS) calculation: highest 3 years
Does the plan only include state general employees? Not applicable; this is a federal plan.
The individual profiles for the Type II state plans followed, but only their row labels survived: COLA provisions (ad hoc, fixed rate, or variable rate based on investment performance or the Consumer Price Index), survivor benefits and "pop-up" provisions, purchase of service credits, and the employer's actual and actuarially required contributions for 1996.]
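The CSRS age-and-service thresholds for normal unreduced benefits recovered above (55/30, 60/20, 65/5) reduce to a simple any-of check. A minimal sketch, using only those three recovered thresholds:

```python
def csrs_eligible_unreduced(age: int, service: int) -> bool:
    """CSRS normal (unreduced) retirement eligibility, per the recovered
    Table II.2 entries: age 55 with 30 years of service, age 60 with 20,
    or age 65 with 5."""
    thresholds = [(55, 30), (60, 20), (65, 5)]
    return any(age >= a and service >= s for a, s in thresholds)
```

A 62-year-old with 25 years of service qualifies through the 60/20 threshold even though 25 years falls short of the 55/30 requirement.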
Type III plans include three design components: DB, DC with no employer contribution, and Social Security. The appendix consists of a summary of key features that we selected to facilitate comparisons with FERS and CSRS, and an individual profile for each state plan we categorized as Type III.

[Table III.1, "Selected Features of Type III State Retirement Plans as of July 1, 1998," survived only in fragments. Recoverable entries include vesting periods ranging from immediate to 10 years (including immediate vesting at specified ages such as 50, 55, 60, or 65); minimum age and service requirements for unreduced benefits such as no minimum age under a Rule of 80, 85, or 88, no minimum age with 28, 30, or 35 years of service, age 55 with 25 or 30 years of service, age 57 with 30 years of service, and age 65 with 5 years of service; and benefit formulas such as 1.67% X FAS X years and 60% X (years of service/30). The individual Type III state plan profiles that followed retained only their row labels, which covered eligibility, the FAS calculation, COLA provisions, survivor benefits and "pop-up" provisions, purchase of service credits, and 1996 employer contributions.]
Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? 
Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Does plan only include state general employees? Minimum age and years of service for early retirement Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? 
Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? 
Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially: required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Minimum age and years of service for early retirement Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? 
Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? How the post-retirement COLA is determined: Ad hoc at the discretion of the governing body Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? 
What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Minimum age and years of service for early retirement Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? General design and features of plan Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? General design and features of plan Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? 
Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? 
What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past 5 years? Variable rate based on investment performance Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Age and service requirements for normal unreduced benefits Final average salary (FAS) calculation Does plan only include state general employees? Cost of living adjustments (COLAs) Has the plan provided post-retirement COLAs in the past five years? Variable rate based on the Consumer Price Index Is the COLA based on the “original” or “current” benefit? Are survivor benefits provided after retirement? Is a “pop-up” provision included in the survivor’s benefit options? Does the plan allow for the purchase of service credits? What was the actual employer’s contribution as a percent of covered payroll in 1996? Was 100 percent of the employer’s actuarially required contribution actually contributed in 1996? If no, what percent was actually contributed in 1996? Type IV plans include four design components: DB, DC with employer contribution, DC without employer contribution, and Social Security. 
The appendix consists of a summary of key features that we selected to facilitate comparisons with FERS and CSRS, and an individual profile for each state plan we categorized as Type IV.
The Type V plan includes three design components: DB, DC with employer contribution, and DC without employer contribution. The appendix consists of a summary of key features that we selected to facilitate comparisons with FERS and CSRS, and the state plan's profile. Its benefit formula is 2.0% x FAS x years of service for the first 10 years, plus 2.25% x FAS x years of service for the next 10 years, plus 2.5% x FAS x years of service beyond 20 years.

Type VI plans include three design components: DC with employer contribution, DC without employer contribution, and Social Security. The appendix consists of a summary of key features that we selected to facilitate comparisons with FERS and CSRS, and an individual profile for each state plan we categorized as Type VI.
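The tiered benefit formula for the Type V plan lends itself to a short worked example. A minimal sketch (the final average salary and years of service below are hypothetical illustrations, not figures from any state plan):

```python
def annual_benefit(fas, years):
    """Tiered defined-benefit formula: 2.0% of final average salary (FAS)
    per year of service for the first 10 years, 2.25% for the next 10,
    and 2.5% for each year of service beyond 20."""
    first = 0.020 * fas * min(years, 10)
    middle = 0.0225 * fas * max(min(years, 20) - 10, 0)
    last = 0.025 * fas * max(years - 20, 0)
    return first + middle + last

# Hypothetical: $40,000 FAS, 25 years of service
# 2.0% x 40,000 x 10 + 2.25% x 40,000 x 10 + 2.5% x 40,000 x 5
#   = 8,000 + 9,000 + 5,000 = 22,000 per year
print(annual_benefit(40_000, 25))
```

Note how each tier applies only to the years of service that fall within it, so the accrual rate rises with tenure without being applied retroactively to earlier years.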
Larry H. Endy, Assistant Director, Federal Management and Workforce Issues; Margaret T. Wrightson, the Division's Associate Director of Tax Policy and Administration; and Ernestine Burt-Sanders, Issue Area Assistant, also contributed to this report. Linda J. Libician, Assistant Director; Tyra J. DiPalma, Evaluator-in-Charge; Cleofas Zapata, Senior Evaluator; James W. Turkett, Technical Advisor; and Jeffrey P. Kaser, Evaluator.

GAO provided information on: (1) the design components of retirement programs that states offer to their general employees, compared with the design components of the two principal retirement programs for federal employees; and (2) the changes states have considered and made to their retirement programs.
GAO noted that: (1) all states used two or more of the four design components, but few of their retirement programs had all of the same components as the Federal Employees' Retirement System (FERS) or the Civil Service Retirement System (CSRS); (2) the majority of states--35--included three components and differed by only one component from either FERS or CSRS; (3) the lack of employer contributions to defined contribution (DC) plans distinguished these programs from FERS, and the inclusion of social security coverage distinguished them from CSRS; (4) in the final analysis, three state programs had the same components as FERS and six had the same components as CSRS; (5) GAO's review showed that all states have in some way changed the design components of their retirement programs since the programs were established; (6) developments in federal law that might enhance employee benefits prompted most of these changes; (7) officials representing 21 of the 48 state retirement programs with a defined benefit (DB) component told GAO that their states had recently considered dropping their DB plan component in favor of a program consisting solely of a DC component with an employer contribution and social security; (8) however, only two states have no DB plan, and one of these states--Michigan--recently dropped its DB plan and switched to a DC plan with an employer contribution for its state-sponsored retirement benefits; (9) officials from the 21 states cited reducing government costs, enhancing portability, and lobbying by special interests as the major reasons for considering such a change; (10) they also cited a number of reasons for not dropping their DB plans, the most common being that: (a) studies showed no need for the change; (b) further study was needed; (c) labor unions opposed the change; or (d) there was a lack of interest or support for the change; (11) officials representing the other 27 state programs told GAO that their states had never considered dropping their DB component; and (12) the most common reasons state officials gave for not considering such a change were that: (a) the DB component provided greater benefits, including survivor and disability benefits; and (b) they regarded the DB plan as a better way to retain employees.
Depot maintenance is a key part of the total DOD logistics effort and is a vast undertaking, supporting millions of equipment items, 53,000 combat vehicles, 514,000 wheeled vehicles, 372 ships, and 17,300 aircraft of over 100 different models. Depot maintenance requires extensive shop facilities, specialized equipment, and highly skilled technical and engineering personnel to perform major overhaul of weapon systems and equipment, to completely rebuild parts and end items, to modify systems and equipment by applying new or improved components, or to manufacture parts unavailable from the private sector. DOD's depot maintenance facilities and equipment are valued at over $50 billion. DOD annually spends about $15 billion—or about 6 percent of its $243 billion fiscal year 1996 budget—on depot maintenance activities. About $2 billion of this amount includes contractor logistics support, interim contractor support, and funds for labor associated with the installation of some major modifications and parts of software maintenance, which are contracted to the private sector using procurement, rather than operation and maintenance, funds. The DOD depot system, which actually comprises four systems, employs about 89,000 DOD civilian personnel, ranging from laborers to highly trained technicians to engineers and top-level managers. Our recent report on closing maintenance depots provides a history of each of the services' depot systems. While the number of depot personnel has been reduced by over 40 percent from the system's 1987 peak, depot facilities and equipment have not been similarly downsized. At the time of the 1995 BRAC process, the DOD depot system had 40 percent excess capacity, based on an analysis of maximum potential capacity for a 5-day week, one 8-hour-per-day shift operation. The Air Force, which had not closed a U.S. depot since the 1960s, had 45 percent excess capacity.
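The excess-capacity percentages cited above follow from a straightforward capacity measure. A sketch of the arithmetic (the labor-hour figures are hypothetical; the actual BRAC analysis was considerably more detailed):

```python
def excess_capacity_pct(max_capacity_hours, programmed_hours):
    """Excess capacity as a percentage of maximum potential capacity,
    where capacity is measured against a 5-day week, one 8-hour-per-day
    shift operation."""
    return 100.0 * (max_capacity_hours - programmed_hours) / max_capacity_hours

# Hypothetical: 100 million direct labor hours of single-shift capacity
# against 60 million hours of programmed workload -> 40 percent excess
print(excess_capacity_pct(100e6, 60e6))  # 40.0
```

Because capacity is defined on a single-shift basis, surge potential (second shifts, overtime) is not counted as excess, which is one reason the measure is conservative.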
Currently, 29 major DOD facilities—Army depots, Air Force logistics centers, naval aviation depots, naval shipyards, naval warfare centers, and Marine Corps logistics bases—perform depot maintenance work, of which 10 are in the process of being closed as DOD maintenance depots as a result of BRAC decisions. Additionally, DOD uses over 1,300 U.S. and foreign commercial firms to support its depot maintenance requirements. Statutes and regulations influence the mix of maintenance work performed by the public and private sectors. For example, as early as 1974, legislation prescribed a specific dollar value mix for public and private sector performance of alteration, overhaul, and repair work for naval vessels. Since then, workload allocation decisions have been influenced by percentage goals found in DOD policy guidance and legislation. DOD Directive 4151.1, "Use of Contractor and DOD Resources for Maintenance of Materiel," directed the services to plan for not more than 70 percent of their depot maintenance to be conducted in DOD depots in order to maintain a private sector industrial base. The most basic of the legislative mandates governing the performance of depot-level workloads is 10 U.S.C. 2464, which provides for a "core" logistics capability to be identified by the Secretary of Defense and maintained by DOD unless the Secretary waives DOD performance as not required for national defense. Traditionally, core was defined as the capability, including personnel, equipment, and facilities, to ensure timely response to a mobilization, national contingency, or other emergency requirement. The composition and size of this core capability are at the heart of the depot maintenance public-private mix debate. Other statutes affect the extent to which depot-level workloads can be converted to private sector performance. Two of the most significant are 10 U.S.C. 2466 and 10 U.S.C. 2469.
The first prohibits the use of more than 40 percent of the funds made available in a fiscal year for depot-level maintenance or repair for private sector performance: the so-called "60/40" rule. The second provides that DOD-performed depot maintenance and repair workloads valued at not less than $3 million cannot be changed to performance by another DOD activity without the use of "merit-based selection procedures for competitions" among all DOD depots and that such workloads cannot be changed to contractor performance without the use of "competitive procedures for competitions among private and public sector entities." In recent years, DOD has sought relief from both statutes.

DOD and the Congress are defining the role of DOD depots in the post-cold war era, much as the roles of U.S. war-fighting forces are being reshaped. The new model for managing depot maintenance has not yet emerged. However, given DOD's depot maintenance policy report, the model apparently will be a mix of public and private sector capabilities, but with a clear shift toward greater reliance on the private sector. DOD's March 1996 Depot-Level Maintenance and Repair Workload Report projected a significant increase in the depot work that will be privatized. Further, since the services periodically reevaluate their core workload requirements, it is unknown how much more of their current work will be determined to be non-core and privatized. Unless effectively managed, including downsizing of remaining depot infrastructure, a major shift in depot workloads to the private sector would exacerbate existing excess capacity in the DOD depot maintenance system.

Historically, depot maintenance on wartime critical DOD systems has largely been performed in DOD depots. Based on both cost and risk factors, the general DOD policy was to rely on DOD depots to provide a cost-effective and reliable source of support for wartime readiness and sustainability.
With some exceptions, peacetime maintenance of weapon systems with wartime taskings was performed in DOD depots. This peacetime workload constituted depot maintenance core. Core was determined by quantifying the depot work that would be generated under war scenarios and then computing the amount of peacetime work needed to employ the number of people necessary to support the anticipated wartime surge. Peacetime workload was composed of a mix of high- and low-surge items, allowing employees to transfer from low-surge workload to high-surge workload during war. While there were always a number of potential war scenarios, the depots were sized to support a sustained global war.

During the cold war, there was not much pressure to move work from DOD depots to the private sector. Military leaders expressed a clear preference for retaining much of their work in DOD depots, which were highly flexible and responsive to changing military requirements and priorities. The quality of the DOD depots was high, and users were generally well satisfied with the depots' work. Further, the threat of a global war and the resulting stress on the logistics system were constant reminders of the need to maintain the flexibility and responsiveness the depot system provided.

Historically, DOD has reported that about 70 percent of its depot maintenance work was performed in DOD depots. In our 1994 testimony before the Readiness Subcommittee of the House Armed Services Committee, we stated that the private sector more likely receives about 50 percent of the DOD depot maintenance budget. We noted that a portion of the funds expended on the maintenance workload assigned to the public sector ultimately was used for private sector contracts for parts and materiel, maintenance and engineering services, and other goods and services.
Additionally, some types of depot maintenance activities, such as interim contractor support and contractor logistics support, were not included in previously reported statistics. Our review of data in DOD's March 1996 workload report indicates that by fiscal year 1997, the mix will be about 64 percent in the public sector and 36 percent in the private sector. Further analysis indicates that the data do not include funds reported by the services for interim contractor support, contractor logistics support, or goods and services that the DOD depots ultimately buy from the private sector. Including these funds would change the mix to about 53 percent in the public sector and 47 percent in the private sector. While the Department's projection for the public-private mix in 2001 is 50 percent in each sector, our analysis indicates that it is actually about 37 percent in the public sector and 63 percent in the private sector. Further, since the services are conducting risk analyses to further define their minimum core capability, the DOD depots' share of funding could be reduced even further.

With the end of the cold war and the subsequent declines in defense spending, there are increased pressures to privatize more depot maintenance work. Those declines affected force structure and the public and private activities supporting force structure. As acquisition programs began to decline, a growing concern arose over the impact on the defense industrial base. Particular concern focused on how that industrial base could be maintained without the large development and production programs of the past, and attention began to shift to DOD depot workloads as a potential source of work to keep the industrial base viable.
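The shift in the reported mix once contractor-support funds are counted can be illustrated with simple share arithmetic. The 21-percent add-on used below is a hypothetical figure chosen only to reproduce the reported 53/47 split; it is not a number taken from the workload report.

```python
def adjusted_mix(public_share: float, extra_private_ratio: float):
    """Recompute a public/private funding mix after adding funds that
    go entirely to the private side.

    public_share        -- public fraction of the originally reported total
    extra_private_ratio -- added private funds, as a fraction of that total
    """
    new_total = 1.0 + extra_private_ratio
    new_public = public_share / new_total
    new_private = (1.0 - public_share + extra_private_ratio) / new_total
    return new_public, new_private

# Reported FY 1997 mix of 64/36; adding roughly 21 percent more funds on
# the private side (contractor logistics support, interim contractor
# support, purchased goods and services) yields about 53/47.
pub, priv = adjusted_mix(0.64, 0.21)
```

The same mechanism explains why the Department's 50/50 projection for 2001 becomes roughly 37/63 once these excluded funds are added to the private share.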
Advocates of more private sector involvement argue that a shift toward the private sector would not only help keep the private sector production base healthy during a period of reduced weapon procurement but could also lower costs, on the premise that the private sector can provide depot maintenance more cheaply than the public sector. Proponents of the DOD depot system believe the DOD depots have provided a quality, responsive, and economical source of repair. They note that DOD maintenance policy for many years has supported the outsourcing of depot maintenance work when it was determined to be cost-effective to do so. Further, they contend there are substantial differences between developing and producing new systems and maintaining fielded ones and that the dollars spent on maintenance, while not small, cannot fill the void created by declining production dollars.

Section 311 of the National Defense Authorization Act for Fiscal Year 1996 is an indication of congressional intent regarding the continued need for DOD depots: It is the sense of Congress that there is a compelling need for the Department of Defense to articulate known and anticipated core maintenance and repair requirements, to organize the resources of the Department of Defense to meet those requirements economically and efficiently, and to determine what work should be performed by the private sector and how such work should be managed. Section 311 also directed the Secretary of Defense to develop a comprehensive policy on the performance of depot-level maintenance and repair for the Department of Defense that maintains the core capability described in 10 U.S.C. 2464 and to report to the Senate Committee on Armed Services and the House Committee on National Security. The section further directed that in developing the policy, the Secretary should include certain elements such as interservicing, environmental liability, and exchange of technical data.
The Congress supports preserving a DOD depot maintenance system to support core requirements. With no additional BRACs scheduled, the Department was charged with developing a depot maintenance policy that provides adequate workloads to ensure cost efficiency and technical proficiency in time of peace. We are analyzing DOD's depot maintenance policy and workload analysis reports, as required by Section 311, and will be reporting our findings by May 18, 1996. However, as requested, I am providing our observations to date on the policy report.

First, it provides an overall framework for managing DOD depot maintenance activities. Second, it sets forth a clear preference for moving workload to the private sector, which will likely result in a much smaller core capability than exists today. Third, it is not consistent with congressional guidance in one key area—the use of public-private competitions. Fourth, the policy provides substantial latitude in implementation. As a result, the precise effect of this policy on such factors as public-private mix, cost, and excess capacity remains uncertain.

In response to the congressional requirement for a comprehensive statement of depot maintenance policy, DOD provided an overall framework for managing DOD depot maintenance activities. The policy report reiterates some past policies and identifies some new initiatives for depot-level maintenance. It references other directives, publications, memorandums, and decisions and notes that DOD plans to develop an updated single publication with applicable maintenance policy guidance. Our assessment is based on observations to date about the policy report and other related documents. The policy report clearly states that the Department has a preference for privatizing maintenance support for new systems and for outsourcing non-core workload.
It represents a fundamental shift in the historical policy of relying on DOD depots to provide for the readiness and sustainment of wartime tasked weapon systems. Section 311 of the authorization act states that the DOD policy should provide that core depot-level maintenance and repair capabilities be performed in facilities owned and operated by the United States. It also states that core capabilities include sufficient skilled personnel, equipment, and facilities that are of the proper size to ensure a ready and controlled source of technical competence and repair and maintenance capability necessary to meet the requirements of the National Military Strategy and other requirements and to provide for rapid augmentation in time of emergency.

Core, as set forth in the policy and workload reports, no longer means that wartime work will be performed primarily by DOD depots. DOD's core concept is for its depots to perform maintenance requirements that the service secretaries identify as too risky for the private sector to perform. In determining core workloads, the DOD policy calls for maintaining only "minimum capability"—which does not necessarily mean an actual workload for a depot. What once was calculated as core is now called pre-risk core. For those mission essential workloads that historically would dictate retention of a core capability, the services will conduct a risk assessment to determine if the work should be made available for competition within the private sector. The policy guidance provides some limited criteria for performing a risk assessment, but DOD has not yet developed guidelines for making those assessments in a consistent manner. It is unclear to what extent measured criteria or subjective judgment will be used for such assessments. In a similar vein, DOD's policy on depot maintenance seeks to severely limit the use of DOD depots for new weapon systems.
Section 311 provides for the performance of maintenance and repair for any new weapon systems defined as core in facilities owned and operated by the United States. On the other hand, the Department reported to the Congress in August 1995 that it intended to privatize depot maintenance for new systems and reported in its January 1996 depot maintenance privatization initiative that it intended to freeze the transition of new workloads to DOD depots. The policy report and other recently issued DOD guidance, such as DOD Instruction 5000.2, also show that DOD's maintenance concept for new and modified systems will minimize the use of DOD depots. This preference, in combination with DOD's minimum core concept and limited public-private competitions, if not effectively managed—including reducing infrastructure and developing competitive markets—would likely result, over the long term, in DOD depots becoming an economic liability rather than a cost-effective partner in the total DOD industrial base.

The DOD policy report states that the Department will provide for cost efficiency, sufficient workload, and technical proficiency in its depots. However, accomplishing this objective will be difficult given that the depots already are underutilized and the policy providing for additional outsourcing would exacerbate that situation, unless there are additional depot closures. Further, the report does not provide a clear indication, aside from recognizing ongoing BRAC actions, on how the Department intends to downsize to minimum core.

While we are in the process of reviewing the policy report for consistency with congressional direction and guidance, our observation to date is that the report is inconsistent in one key area—the use of public-private competitions for allocating non-core depot maintenance workloads.
Section 311(d)(5) of the act provides that in cases of workload in excess of the workload to be performed by DOD depots, DOD's policy should provide for competition "between public and private entities when there is sufficient potential for realizing cost savings based upon adequate private-sector competition and technical capabilities." DOD's report provides a policy that is inconsistent with this instruction. According to DOD, it will engage in public-private competition for workloads in excess of core only when it determines "there is not adequate competition from private sector firms alone." The report did not clarify what would constitute adequate competition. Under this policy, DOD depots would participate in public-private competitions only sparingly and could not compete for non-core workloads where adequate private sector competition exists, even though the DOD depots could offer the most cost-effective source of repair.

We have reported that public-private depot maintenance competitions can be a beneficial tool for determining the most cost-effective source of repair for non-core workloads. As noted in our recent reports on the Navy's depot maintenance public-private competition programs for ships and aviation, we found that these competitions generally resulted in savings and benefits and provided incentives for DOD depot officials to reengineer maintenance processes and procedures, to develop more cost-effective in-house capability, and to ensure that potential outsourcing to the private sector is more cost-effective than performing the work in DOD depots. We recognize that DOD's public-private depot maintenance competition program raised concerns about the reliability of DOD's depot maintenance data and the adequacy of its depot maintenance management information systems. These deficiencies are not insurmountable.
As we noted in prior reports, many of the problems were internal control deficiencies that can be addressed with adequate top-level management attention. We also noted that some corrective actions have already been undertaken and additional improvements can be made. Further, we recommended that the Defense Contract Audit Agency be used to certify internal controls and accounting policies and procedures of DOD depots to assure they are adequate for identifying, allocating, and tracking costs of depot maintenance programs and to ensure proper costs are identified and considered as part of the bids by DOD depots. DOD has stated that it plans to use the Defense Finance and Accounting Service to review and certify the accounting systems of DOD depots.

The policy report provides wide implementation latitude in a number of key areas. For example, it provides for a DOD depot capability, but the ultimate extent of such capability, and hence DOD depot requirements, could be substantially reduced depending on future core workload assessments of privatization, readiness, sustainability, and technology risks. Depending on implementation, the policy's preference for privatization and the lack of a clear and consistent methodology for determining risks will likely lead to significant amounts of workload previously designated as core being reclassified as non-core and privatized.

For example, with respect to the Aerospace Guidance and Metrology Center, the Air Force is privatizing depot maintenance operations involving 627,000 direct labor hours of work—100 percent of which had been previously defined as core—stating that because the workload is being privatized-in-place, the risk is manageable. It is unclear how risky that privatization may turn out to be, particularly in light of the contractor's interest in divesting itself of its defense business. However, a similar rationale is being used to support other in-place privatizations.
With this predilection, it is likely that future core will represent something far different than it did in the past. For example, DOD's March 1996 workload report noted that core would ensure "that the Air Force establishes and retains the capabilities needed to assure competence in overseeing depot maintenance production that has both public and private sector elements"—a significantly different mission than that historically envisioned for DOD's core capability. Further, DOD's March 1996 report to Congress, Improving the Edge Through Outsourcing, included intermediate maintenance of DOD weapons and equipment—another function traditionally considered core—as one which the Department will now consider privatizing.

The policy also provides wide latitude in several areas where the decision for determining the public or private sector source of repair is based on an assessment of what is "economical" or "efficient." For example, the policy states that non-core workloads be made available for private sector competition only, when it is determined that the private sector can provide the required capability with acceptable risks, reliability, and efficiency. This efficiency requirement does not require the inclusion of the public sector to ensure that privatization is the most cost-effective option.

The underlying assumption behind DOD's depot maintenance privatization initiative is the expectation that savings of 20 percent will be achieved and that these savings will be made available to support the services' modernization programs. Our analysis indicates that this assumption is unsupported. The data cited by Department officials to support this savings assumption is the Report of the Commission on Roles and Missions of the Armed Forces (CORM). In May 1995, the CORM concluded that 20 percent savings could be achieved by the privatization of various commercial activities and recommended that DOD transfer essentially all depot maintenance to the private sector.
The Commission rejected the notion of core and recommended that DOD (1) outsource all new support requirements, particularly the depot-level logistics support of new and future weapon systems and (2) establish a time-phased plan to privatize essentially all existing depot-level maintenance. In its August 1995 response to the Congress on the CORM report, DOD noted that the Department agreed with the Commission's recommendation to outsource a significant portion of its depot maintenance work, including depot maintenance activities for new systems. However, the DOD response noted that DOD must retain a limited core depot maintenance capability to meet essential wartime surge demands, promote competition, and sustain institutional expertise.

We found that the Commission's assumptions on savings from privatization generally were based on reports of projected savings from public-private competitions for various commercial activities as part of the implementation of OMB Circular A-76. These commercial activities reviews included various base operating support functions, such as family housing, real property and vehicle maintenance, civilian personnel administration, food service, security and law enforcement, and other support services. While these activities were varied in nature, they had similarities in that they generally involved low-skilled labor; required little capital investment; generally involved routine, repetitious tasks that could readily be identified in a statement-of-work; and had many private sector offerors who were interested and had the capability to perform the work.

Our review of A-76 competitions and public-private competitions for depot-level maintenance found that the conditions under which A-76 competitions resulted in lower private sector prices were often not present or applicable to depot maintenance.
Specifically, we found the following:

- Reengineered government activities won about half of the A-76 competitions because they could provide the work cheaper. Our work shows that for public-private competitions involving depot maintenance activities, a program authorized by the Congress and implemented independently from A-76, DOD depots won 67 percent of the non-ship competitions. Public-private competitions for ships provided a unique situation wherein private sector offerors could bid marginal or incremental costs while DOD depots were required to bid full costs—a condition which, in concert with the more competitive nature of the ship repair market, led to the public shipyards not being competitive.

- When the private sector won A-76 competitions, savings were significantly higher than when the government function was performed by military personnel. The additional costs of military pay and benefits, coupled with productivity losses incurred for additional duties, decreased the competitiveness of the military personnel assigned to these duties. Depot maintenance, on the other hand, is performed almost exclusively with civilian personnel.

- The A-76 competitions did not involve activities comparable to depot maintenance—which is far more complex, less repetitious, and involves many unique systems not found in the private sector.

- Problems associated with statements of work in A-76 competitions resulted in cost increases for privatized work because of contract modifications to more explicitly define required work—a condition we also identified in our review of DOD's public-private program for depot maintenance. The impact of this cost growth for depot maintenance competitions can be illustrated by submarine repair competitions: while the average award amount for private shipyards was 16 percent less than that for competitions won by the public sector, greater cost growth in the private sector resulted in the average actual costs being about the same.
While the A-76 commercial activity competitions resulted in savings, the savings were not readily quantifiable, did not consider the cost of the competition or the administration of the contracts, and, for those competitions that were audited, savings were often less than projected. We had similar findings in our review of public-private competitions for depot maintenance. Additionally, we found that for the non-ship depot maintenance competitions won by a DOD depot, the DOD depots' bids averaged 40 percent less than the lowest private sector offeror. Where we observed cost growth in the limited number of depot competitions we analyzed, the growth was not sufficient to result in the DOD depots' costs exceeding the bid of the lowest private sector offeror.

The A-76 competitions were conducted in a highly competitive private sector market—frequently involving 4 or more offerors, with 10 percent of the competitions involving 11 or more offerors. Savings were much higher for those A-76 competitions won by the private sector where there were 5 or more private sector offerors. Our review of DOD's 95 non-ship depot maintenance public-private competitions showed the private sector market to be significantly less competitive. Twenty-two of the competitions had no private offerors and 33 had only one. Only 28 of these competitions had three or more offerors, while the number of offerors averaged less than two per competition.

Recognizing the influence of competition on achieving savings from privatization, we analyzed the competitiveness of DOD's non-ship depot maintenance repair contracts. We asked 12 DOD buying commands to identify depot maintenance contracts that were open during 1995. They identified 8,452 contracts valued at $7.3 billion and, based on high dollar value, we selected 240 contracts valued at $4.3 billion to analyze the commands' use of competitive procedures for the contracted workloads. The following table shows the results of our analysis.
As shown, the 12 buying commands awarded (1) 182, or 76 percent, of the contracts through sole-source negotiation; (2) 49, or 20 percent, through full and open competition; and (3) 9, or 4 percent, through limited competition. The 49 fully competitive awards accounted for about 51 percent of the total dollar value, while the 182 sole-source contracts accounted for about 45 percent of the dollar value.

In reviewing the number of offerors for the 49 contracts valued at $2.2 billion that were awarded through full and open competition, we found that the commands averaged 3.6 offers for the 49 contracts—ranging from a low of only 2 offers to a high of 10. For 30 of the 49 contracts—about 86 percent of the $2.2 billion—the number of offers was 4 or fewer. Five contracts valued at $525.8 million had only two offers, while only 19 contracts valued at $309.4 million had five or more offers.

We also found that a large portion of the dollar value of the contracts went to a relatively small number of contractors. Although the total number of contractors involved in the 240 contracts was 71, 13 of these contractors had most of the workload—about 76 percent of the $4.3 billion. Three of these 13 contractors had workload valued at $1.3 billion, about 30 percent of the $4.3 billion.

Our analysis of depot maintenance contracts showed that the private sector market was more competitive for certain types of systems and equipment than for others. For example, awards for repair of ground vehicles, trucks, airframes, engines, and other items were more often competitive, while sole-source contracts were prevalent for fire control systems, communications and radar equipment, electronic components, and other components. We found that the buying commands sometimes used both DOD depots and private sector sources for repair of a limited number of items. To make price comparisons, we looked at 414 items that buying activities identified as being maintained in both sectors.
For 62 percent of the items, the contract price was higher than the price for the same item repaired in a DOD depot.

We also analyzed the impact of other conditions relevant to creating a competitive environment. Regarding the ability to clearly define the service to be provided, the buying commands reported that depot maintenance activities present a difficult challenge. For much of the depot maintenance work, the specific tasks that must be done, the spare and repair parts that will be required, and the type and skill level of the labor required cannot be identified until the equipment or component is inducted into the repair facility for inspection and repair. Our review of depot maintenance contracts showed the difficulty of constraining cost growth in this environment—particularly when cost-type contracts are used. It also showed the large costs normally associated with drafting statements of work, conducting the competitions, and administering the contracts. At one buying activity, which obligates about $180 million per year for depot maintenance contracts, we found sole-source contracts were used 100 percent of the time—many of which were also cost reimbursable. Officials said they did not have the manpower, technical data, technical expertise, or contracting skills to use competitive contracting. Additionally, officials noted that the process for qualifying repair sources is difficult and time-consuming.

There have been a number of recent initiatives to privatize depots recommended for closure or realignment by BRAC.
The most prominent among these so-called "in-place" privatization initiatives involve the Aerospace Guidance and Metrology Center, a depot recommended for closure by the 1993 BRAC Commission and located on Newark Air Force Base, Ohio, and the Sacramento and San Antonio Air Logistics Centers, which were recommended for closure by the 1995 BRAC Commission and are located on McClellan Air Force Base, California, and Kelly Air Force Base, Texas, respectively.

We previously reported that, although it may be several years before the total cost of privatizing the Aerospace Guidance and Metrology Center's depot maintenance workload can be identified, our preliminary analysis indicated that this privatization will likely increase, rather than decrease, depot maintenance costs. In addition, our recent analysis of 254 contract items disclosed that (1) unit costs were higher after privatization for 201, or about 79 percent, of the items and (2) overall, there was a net cost increase of $6.01 million for the 254 items. Further, although the Air Force is projecting annual savings of $5 million for the last 4 years of the 5-year contract, we found that the Air Force did not include all relevant costs in its analysis. For example, our analysis showed that the Air Force's estimated prices for eight contract items did not include such items as material costs totaling $15 million.

We also reported on the potential impact of privatizing the San Antonio Air Logistics Center's engine workload in place rather than transferring the work to the Oklahoma City Air Logistics Center. Specifically, we reported that consolidating San Antonio's engine workload with Oklahoma City's engine workload would reduce Oklahoma City's overhead rate for engine work by as much as $10 an hour and would result in an estimated annual savings of $76 million.
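The Newark unit-cost finding and the Oklahoma City overhead figure lend themselves to a quick arithmetic check. The implied-workload figure below rests on the simplifying assumption that the $10-per-hour overhead reduction accounts for all of the $76 million in annual savings, which the underlying analysis does not state.

```python
# Newark (Aerospace Guidance and Metrology Center): share of contract
# items whose unit costs rose after privatization.
items = 254
higher_after_privatization = 201
share = higher_after_privatization / items  # ≈ 0.79, i.e., about 79 percent

# Oklahoma City consolidation: if the $76 million annual savings came
# entirely from the $10-per-hour overhead reduction (a simplifying
# assumption, not a reported figure), the implied engine workload would be:
implied_hours = 76e6 / 10  # 7.6 million direct labor hours
```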
As requested by Chairman Spence, we are conducting a more thorough review of the Department's privatization-in-place initiatives, particularly those underway at San Antonio and Sacramento. Our preliminary observations on these initiatives follow.

The BRAC Commission's July 1995 report to the President noted that the decision to close the Sacramento and San Antonio Air Logistics Centers was a difficult one to make, but was necessary given the Air Force's significant excess depot capacity and limited defense resources. The Commission report also concluded that these actions should save about $151.3 million over the 6-year implementation period and $3.5 billion over 20 years. Since this announcement, DOD has moved forward with its privatization efforts at these locations, including the announcement that contracts for five prototype workloads are to be awarded by the close of 1997.

When the President forwarded the BRAC Commission recommendations to the Congress, he stated that his intent was to privatize the work in place or in the local communities in order to (1) avoid the immediate costs and disruption in readiness that would result from the relocation of the centers' missions, (2) mitigate the impact on the local communities, and (3) preserve important defense work forces. The administration also decided to delay the centers' closures until the year 2001 to further mitigate the adverse impact on the local communities.

Our analysis indicates that delaying the centers' closures until 2001 could increase net costs during the 6-year BRAC implementation period by hundreds of millions of dollars, primarily because it would limit the Air Force's ability to achieve recurring savings to offset expected closure costs. Additionally, although the closures' potential impact on local communities and readiness is a valid concern, actions can be taken to limit the impact.
For example, the Sacramento community's successful conversion of the Sacramento Army Depot to private use has demonstrated that this conversion, although difficult, can be accomplished. Further, according to Navy depot maintenance officials, on-going efforts to quickly close three aviation depots have had no significant impact on readiness.

Our preliminary analysis also indicates that privatizing the two centers' depot maintenance workloads in place is likely to be a more costly alternative than transferring the workloads to the three remaining centers. One reason for this is that there are substantial costs associated with privatization-in-place that do not apply to DOD maintenance depots. For example, our analysis indicates that unique requirements such as the cost of proprietary data rights, contractor profits, and contractor oversight could add 20 percent, or more, to the cost of performing the work. Further, the cost plus contract that will likely be used is not conducive to generating significant private sector economies, a situation already unfolding at the Aerospace Guidance and Metrology Center.

More significantly, our analysis indicates that privatization-in-place eliminates the opportunity to consolidate workloads at the remaining centers and to, thereby, achieve substantial "economy of scale" savings and other efficiencies. The Air Force's five air logistics centers currently have approximately 57.3 million direct labor hours of depot maintenance capacity to accomplish about 29.3 million hours of workload (projected fiscal year 1999)—leaving a projected excess capacity of 49 percent in 1999. The BRAC decision to close the San Antonio and Sacramento Air Logistics Centers provides the Air Force the opportunity to redistribute workload to the remaining three air logistics centers, thereby reducing excess capacity within its depot system to about 8 percent.
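The excess-capacity figures above follow directly from the capacity and workload hours. The implied capacity of the three remaining centers is backed out here as an illustration only; it is an inference from the reported 8-percent figure, not a number from the Air Force's analysis.

```python
# Projected FY 1999 figures for the Air Force's five air logistics centers
capacity_hours = 57.3e6   # direct labor hours of depot maintenance capacity
workload_hours = 29.3e6   # direct labor hours of workload

# Excess capacity before consolidation
excess_before = (capacity_hours - workload_hours) / capacity_hours  # ≈ 0.49

# Capacity the three remaining centers would need for the reported
# 8-percent post-consolidation excess (an inference, not a reported figure)
remaining_capacity = workload_hours / (1 - 0.08)  # ≈ 31.8 million hours
```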
Our analysis indicates that redistributing 8.2 million hours of work from Sacramento and San Antonio to the remaining centers would allow the Air Force to achieve annual savings of as much as $182 million. According to financial management officials at the receiving air logistics centers, one-time workload transition costs of about $475 million would be required to absorb the additional workloads, indicating that net savings would occur within 2-1/2 years of the transition’s completion. On the other hand, if the remaining centers do not receive additional workload, they will continue to operate with significant excess capacity, becoming increasingly inefficient and expensive as their workloads continue to dwindle due to downsizing and privatization initiatives.

Finally, various statutory restrictions may affect the extent to which depot-level workloads can be converted to private-sector performance—through privatization-in-place or otherwise—including 10 U.S.C. 2464, 10 U.S.C. 2466, and 10 U.S.C. 2469. While each of these statutes has some impact on the allocation of DOD’s depot-level workload, 10 U.S.C. 2469 constitutes the primary impediment to privatization in the absence of a public-private competition. The competition requirements of 10 U.S.C. 2469 apply broadly to all changes to depot-level workloads valued at $3 million or more currently performed at DOD installations, including Kelly and McClellan. The statute does not provide any exemptions from its competition requirements and, unlike most of the other laws governing depot maintenance, does not contain a waiver provision. Further, there is nothing in the Defense Base Closure and Realignment Act of 1990—the authority for the BRAC recommendations—that, in our view, would permit the implementation of a recommendation involving privatization outside of the competition requirements of 10 U.S.C. 2469.
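Returning to the workload-transfer economics discussed above, the break-even point implied by the cited figures ($475 million in one-time transition costs against annual savings of as much as $182 million) can be sanity-checked with a simple ratio:

```python
# Break-even sketch for consolidating the Sacramento and San Antonio
# workloads at the remaining centers, using the figures in the testimony.
one_time_transition_cost = 475.0  # $ millions, per receiving-center officials
annual_savings = 182.0            # $ millions, upper-bound estimate

payback_years = one_time_transition_cost / annual_savings
print(f"Break-even after about {payback_years:.1f} years")
```

The ratio works out to roughly 2.6 years, consistent with the testimony’s estimate of net savings within roughly 2-1/2 years of transition completion, given that the $182 million figure is an upper bound.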
The determination of whether any single conversion to private-sector performance conforms to the requirements of 10 U.S.C. 2469 depends upon the facts applicable to the particular conversion. We do not yet have DOD’s position regarding how it plans to comply with these statutory restrictions. While DOD has stated that it will structure these conversions to comply with existing statutory restrictions, details of the Department’s privatization plans for Kelly and McClellan are still evolving. Further, “in-place” privatizations at Newark, Kelly, and McClellan are now the subject of litigation. In March 1996, the American Federation of Government Employees filed a lawsuit challenging these privatization initiatives, contending that they violate the public-private competition requirements of 10 U.S.C. 2469 and other depot maintenance statutes.

While our analysis of DOD’s depot policy report continues, we believe there are several points the Congress needs to consider as it contemplates the repeal of 10 U.S.C. 2466 and 10 U.S.C. 2469—two statutes that influence the allocation of depot maintenance workload between the public and private sectors. First, the policy does not provide for participation of DOD depots in depot maintenance competitions for non-core workload, as directed by the Congress. Second, since the policy provides wide latitude during implementation, the likely outcomes of the policy change are difficult to predict. Third, cost savings are likely achievable from some depot privatization, but not in the percentages and scope predicted by the CORM. Fourth, privatization-in-place does not appear to be cost-effective, given the excess capacity in DOD’s depot maintenance system. Given these considerations, the Congress needs to assure itself that any new policy has the intended features and that a process is in place to monitor readiness, sustainability, and cost considerations.
In addition, the effective implementation of the new policy will require further downsizing of the Department’s remaining depot maintenance infrastructure and the development of more competitive private sector markets.

Thank you, Mr. Chairman; that completes my statement. I would be happy to answer questions at this time.

Pursuant to a congressional request, GAO discussed the privatization of defense depot maintenance activities.
GAO noted that: (1) the Department of Defense’s (DOD) evolving depot maintenance policy includes a public-private mix and shifts work to the private sector where feasible; (2) depot privatization could worsen excess maintenance capacity and inefficiencies if not carefully managed; (3) DOD’s policy report provides an overall framework for managing depot maintenance activities and substantial implementation flexibility, but the policy is not consistent with congressional guidance providing for public-private competition for non-core workloads; (4) privatizing depot maintenance is not likely to achieve the 20-percent savings DOD projects, since that estimate was based on commercial-type activities that more readily lent themselves to the competition that produced the reported savings; (5) most non-ship depot maintenance public-private competitions have been won by the public sector and averaged fewer than two competitors; (6) DOD’s plans to privatize in place and delay downsizing and closure of two air logistics centers will probably cost more than closing them and relocating their workloads to underutilized defense or private facilities; and (7) statutes governing competition and base closures may have to be repealed or amended before DOD can proceed with its privatization efforts.
DOD Instruction 1330.04 outlines the following roles and responsibilities regarding the Armed Forces Sports Program:

Principal Deputy Under Secretary of Defense for Personnel and Readiness: Provides guidance and oversight concerning the participation of servicemembers in Armed Forces, national, and international amateur sports competitions.

Senior Military Sports Advisor: Serves as the Service Personnel Chief who is responsible for the management and operation of the program and reports to the Principal Deputy Under Secretary of Defense for Personnel and Readiness.

Armed Forces Sports Council: Serves as the governing body of the program and is composed of the Morale, Welfare, and Recreation representatives from each service or their designated representatives.

Armed Forces Sports Council Secretariat: Serves as the executive office for the council and as the U.S. liaison to the International Military Sports Council.

Armed Forces Sports Council Working Group: Serves as the staffing body of the Armed Forces Sports Council and is composed of Morale, Welfare, and Recreation representatives from each service.

Secretaries of the Military Departments: Develop sports programs based on specific needs and mission requirements that provide the opportunity for servicemembers to prepare for and compete in national and international amateur sports competitions on a voluntary basis.

According to Sports Council Secretariat officials and the policies for managing servicemembers’ participation in national and international amateur sports competitions, the Sports Council Secretariat and the service sports offices each have responsibilities for managing the Armed Forces Sports Program. Table 1 further describes the responsibilities of the Armed Forces Sports Council Secretariat and the service sports offices for the Armed Forces Sports Program.
The number of staff members working in the Armed Forces Sports Council Secretariat and the service sports offices, and the percentage of time those staff members spend working for the Armed Forces Sports Program, vary. For example, the Navy Sports Office has two staff members who work on the program nearly full time, while the Army Sports Office has four staff members who work on the program part time. In addition, the staff members working for the Armed Forces Sports Program include both civilians and active-duty servicemembers. Table 2 provides further details on the number of staff members and the estimated percentage of time they spend working for the Armed Forces Sports Program.

DOD has data on participation in and costs of the Armed Forces Sports Program, but it has not taken steps, including developing performance measures and clarifying roles and responsibilities, that are needed to help ensure that the program is implemented effectively. Sports Council Secretariat officials provided us with data for fiscal years 2012-2016 on servicemember participation in the program, including on the number of days servicemembers are away from their units while participating in the program and on civilians supporting the program, as well as data for fiscal years 2014-2016 on program costs. In analyzing the number of servicemembers participating in the program, we found that servicemember participation declined from 968 servicemembers in fiscal year 2012 to 848 servicemembers in fiscal year 2016. Table 3 provides further details about the number of servicemembers who participated in or supported the Armed Forces Sports Program in fiscal years 2012-2016. We also found that servicemember participation ranged from an average of 6.8 days per event in fiscal year 2013 to 13.2 days per event in fiscal year 2016.
Sports Council Secretariat and service officials stated that the servicemembers who participate in the program are in peak physical shape and that they were unaware of any additional recovery time that a participant has needed after competing. Table 4 breaks out these data for each year from fiscal years 2012 through 2016. According to officials, DOD civilians provide various types of support to the Armed Forces Sports Program and may include employees who work for the program on a full- or part-time basis, as well as those who serve in a volunteer capacity. Civilians who support the program as volunteers may serve in a variety of roles, such as coaches or staff, including athletic trainers, service representatives, or medical staff. Table 5 provides further details on the number of civilians who supported the Armed Forces Sports Program in fiscal years 2012 through 2016. Sports Council Secretariat officials stated that the program covers the costs of servicemembers’ participation and that units do not have to provide any funding. Program costs ranged from about $2.1 million to about $2.8 million from fiscal years 2014 through 2016. Table 6 provides additional details about these costs.

Armed Forces Sports Championships are hosted by one of the services and must include at least three of the services in competition for all team sports and most individual sports. Higher level competitions are attended by the most competent athletes from the Armed Forces Sports Championships or athletes selected based on other qualifying events or criteria and may include U.S. national, International Military Sports Council, or other international events. In table 7, we break out the costs for participation in events from table 6 associated with Armed Forces Sports Championships and higher level competitions for fiscal years 2014 through 2016.
While DOD has data on program participation and cost, these data are outputs and not outcomes and therefore do not exhibit important attributes of successful performance measures that are necessary to demonstrate that the Armed Forces Sports Program is being implemented effectively. Federal internal control standards state, among other things, that managers should establish activities to monitor performance measures. Furthermore, our prior work on performance measurement identified ten key attributes of performance measures, such as clarity, objectivity, having a measurable target, and having baseline and trend data in order to identify, monitor, and report changes in performance and to help ensure that performance is viewed in context. Table 8 identifies each attribute of effective performance measures along with its definition. Sports Council Secretariat officials stated that they use data on the number of servicemembers and services annually participating in each sport and competition to measure the performance and effectiveness of the Armed Forces Sports Program. While these data provide important context about the program’s size and reach, they are outputs and do not constitute performance measures because they do not exhibit several of the key attributes previously discussed. First, we found that the Sports Council Secretariat’s use of participation data does not exhibit the attribute of linkage in that there is not clear alignment between the number of participants and how it affects the program’s ability to achieve its goals and mission. 
For example, while DOD Instruction 1330.04 does not specify goals or a mission, the Armed Forces Sports Council’s standard operating procedures identify five objectives for the program: (1) promote goodwill among the Armed Services through sports, (2) promote a positive image of the Armed Forces through sports, (3) provide the incentive and encourage physical fitness by promoting a highly competitive sports program, (4) provide a venue for military athletes to participate in national and international competitions, and (5) engage in valuable military-to-military opportunities with International Military Sports Council member nations through sports. However, Sports Council Secretariat officials have not established a link between the participant data that they stated are used to measure program performance and the achievement of these objectives. Further, our prior work has shown that linkages between goals and measures are most effective when they are clearly communicated and create a “line of sight” so that everyone understands what an organization is trying to achieve and the goals it seeks to reach.

During meetings with the Sports Council Secretariat, officials stated that they use data, such as servicemember participation in the Armed Forces Sports Championships, International Military Sports Council Championships, U.S. Nationals, and the Olympic and Paralympic Games, to measure the performance and effectiveness of the Armed Forces Sports Program, and that they have created performance measures on an as-needed basis when it has been necessary to prioritize the allocation of funds for individual sports.
However, none of the documents we were provided on the program identify participation or any other data as a performance measure, and these efforts do not exhibit a deliberate, four-stage performance measurement process that involves (1) identifying goals, (2) developing performance measures, (3) collecting data, and (4) analyzing data and reporting results. Further, servicemember participation in the Olympic and Paralympic Games is not a valid performance measure because, according to officials from the Office of the Secretary of Defense, the Sports Council Secretariat, and the services, the Armed Forces Sports Program does not have responsibility for these games. Second, participation data do not exhibit the measurable target attribute because they represent a summary of the program’s activity and are not associated with numerical goals, which are needed to gauge program progress and results. Our prior work has shown that numerical targets or other measurable values facilitate future assessments of whether overall goals and objectives are achieved because comparisons can be easily made between projected performance and actual results. While the Sports Council Secretariat’s data included the “actual” number of program participants, they did not identify projected performance targets that would enable program officials to determine how far they have progressed toward a desired outcome or end state. In response to our analysis, Sports Council Secretariat officials stated that they consider the list of 24 sports and total number of competitions that servicemembers may participate in to be the target—the attainment of which is based on variables such as available funding and the extent to which each service agrees to provide teams to participate in the competitions. 
However, this is not a valid demonstration of this attribute because neither the target in this sense nor the variables affecting participation (e.g., funding and service branch involvement) demonstrate how well the Armed Forces Sports Program performs or carries out its mission. In addition, officials from the Sports Council Secretariat and the services stated that the program directly benefits the services’ readiness, recruitment, and retention efforts. Specifically, officials cited the program’s emphasis on a higher level of physical fitness than is otherwise required by the services as contributing to individual servicemember readiness, and involvement in national and international sports championships as aiding recruiting efforts because it showcases some of the unique opportunities open to those in the services. Further, officials stated that the opportunity to participate in higher level competitions through the program helps retention because it provides an incentive for some servicemembers to stay in the services. However, outside of participation and cost data and some anecdotal examples, officials did not have specific measures for or data on the Armed Forces Sports Program’s contribution to the services’ readiness, recruiting, and retention efforts. Third, while DOD has program participation data, it does not track baseline and trend data in order to assess the program’s performance and progress over time. Our prior work has demonstrated that by tracking and developing a performance baseline for all measures—including those that demonstrate the effectiveness of a program—agencies can better evaluate progress made and whether or not goals are being achieved. Further, identifying and reporting deviations from the baseline as a program proceeds provides valuable information for oversight by identifying areas of program risk and their causes for decision makers. 
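As a purely illustrative sketch of the “measurable target” and “baseline” attributes discussed above — not a DOD method — a performance measure would compare actual participation against both an established baseline and a numerical target. The target value below is hypothetical; only the fiscal year 2012 and 2016 participation counts come from the report’s data.

```python
# Illustration of the 'measurable target' and 'baseline' attributes.
# Baseline and actual values are the participation counts reported for
# FY 2012 and FY 2016; the target is an invented placeholder number.
baseline_fy2012 = 968    # reported participants, fiscal year 2012
actual_fy2016 = 848      # reported participants, fiscal year 2016
target_fy2016 = 1000     # hypothetical numerical target

change_vs_baseline = (actual_fy2016 - baseline_fy2012) / baseline_fy2012
shortfall_vs_target = target_fy2016 - actual_fy2016
print(f"Change from baseline: {change_vs_baseline:.1%}")  # ~-12.4%
print(f"Shortfall vs. target: {shortfall_vs_target}")     # 152
```

With a stated target and baseline in place, deviations like these become the kind of trend data GAO’s prior work describes as necessary to assess progress over time; without them, the raw counts remain outputs rather than measures.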
According to Sports Council Secretariat officials, many of the program’s benefits—such as helping with readiness, recruitment, and retention—are not measured, and commanding officers are responsible for determining and managing the program’s effect on the readiness of their units. Thus, given the relatively small number of program participants and the fact that participation is contingent on obtaining commanding officer approval, Sports Council Secretariat officials stated that they do not believe that the services’ readiness is negatively affected by servicemembers participating in the Armed Forces Sports Program. We acknowledge that the measurement of the program’s performance may be difficult, but DOD’s participation data do not include targets allowing program performance to be measured and do not assess the intended benefits of the program. Without effective performance measures that demonstrate linkage with the program’s goals or mission, have measurable targets, and include an established baseline of data, DOD will be unable to effectively demonstrate the benefits of the program and will not have the information needed to ensure that the department is allocating resources to its highest priority efforts.

The roles and responsibilities that are currently being implemented for the Armed Forces Sports Program differ from the program’s roles and responsibilities specified in DOD policy. DOD Instruction 1330.04 and the Armed Forces Sports Council’s standard operating procedures specify that the Armed Forces Sports Program includes training or national qualifying events in preparation for participation in International Military Sports Council events, the Pan American Games, the Olympic Games, the Paralympic Games, and other international competitions.
While this is how the program is defined in key program documents, officials from the Office of the Secretary of Defense, the Sports Council Secretariat, and the services stated that all responsibilities, including costs, associated with servicemember participation in the Pan American, Olympic, and Paralympic Games are, in practice, handled by the services. According to these officials, the program’s primary objective when it was established was to support the Olympic movement by providing servicemembers the opportunity to compete in the 1948 London Olympic Games. Further, DOD Instruction 1330.04 specifies that the Armed Forces Sports Program includes, among other things, training or national qualifying events in preparation for participation in the Pan American Games, the Olympic Games, and the Paralympic Games. However, officials stated that over time, the services assumed responsibility for their respective servicemembers’ participation in those games. Officials from the Office of the Secretary of Defense and the Sports Council Secretariat stated that they plan to review DOD Instruction 1330.04 and make necessary updates but did not indicate what specific changes would be made to clarify the program’s roles and responsibilities. Further, these officials stated that they were not sure whether they would remove the Pan American, Olympic, and Paralympic Games from the Armed Forces Sports Council’s standard operating procedures because of the potential for responsibilities to shift again in the future.

The Armed Forces Sports Program provides a means by which servicemember athletes can participate in national and international competitions while representing the Armed Forces. However, the program currently does not have performance measures with linkage, measurable targets, or a baseline.
Without measures that address the desired outcomes and include these attributes, it will be difficult for DOD and Congress to determine whether the program is meeting its desired goals or benefiting readiness, recruitment, and retention.

To improve the management of the Armed Forces Sports Program and better determine whether the program is achieving its desired results, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop and implement performance measures for the Armed Forces Sports Program that measure the desired outcomes for the program and, at a minimum, demonstrate linkage to the program’s goals or mission, have a measurable target, and include a baseline that can be used to demonstrate program performance.

We provided a draft of this report to DOD and the Department of Homeland Security (DHS) for review and comment. In its comments on the draft, DOD concurred with our recommendation; its comments are reprinted in their entirety in appendix II. DOD and DHS also provided technical comments, which we incorporated into the report as appropriate. While concurring with our recommendation, DOD noted potential limitations on establishing measures. Specifically, DOD said that it will explore the development and implementation of performance outcome measures for the Armed Forces Sports Program and that it will review DOD Instruction 1330.04 for potential opportunities to incorporate appropriate guidance regarding performance measures for the program.
However, DOD stated that there are limitations on establishing metrics for several of the program’s objectives, such as goodwill and positive image, which are challenging to measure. Further, DOD said that quantifying outcomes for some objectives, such as the “spirit” of the program, also will be challenging, but that the lack of a performance measurement does not negate the importance of pursuing objectives that contribute to demonstrating the program’s overall effectiveness. In our report, we acknowledge that measuring the program’s performance may be difficult, but it is necessary to produce the evidence-based support needed to objectively demonstrate how the specific activities that make up a program contribute to its effectiveness. Exploring the development and implementation of performance measures and reviewing DOD guidance regarding performance measures are positive steps, but we continue to believe that DOD needs to develop and implement performance measures in order to demonstrate whether the Armed Forces Sports Program is being implemented effectively. While it may be challenging to develop performance measures, our prior work has demonstrated that even for highly complex areas, such as DOD’s reform of its military health system and prevention of sexual assault, performance measures can be developed and implemented and, if implemented correctly, can enhance decision-making. Until DOD develops and implements performance measures, it will be unable to effectively demonstrate the benefits of the program and will not have the data needed to monitor the program, make decisions about program management, and ensure that the department is allocating resources to its highest priority efforts.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of Homeland Security; the Secretaries of the Army, the Navy, and the Air Force; the Commandants of the Marine Corps and the Coast Guard; and the Under Secretary of Defense for Personnel and Readiness. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

To assess the effectiveness of the Department of Defense’s (DOD) implementation of the Armed Forces Sports Program, we reviewed DOD and service (including the Coast Guard) policies and procedures related to the administration of and participation in the program. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Armed Forces Sports Council Secretariat (“Sports Council Secretariat”), and each service about these policies and procedures. We also discussed the extent to which any performance measures had been established to assess the program’s effectiveness, including any effects of program participation on the services’ readiness. We obtained and analyzed data from DOD on the number of active-duty servicemembers, by service, who had participated in the Armed Forces Sports Program in fiscal years 2012 through 2016, as well as on the number of days servicemembers had spent away from their respective units participating in the program during the same time frame. We also obtained and analyzed data from DOD on the number of DOD and Coast Guard civilians who had supported the Armed Forces Sports Program in fiscal years 2012 through 2016.
Further, we obtained and analyzed data from DOD on program costs for fiscal years 2014 through 2016, including the administrative, travel, and salary costs incurred by the Armed Forces Sports Council Secretariat; program-related travel and salary costs for each service; and the participation costs of traveling participants, which according to program officials include transportation and lodging costs. The time frames of the participant and cost data that we obtained differ because DOD officials stated that fiscal year 2014 was the most recent year for which cost data were available from all the services. Based on responses from the Armed Forces Sports Program office to data reliability questionnaires, we determined that the data we obtained were sufficiently reliable for the purposes of this review. We compared DOD’s policy for the program against the federal standards for internal control, which state, among other things, that managers should establish activities to monitor performance measures. Additionally, we compared DOD’s participant data—the department’s measure for demonstrating the effectiveness of the Armed Forces Sports Program—with our prior work on performance measurement to determine the extent to which these data exhibit the ten key attributes of successful performance measures. To obtain servicemembers’ perspectives on the Armed Forces Sports Program and its effect on individual readiness, we interviewed 13 randomly selected servicemembers who had participated in the program in calendar year 2015 because, at that time, this was the most recent year for which the program had a complete set of participant data. To understand any effect that a servicemember’s participation may have had on unit readiness, we also interviewed 10 commanding officers who had approved one of the randomly selected servicemembers’ requests to participate in the Armed Forces Sports Program.
While the information that we obtained was nongeneralizable, it provided perspectives from individuals with first-hand experience with the Armed Forces Sports Program. We also reviewed DOD and service policies and procedures to identify roles and responsibilities associated with implementing the Armed Forces Sports Program. Further, we interviewed officials within each organization to discuss how designated roles and responsibilities were being implemented. We conducted this performance audit from August 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kimberly A. Mayo, Assistant Director; Christopher H. Conrad; Mae Frances Jones; Stephanie Moriarty; Shahrzad Nikoo; Shane T. Spencer; Andrew Stavisky; and John W. Van Schaik made key contributions to this report.

For nearly a century, the U.S. Armed Forces (i.e., the Army, the Navy, the Marine Corps, the Air Force, and the Coast Guard) have organized and participated in international and national sporting competitions, in part because of the intended benefits for servicemember morale and the unique opportunity that participation provides to foster diplomatic relations. House Report 114-537, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to review the Armed Forces Sports Program and its impact on the military services’ readiness. This report assesses the effectiveness of DOD’s implementation of the Armed Forces Sports Program.
GAO analyzed participation data for fiscal years 2012 through 2016 and cost data for fiscal years 2014 through 2016, compared DOD data with attributes of successful performance measures, compared roles and responsibilities specified in policy with those being implemented, and interviewed DOD officials.

The Department of Defense (DOD) has data on participation in and costs of the Armed Forces Sports Program, but has not taken steps, including developing performance measures and clarifying roles and responsibilities, that are needed to help ensure the program is implemented effectively. DOD officials stated that they use sport and competition participation data to measure the performance and effectiveness of the program. According to these data, servicemember participation declined from 968 servicemembers in fiscal year 2012 to 848 servicemembers in fiscal year 2016, and program costs ranged from about $2.1 million to about $2.8 million in fiscal years 2014 through 2016. While these data provide important context about the program's size and reach, they do not exhibit several key attributes, such as linkage, a measurable target, and baseline and trend data, that GAO has found are key to successfully measuring a program's performance. First, these data do not exhibit linkage because no relationship has been established to show how the number of servicemember participants contributes to achievement of the program's objectives, such as promoting goodwill among and a positive image of the U.S. Armed Forces through sports. Second, these data were not associated with a measurable target that would enable program officials to determine how far the program has progressed toward a desired outcome or end state. Third, DOD does not track baseline and trend data for measures that are able to assess the program's performance and progress over time.
Without performance measures that demonstrate these attributes, DOD will be unable to effectively demonstrate that it is achieving the intended benefits of the program, such as improving readiness, recruitment, and retention as well as promoting the goodwill of the U.S. Armed Forces. Officials cited the program as aiding recruiting because it showcased unique opportunities open to those in the U.S. Armed Forces. However, outside of participation and cost data and some anecdotal examples, officials did not have specific measures for or data on the Armed Forces Sports Program's contribution to the services' readiness, recruiting, and retention efforts. The roles and responsibilities that are currently being implemented for the program differ from the program's roles and responsibilities specified in DOD policy. DOD Instruction 1330.04 specifies that the program includes training or national qualifying events in preparation for participation in International Military Sports Council events, the Pan American Games, the Olympic Games, the Paralympic Games, and other international competitions. While this is how the program is defined in key program documents, DOD officials stated that all responsibilities, including costs, associated with servicemember participation in the Pan American, Olympic, and Paralympic Games are handled by the services. DOD officials stated that they plan to review DOD Instruction 1330.04 and make necessary updates, but have not yet determined what specific changes would be made to clarify the program's roles and responsibilities. GAO recommends that DOD develop and implement performance measures for the Armed Forces Sports Program that, at a minimum, demonstrate linkage to the program's goals or mission, have a measurable target, and include a baseline that can be used to demonstrate program performance. DOD concurred with the recommendation, noting potential limitations on establishing measures. 
GAO acknowledges these limitations, but continues to believe that measures are important to evaluating the program's effectiveness. |
SBA depends on its IT environment to support the management of its programs. This environment includes 42 mission-critical systems running on legacy mainframes and minicomputers. Ten of these systems support administrative activities; the remaining 32 support loan activities, including loan accounting and collection, loan origination and disbursement, and loan servicing and debt collection. According to SBA’s self-assessment of its IT environment, the legacy systems are not effectively integrated and thus provide limited information sharing. The assessment also showed that SBA cannot depend on the systems to provide consistent information. Because of these problems, it has embarked on an agencywide systems modernization initiative to replace its outmoded legacy systems. Our May report presented the results of our evaluation of SBA’s management of IT in the areas of investment management, architecture, software development and acquisition, information security, and human capital. These five areas encompass major IT functions and are widely recognized as having substantial influence over the effectiveness of operations. In the figures referenced below, a blank circle indicates that policies and procedures do not exist or are substantially obsolete or incomplete, and that practices for planning, monitoring, and evaluation are predominantly ad hoc or not performed; a half circle indicates that policies and procedures are predominantly current and facilitate key functions, and that selected key practices for planning, monitoring, and evaluation have been implemented; and a solid circle indicates that policies and procedures are current and comprehensive for key functions, and that practices for planning, monitoring, and evaluation adhere to policies, procedures, and generally accepted standards. Properly implemented, IT investment management is an integrated approach that provides for the life-cycle management of IT projects. This investment process requires three essential phases: selection, control, and evaluation. 
In the selection phase, the organization determines priorities and makes decisions about which projects will be funded based on their technical soundness, contribution to mission needs, performance improvement priorities, and overall IT funding levels. In the control phase, all projects are consistently controlled and managed. The evaluation phase compares actual performance against estimates to identify and assess areas in which future decision-making can be improved. Our assessments of SBA’s investment management processes disclosed that policies and procedures were substantially incomplete; and practices were predominantly ad hoc or not performed for most of the critical activities, as shown in figure 1. SBA had made progress in establishing an investment review board and is beginning to define an investment selection process. However, it had not yet established IT investment management policies and procedures to help identify and select projects that will provide mission-focused benefits and maximum risk-adjusted returns. Likewise, SBA had not yet defined processes for investment control and evaluation to ensure that selected IT projects will be developed on time, within budget, and according to requirements, and that these projects will generate expected benefits. The agency had performed only limited reviews of major IT investments, and these reviews were ad hoc since little data had been captured for analyzing benefits and returns on investment. Without established policies and defined processes for IT investment, SBA cannot ensure that consistent selection criteria are used to compare costs and benefits across proposals, that projects are monitored and provided with adequate management oversight, or that completed projects are evaluated to determine overall organizational performance improvement. 
In addition, the agency lacks assurance that the collective results of post-implementation reviews across completed projects will be used to modify and improve investment management based on lessons learned. To address IT investment management weaknesses, SBA planned to develop and implement an investment selection process that includes screening, scoring, and ranking proposals. It also planned to use its target architecture to guide IT investments. In addition, SBA planned to develop and implement an investment control process to oversee and control projects on a quarterly basis. As part of investment control, SBA intended to collect additional data from all investment projects and compare actual data with estimates in order to assess project performance. SBA’s plans indicate a strong commitment to making improvements in this area; however, to establish robust IT investment management processes, additional actions are needed. Accordingly, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that (1) IT projects are selected that result in mission-focused benefits, maximizing risk-adjusted return-on-investment; (2) projects are controlled to determine if they are being developed on time, within budget, and according to requirements; and (3) projects are evaluated to ascertain whether completed projects are generating expected benefits. An IT architecture is a blueprint—consisting of logical and technical components—to guide the development and evolution of a collection of related systems. At the logical level, the architecture provides a high-level description of an organization’s mission, the business functions being performed and the relationships among the functions, the information needed to perform the functions, and the flow of information among functions. 
At the technical level, it provides the rules and standards needed to ensure that interrelated systems are built to be interoperable and maintainable. Our assessments of SBA’s information architecture disclosed that SBA had drafted policies and procedures for key activity areas except for change management, and had drafted architecture components except for change management, as reflected in figure 2. SBA had made progress with its target IT architecture by describing its core business processes, analyzing information used in its business processes, describing data maintenance and data usage, identifying standards that support information transfer and processing, and establishing guidelines for migrating current applications to the planned environment. However, procedures did not exist for change management to ensure that new systems installations and software changes would be compatible with other systems and SBA’s planned operating environment. Without established policies and systematic processes for IT architecture activities, SBA cannot ensure that it will develop and maintain an information architecture that will effectively guide efforts to migrate systems and make them interoperable to meet current and future information processing needs. To address IT architecture weaknesses, SBA planned to establish a change management process for architecture maintenance, to ensure that new systems installations and software changes will be compatible with other systems and with SBA’s planned operating environment. In addition, it planned to incorporate in the target architecture specific security standards for hardware, software, and communications. 
To ensure that these planned improvements are completed and sound practices institutionalized, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that (1) the architecture is developed using a systematic process so that it meets the agency’s current and future needs and (2) the architecture is maintained so that new systems and software changes are compatible with other systems and SBA’s planned operating environment. To provide the software needed to support mission operations, an organization can develop software using its staff or acquire software products and services through contractors. Key processes for software development include requirements management, project planning, project tracking and oversight, quality assurance, and configuration management. Additional key processes needed for software acquisition include acquisition planning, solicitation, contract tracking and oversight, product evaluation, and transition to support. Our assessment of SBA’s software development and acquisition processes disclosed that SBA had not established policies, its procedures were obsolete, and its practices were predominantly ad hoc for one or more critical activities, as shown in figure 3. SBA lacked policies for software development and acquisition to help produce information systems within the cost, budget, and schedule goals set during the investment management process that at the same time comply with the guidance and standards of its IT architecture. SBA’s IT guidance and procedures were obsolete and thus rarely used for acquisition planning, solicitation, contract tracking and oversight, product evaluation, and transition to support. An existing systems development methodology was being adopted, however, to replace outdated guidelines that lacked key processes for software development. 
Our review of the selected software projects indicated that SBA’s practices were typically ad hoc for project planning, project tracking and oversight, quality assurance, and configuration management. Without established policies and defined processes for software development and acquisition, practices will likely remain ad hoc and not adhere to generally accepted standards. Key activities—such as requirements management, planning, configuration management, and quality assurance—will be inconsistently performed or not performed at all when project managers are faced with time constraints or limited funding. These weaknesses can delay delivery of software products and services and lead to cost overruns. To address software development and acquisition weaknesses, SBA planned to implement formal practices, such as software requirements management and configuration management, on a project basis before establishing them agencywide. Specifically, SBA had selected the Loan Monitoring System (LMS) project as a starting point for identifying, developing, and implementing a new systems development methodology and associated policies, procedures, and practices. LMS therefore will serve as a model for future systems development projects. While SBA’s plan is a good first step, additional measures need to be taken to ensure agencywide improvements. To establish sound IT software development and acquisition processes, we recommended that the SBA Administrator direct the chief information officer to complete the systems development methodology and develop a plan to institutionalize and enforce its use; and develop a mechanism to enforce the use of newly established policies in areas including but not limited to requirements management, project planning/tracking/oversight, quality assurance, configuration management, solicitation, contract oversight, and product evaluation. 
Information security policies address the need to protect an organization’s computer-supported resources and assets. Such protection ensures the integrity, appropriate confidentiality, and availability of an organization’s data and systems. Key information security activities include risk assessment, awareness, controls, evaluation, and central management. Risk assessments consist of identifying threats and vulnerabilities to information assets and operational capabilities, ranking risk exposures, and identifying cost-effective controls. Awareness involves promoting knowledge of security risks and educating users about security policies, procedures, and responsibilities. Evaluation addresses monitoring the effectiveness of controls and awareness activities through periodic evaluations. Central management involves coordinating security activities through a centralized group. Our assessments of information security at SBA disclosed that policies and procedures did not exist for risk assessments and were in draft form for other key activities; and that practices were not performed for one critical activity, as shown in figure 4. SBA had not conducted periodic risk assessments for its mission-critical systems; the agency had only recently conducted a security workload assessment and a risk assessment for one system. Training and education had not been provided to promote security awareness and responsibilities of employees and contract staff. Further, security management responsibilities were fragmented among all of SBA’s field and program offices. SBA’s computer security procedures for systems certification and accreditation were in draft form. Without security policies, SBA faces increased risk that critical information and assets may not be protected from inappropriate use, alteration, or disclosure. 
Without defined procedures, practices are likely to be inconsistent for such activities as periodic risk assessments, awareness training, implementation and effectiveness of controls, and evaluation of policy compliance. To address information security weaknesses, SBA has hired additional staff to develop procedures to implement computer security policies and to manage computer accounts and user passwords. These staff are also responsible for performing systems security certification reviews of new and existing IT systems. In addition, SBA planned to finish development and testing of a comprehensive disaster recovery and business continuity plan. To build on the actions taken and planned by SBA and ensure that a comprehensive, effective security program is established, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that periodic risk assessments are conducted to determine and rank risk exposures; an effective security awareness program is implemented; policies and procedures are updated, with new controls implemented to address newly discovered threats; the development and testing of SBA’s comprehensive disaster recovery and business continuity plan is completed, then periodically tested and updated; security evaluations are conducted to ascertain whether protocols in place are sufficient to guard against identified vulnerabilities, and if not, remedial action taken as needed; and a centralized mechanism is developed to monitor and enforce compliance by employees, contract personnel, and program offices. The concept of human capital centers on viewing people as assets whose value to an organization can be enhanced through investment. 
To maintain and enhance the capabilities of IT staff, an agency should conduct four basic activities: (1) assess the knowledge and skills needed to effectively perform IT operations to support the agency’s mission and goals; (2) inventory the knowledge and skills of current IT staff to identify gaps in needed capabilities; (3) develop strategies and implementation plans for hiring, training, and professional development to fill the gap between requirements and current staffing; and (4) evaluate progress made in improving IT human capital capability, using the results of these evaluations to continuously improve the organization’s human capital strategies. Our assessments of SBA’s human capital processes disclosed that policies and procedures did not exist and that SBA was not performing critical activities, as shown in figure 5. SBA had not established policies and procedures to identify and address its short- and long-term requirements for IT knowledge and skills. Similarly, it had not conducted an agencywide assessment to determine gaps in IT knowledge and skills in order to develop workforce strategies and implementation plans. Further, SBA had not evaluated its progress in improving IT human capital capabilities or used data to continuously improve human capital strategies. Without established policies and procedures for human capital management, SBA lacks assurance that it is adequately identifying the IT knowledge and skills it needs to support its mission, is developing appropriate workforce strategies, or is effectively planning to hire and train staff to efficiently perform IT operations. To address IT human capital management weaknesses, SBA planned to conduct a comprehensive assessment of training needs with a special emphasis on the needs of its IT staff. The survey is scheduled for fiscal year 2001 and will be conducted at both headquarters and SBA field offices. 
While SBA’s planned assessment should be useful, a more comprehensive program is needed to ensure that it hires, develops, and retains the people it needs to effectively carry out IT activities. To improve IT human capital management practices, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that SBA’s IT knowledge and skills requirements are identified; periodic IT staff assessments are performed to identify current knowledge levels; workforce strategies are developed and plans implemented to acquire and maintain the necessary IT skills to support the agency mission; and SBA’s human capital capabilities are periodically evaluated and the results used to continually improve agency strategies. In summary, for SBA to enhance its ability to carry out its mission, it will require solid IT solutions to help it identify and address operational problems. However, many of SBA’s policies and procedures for managing IT have either not been developed or were in draft form, and its practices generally did not adhere to defined processes. While the agency plans to improve its processes, additional actions are needed in each key IT process area to institutionalize agencywide industry standard and best practices for planning, monitoring, and evaluation of IT activities. SBA has agreed with all of our recommendations and has stated that efforts are underway to address them. SBA has also emphasized that it is committed to improving IT management practices. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. For information about this testimony, please contact Joel C. Willemssen at (202) 512-6253 or by e-mail at [email protected]. Individuals making key contributions to this testimony included William G. Barrick, Michael P. Fruitman, James R. Hamilton, and Anh Q. Le. 
(511850) | Pursuant to a congressional request, GAO discussed the Small Business Administration's (SBA) management of information technology (IT), focusing on five key areas: (1) investment management; (2) architecture; (3) software development and acquisition; (4) information security; and (5) human capital management. GAO noted that: (1) SBA had made progress in establishing an investment review board and is beginning to define an investment selection process; (2) however, it had not yet established IT investment management policies and procedures to help identify and select projects that will provide mission-focused benefits and maximum risk-adjusted returns; (3) likewise, SBA had not yet defined processes for investment control and evaluation to ensure that selected IT projects will be developed on time, within budget, and according to requirements, and that these projects will generate expected benefits; (4) the agency had performed only limited reviews of major IT investments, and these reviews were ad-hoc since little data had been captured for analyzing benefits and returns on investment; (5) SBA had made progress with its target IT architecture by describing its core business processes, analyzing information used in its business processes, describing data maintenance and data usage, identifying standards that support information transfer and processing, and establishing guidelines for migrating current applications to the planned environment; (6) however, procedures did not exist for change management to ensure that new systems installations and software changes would be compatible with other systems and SBA's planned operating environment; (7) SBA lacked policies for software development and acquisition to help produce information systems within the cost, budget, and schedule goals set during the investment management process that at the same time comply with the guidance and standards of its IT architecture; (8) an existing systems development 
methodology was being adopted to replace outdated guidelines that lacked key processes for software development; (9) GAO's review of the selected software projects indicated that SBA's practices were typically ad-hoc for project planning, project tracking and oversight, quality assurance, and configuration management; (10) SBA had not conducted periodic risk assessments for its mission-critical systems; (11) the agency had only recently conducted a security workload assessment and a risk assessment for one system; (12) training and education had not been provided to promote security awareness and responsibilities of employees and contract staff; (13) SBA had not established policies and procedures to identify and address its short- and long-term requirements for IT knowledge and skills; and (14) further, SBA had not evaluated its progress in improving IT human capital capabilities or used data to continuously improve human capital strategies. |
The federal government funds multiple programs that subsidize housing construction and rehabilitation, assist homebuyers and renters, and provide assistance to state and local governments through a variety of spending and loan programs, tax expenditures, regulatory requirements, and other activities aimed at promoting housing. Federal housing assistance generally was created in response to the Great Depression. However, the largest current activity (in terms of forgone revenue) associated with homeownership—the mortgage interest deduction—was introduced in 1913, when the federal income tax was enacted. Further assistance was created in the 1930s, when most rural residents worked on farms and rural areas generally were poorer than urban areas. Accordingly, Congress authorized separate housing assistance for rural areas and made USDA responsible for administering it. Specifically, in 1937 the Bankhead-Jones Farm Tenant Act authorized USDA to provide long-term, low-interest loans to farm tenants and sharecroppers so that they could purchase and repair farms, including homes on farms. The Housing Act of 1949 authorized new rural lending programs through USDA and made farm owners eligible for assistance for dwellings and other farm buildings if the farm was located on land capable of producing at least $400 worth of agricultural commodities annually. Amendments added in 1961 made nonfarm properties eligible for single-family loans and created the farm labor housing program. A 1962 amendment created the rural rental housing program. The Federal Housing Administration (FHA) began providing mortgage insurance in 1934, and the first public housing program was authorized in 1937. FHA became part of HUD when HUD was created in 1965. In 2012, HUD and Treasury administer some of the largest programs, with USDA and VA providing specific assistance to rural communities and veterans. 
In addition, the government-sponsored enterprises—Fannie Mae and Freddie Mac—have supported the mortgage market by helping to create a secondary market for mortgage loans. Financial regulators are responsible for ensuring that regulated institutions comply with consumer financial protections or otherwise serve the communities in which they operate. Federal housing assistance generally can be categorized as follows: Homeownership programs, often called single-family housing programs, provide mortgage insurance, loan guarantees, direct loans for homeowners, and grants or loans for home repairs or modifications. Rental housing programs, often called multifamily programs, provide loans, interest rate subsidies, loan guarantees, tax incentives, or a combination of these to promote the development and rehabilitation of privately owned rental properties. Rental assistance programs make rents affordable to eligible households by paying the difference between the unit’s rent and 30 percent of a household’s adjusted income. These programs include (1) tenant-based rental assistance that provides vouchers for eligible tenants to rent privately owned apartments or single-family homes and can be applied to different properties if tenants move; and (2) project-based rental assistance that is attached to specific properties and available to tenants only when they are living in units at these properties. Public housing offers units for eligible tenants in properties owned and administered by public housing authorities. Tax expenditures, such as exclusions, exemptions, deductions (including the mortgage interest deduction), credits, deferrals, and preferential rates, can promote homeownership or the development of privately owned rental housing through the federal tax code. The federal government uses varying income thresholds for different housing programs to identify target populations or set eligibility requirements. 
Although some federal housing programs do not have specific income eligibility requirements, such as VA’s Home Loan Guaranty program for veterans, many of HUD’s and USDA’s multifamily programs and Treasury’s Low-Income Housing Tax Credit (LIHTC) program have specific income eligibility requirements. The most common income thresholds used for the programs are: very low-income—no more than 50 percent of the area’s median income (AMI); low-income—no more than 80 percent of AMI; and moderate-income—no more than 115 percent of AMI. Fragmentation refers to those circumstances in which more than one federal agency (or more than one organization in an agency) is involved in the same broad area of national interest. Overlap occurs when programs have similar goals, devise similar strategies and activities to achieve those goals, or target similar users. Duplication occurs when two or more agencies or programs engage in the same activities or provide the same services to the same beneficiaries. In some instances, it may be appropriate for multiple agencies or entities to be involved in the same programmatic or policy area due to the nature or magnitude of the federal effort. Twenty different entities administered 160 programs, tax expenditures, and other tools that supported homeownership and rental housing in fiscal year 2010, reflecting the fragmentation in federal housing delivery. See e-supplement (GAO-12-555SP) for the list of programs, tax expenditures, other tools, and their related budgetary information. We identified 11 primary purposes for the activities (see fig. 1). Of the 11 purposes (categories) identified, 3 generally relate to support for homeownership (including purchasing a home), 4 to support for rental housing and tenants, and 4 to both. Within each category, multiple agencies administer programs that serve the same purpose, illustrating the fragmentation of homeownership and rental housing programs. 
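The common AMI-based income thresholds and the 30-percent-of-adjusted-income rent standard described above can be sketched as a small calculation. This is an illustrative simplification with hypothetical figures, not any agency's actual eligibility or subsidy rules, which involve income adjustments, fair-market rents, and program-specific caps:

```python
# Illustrative sketch only; actual program rules are more complex.

def income_tier(household_income: float, ami: float) -> str:
    """Classify a household against the common AMI-based thresholds."""
    ratio = household_income / ami
    if ratio <= 0.50:
        return "very low-income"
    if ratio <= 0.80:
        return "low-income"
    if ratio <= 1.15:
        return "moderate-income"
    return "above program thresholds"

def monthly_subsidy(unit_rent: float, adjusted_monthly_income: float) -> float:
    """Rental assistance as the gap between unit rent and 30% of income."""
    tenant_share = round(0.30 * adjusted_monthly_income, 2)
    return max(unit_rent - tenant_share, 0.0)

# Hypothetical household: $24,000 income in an area with $60,000 AMI,
# renting a $900/month unit on $2,000 adjusted monthly income.
print(income_tier(24_000, 60_000))      # very low-income (40 percent of AMI)
print(monthly_subsidy(900.0, 2_000.0))  # 900 - 600 = 300.0
```

The same 30-percent standard applies to both tenant-based and project-based assistance; what differs is whether the subsidy follows the tenant or stays with the property.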
The most commonly identified purpose was assistance for buying, selling, or financing a home. This category includes single-family mortgage programs that provide mortgage insurance or guarantees administered through HUD, USDA, and VA. As we have noted, the mortgage interest deduction represented the single largest activity—in terms of annual forgone revenue—associated with homeownership. In fiscal year 2010, the estimated revenue loss for the mortgage interest deduction was almost $80 billion. The most widely used housing tax expenditure, in terms of number of participants, was the deduction for state and local property taxes; nearly 40 million taxpayers claimed this deduction on their 2009 returns. In the secondary market, originators of mortgage loans package them into securities and sell the securities to investors. Ginnie Mae guarantees securities backed by federally guaranteed mortgages issued by certain mortgage lenders. In turn, these lenders—generally banks and thrifts—use the proceeds to originate additional mortgages. The federal government increased its support for homeownership in response to the current housing crisis, providing emergency assistance or other extraordinary support to the housing market and homeowners through a number of initiatives. Most of the activities we identified as emergency assistance were intended to support homeownership. For example, Treasury and the Federal Reserve System purchased mortgage-backed securities issued by Fannie Mae and Freddie Mac to help support the availability of mortgage credit for prospective homebuyers or homeowners wishing to refinance. And, HUD and Treasury administer programs that provide financing assistance to struggling homeowners—such as the Making Home Affordable program, which reduces borrowers’ monthly mortgage payments. Regulatory requirements also support homeownership by establishing standards for residential mortgage lending, among other things. 
For example, the Real Estate Settlement Procedures Act of 1974 requires lenders to disclose mortgage closing documents to homebuyers and sellers. The Consumer Financial Protection Bureau (CFPB), the federal financial regulators, and HUD are involved in the examination and enforcement of this and other regulatory requirements. The Federal Financial Institutions Examination Council, a formal interagency body, also plays a role by prescribing uniform principles, standards, and report forms for the federal examination of financial institutions. Multiple federal programs also help produce and preserve affordable rental housing. Some of HUD’s multifamily loan guarantee programs are estimated to produce more revenue than expenditures. These estimates are reviewed annually, and because the underlying loans may have terms of up to 40 years, their ultimate cost is uncertain. Further, HUD and USDA have multiple programs that support low-income households by providing assistance to rental property owners to cover all or a portion of the tenant’s rent. Finally, many activities support homeownership and rental housing both directly and indirectly. For example, HUD administers multiple block grant programs that provide state and local governments with flexible funding to address community development needs, including support of homeownership or rental housing. And one regulatory requirement—the Community Reinvestment Act (CRA)—supports the financing of homeownership and the creation of affordable rental housing, among other things. Another regulatory requirement, the Fair Housing Act, protects homebuyers and renters from discrimination. To identify housing programs that had a potential for overlap, we used findings from our prior work that examined programs that offer similar housing services to similar beneficiaries. 
These programs included selected HUD and USDA single- and multifamily programs, VA’s single-family housing loan guarantee program, and Treasury’s LIHTC. We also included Treasury’s mortgage interest and property tax deductions because they are the largest programs in terms of overall funding. We compared agency goals, products offered, geographic areas served, service delivery, and, for single-family programs, recipients’ income levels. Evidence of overlap existed across many of these dimensions for the single-family products offered by HUD, USDA, VA, and Treasury, but important differences also existed. Although selected HUD, USDA, and Treasury multifamily housing programs had overlapping purposes, the products, areas served, and delivery methods differed to varying degrees.

Seven single-family programs administered by HUD, USDA, VA, and Treasury overlap in their broad purpose of supporting homeownership, but only HUD has an explicit housing priority and strategic goal (see table 1). Federal agencies outline long-term goals and objectives in their strategic plans and annual goals in their performance plans. In addition, in the fiscal year 2013 President’s Budget, agencies identified a limited number of 2-year agency priority goals that align with the long-term goals and objectives in their strategic plans. Agency priority goals target areas in which agencies want to achieve near-term performance through focused attention of senior leadership. HUD included the prevention of foreclosures as a priority goal. As of April 2012, USDA, VA, and Treasury had not highlighted homeownership among their agency priority goals. Under its strategic goal to strengthen the nation’s housing market, HUD uses its single-family guaranteed loan program to meet its subgoal of creating financially sustainable homeownership opportunities.
Under its broad strategic goal of assisting rural communities, USDA uses its single-family loan and grant programs to increase the number of homeownership opportunities available in rural areas. VA’s guarantee for home mortgages is one among many entitlements that veterans earn. The program falls under VA’s broad strategic goal of improving the quality and accessibility of health care, benefits, and memorial services while optimizing value. Finally, although the mortgage interest and property tax deductions are the two tax expenditures most widely used by homeowners, Treasury does not have stated goals for these, or most other, tax expenditures. However, these tax expenditures are generally recognized as reducing the after-tax costs of financing and maintaining a home.

The selected single-family guaranteed loan programs of HUD (FHA), USDA’s Rural Housing Service (RHS), and VA overlap, but differences exist among the products, and only USDA offers certain direct loans to finance the purchase of homes for low- and very low-income families (see table 2). FHA, VA, and RHS guarantee 30-year fixed-rate mortgages requiring little or no down payment from borrowers and charge up-front fees that generally vary from 1.75 to 2.15 percent. The government guarantee makes all of these loans eligible for inclusion in Ginnie Mae-guaranteed mortgage-backed securities. All the products require that the borrowers occupy the home and permit borrowers to use loan proceeds to purchase a home or refinance an existing loan. The products also have some important distinctions. For example, VA loan guarantees are an entitlement available only to veterans who have served in a branch of the armed services and received an honorable discharge, certain currently serving members of the Reserves or National Guard, and spouses of veterans under certain circumstances. RHS loan guarantees are limited by income and geography.
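The overlapping eligibility rules described above—a VA entitlement for qualifying veterans, RHS limits on income and geography, and no comparable FHA restriction—can be sketched as a simple routing check. This is an illustration only: the 115-percent-of-area-median-income limit is discussed later in this section, and actual program rules include many additional conditions not modeled here.

```python
# Simplified sketch of the overlapping eligibility rules described in this
# report. Thresholds are drawn from the report's text; real program rules
# have many more conditions (this is illustrative only).

def eligible_programs(is_eligible_veteran: bool,
                      income: float,
                      area_median_income: float,
                      in_rhs_eligible_rural_area: bool) -> list[str]:
    """Return the guaranteed-loan programs a borrower could, in broad
    terms, qualify for under the rules sketched in this report."""
    programs = []
    # VA guarantees are an entitlement limited to qualifying veterans,
    # certain Reserve/Guard members, and certain spouses.
    if is_eligible_veteran:
        programs.append("VA")
    # RHS (USDA) guarantees are limited by both income (at or below
    # 115 percent of area median income) and geography (RHS-eligible areas).
    if income <= 1.15 * area_median_income and in_rhs_eligible_rural_area:
        programs.append("RHS")
    # FHA has no income or geographic restriction for this purpose.
    programs.append("FHA")
    return programs

# A non-veteran, moderate-income borrower in an RHS-eligible rural area
# illustrates the overlap: both RHS and FHA could guarantee the loan.
print(eligible_programs(False, 60_000, 65_000, True))  # ['RHS', 'FHA']
```

The overlap the report documents shows up directly in this sketch: most borrowers who satisfy the RHS conditions also fall within FHA's unrestricted reach.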
FHA requires at least a 3.5-percent down payment, while RHS and VA require none. Additionally, loan guarantee programs vary in the extent to which the agencies cover potential losses of the lender or other mortgage holder. FHA provides 100-percent coverage of eligible losses when borrowers default. This guarantee covers the unpaid principal balance, interest costs, and certain costs of foreclosure and conveyance. USDA’s guarantee provides coverage for eligible losses of up to 90 percent of the original principal, including unpaid principal and interest; principal and interest on USDA-approved advances for protection and preservation of the property; and the costs associated with selling the foreclosed property. One of the most significant differences among these products is the loss coverage offered by VA, which covers from 25 to 50 percent of the original principal. Of the agencies, only USDA (RHS) offers single-family housing programs specifically for low- and very low-income families. RHS offers two unique products: a subsidized direct loan for the purchase of single-family housing, with interest rates as low as 1 percent, to low-income borrowers unable to qualify for credit elsewhere, and a home repair program that offers grants or loans (with interest rates of 1 percent) to very low-income rural residents. RHS may subsidize the interest on single-family direct loans, depending on the borrower’s income. As shown in table 1, two of the largest tax expenditures that provide assistance to homeowners are the mortgage interest and property tax deductions, with about half of all homeowners receiving housing assistance through them. Taxpayers who itemize their deductions may deduct mortgage interest and property taxes on their principal residence and a second residence. Deductions are adjustments from adjusted gross income (AGI). 
Whether or not a taxpayer itemizes deductions depends on whether the sum of these deductions plus any other itemized deductions exceeds the standard deduction. Taxpayers are subject to certain limits on the total amount of mortgage interest that can be claimed. The total amount of mortgage debt for which interest may be claimed cannot exceed $1 million. In addition, taxpayers may deduct interest payments on up to $100,000 of home equity debt. There are no dollar limits on the amounts of property taxes that can be deducted. Taxpayers with higher incomes are subject to additional limitations on use of these two deductions.

Data from selected single-family programs show some overlap in the income and location of households served. Among the single-family loan guarantee programs, all served moderate- and low-income populations, although only USDA’s program restricts eligibility on the basis of income. USDA limits borrower income to 115 percent of area median income (AMI). Although FHA and VA do not have this restriction, 1,291,000 FHA borrowers (74 percent) and at least 130,000 VA borrowers also fell into this income category in fiscal year 2009 (see fig. 2). In part because of the number of borrowers it serves, FHA guaranteed more loans to borrowers with incomes at or below 115 percent of AMI than RHS and VA combined. Further, although RHS may serve only borrowers with incomes at or below 80 percent of AMI in its direct single-family loan program, FHA also serves this group of borrowers. Specifically, 50 percent of FHA borrowers in fiscal year 2009 had incomes at or below 80 percent of AMI. However, RHS single-family direct loans may be combined with other resources to help reach very low-income families that may not have the income or down payment often needed to qualify for other financing. For example, these loans may be used in self-help housing projects in which future owners help build their own houses.
The “sweat equity” reduces the cost of construction and the overall loan amount.

The loan guarantee programs overlapped in rural areas. USDA characterizes locations as rural or urban using different measures. Our analysis showed overlap in areas served using three different USDA characterizations for geographic areas. Section 520 of the Housing Act of 1949, as amended, defines the terms “rural” and “rural area” for the rural housing programs that are the focus of this report. The definition is largely based on population, but also considers other factors, such as proximity to metropolitan areas and access to mortgage credit. As of 2011, 97 percent of the land area of the United States and 37 percent of the population were eligible for rural housing programs (see fig. 3). Eligible areas will be adjusted based on the results of the 2010 Census. As we reported in 2004, the definition can lead to inconsistent eligibility determinations. Although RHS offers its single-family products only in eligible rural areas, and FHA and VA programs are not restricted to any geographic location, FHA and VA also guaranteed a substantial number of loans in RHS-eligible areas. While a larger percentage of RHS borrowers with guaranteed loans were located in more remote rural areas compared with FHA and VA borrowers, FHA served a larger number of borrowers in these areas. Table 3 characterizes the location of single-family guaranteed loans relative to their distance from the boundaries separating RHS-eligible and -ineligible areas. For example, 50 percent of the RHS single-family guaranteed loans were located inside or within 10 miles of ineligible areas and 23 percent were located more than 25 miles from ineligible areas. FHA and VA loans were concentrated in or close to RHS-ineligible areas; 89 percent of both FHA and VA loans were for properties inside or within 10 miles of RHS-ineligible areas, and 4 percent were for properties located more than 25 miles from RHS-ineligible areas.
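The distance-band analysis behind table 3 amounts to bucketing each loan by its property's distance from the boundary between RHS-eligible and -ineligible areas. The band edges below follow the categories named in the text; the sample distances are hypothetical, and a real analysis would use geocoded property locations for each agency's portfolio.

```python
# Illustrative sketch of the table 3 categorization: bucket loans by the
# property's distance (miles) from the nearest RHS-ineligible area.
# Band edges follow the categories named in the report's text.
from collections import Counter

def distance_band(miles: float) -> str:
    """Assign a loan to a distance band (illustrative band edges)."""
    if miles <= 10:
        return "inside or within 10 miles"
    if miles <= 25:
        return "10 to 25 miles"
    if miles <= 50:
        return "25 to 50 miles"
    return "more than 50 miles"

# Hypothetical sample of loan distances, in miles.
sample_distances = [0, 4, 8, 12, 27, 30, 55, 60]
print(Counter(distance_band(d) for d in sample_distances))
```

Tabulating each agency's portfolio this way yields the distribution of loans by distance band that table 3 reports.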
FHA guaranteed more loans than RHS in all location categories, including more than twice as many loans as RHS in areas more than 50 miles from RHS-ineligible areas.

USDA’s Economic Research Service categorizes zip codes by degree of rurality into four types—urban, suburban, small-town rural, and isolated rural. While 89 percent of FHA’s and 86 percent of VA’s single-family loan guarantees were in urban or suburban zip codes, both agencies also guaranteed a substantial number of single-family loans in rural zip codes (see table 4). FHA guaranteed more loans than RHS in all zip code types. Further, FHA guaranteed more than 210,000 loans in rural zip codes, while RHS guaranteed 59,000 loans, or about half of its loans, in rural zip codes. Although a greater percentage of the RHS-guaranteed loans were in rural zip codes compared with FHA and VA loans, more than half the RHS-guaranteed loans were in urban and suburban zip codes.

The Economic Research Service also has developed a rural-urban continuum that categorizes U.S. counties by degree of rurality. Using this continuum, we distinguished four types of counties—metropolitan, urbanized nonmetropolitan, rural nonmetropolitan, and completely rural nonmetropolitan. FHA and VA also guaranteed a substantial number of loans in nonmetropolitan or what could be considered more rural counties (see fig. 4). Additionally, FHA guaranteed more loans than RHS in both metropolitan and completely rural nonmetropolitan counties. Specifically, FHA guaranteed a greater number of loans than RHS in all the nonmetropolitan categories. And although a greater percentage of RHS-guaranteed loans were in nonmetropolitan counties compared with FHA and VA loans, more than half of its loans were in metropolitan counties.

HUD, USDA, VA, and Treasury have collaborated on efforts in their housing programs, but opportunities exist to improve collaboration and effectiveness.
Specifically, the Single Family Housing Task Force has not yet developed a formal approach to help guide its collaboration efforts. And, although the Rental Policy Working Group has followed best practices and increased collaboration on selected multifamily rental housing programs, its efforts have not been as effective as possible. Consolidation or increased coordination of some programs and activities could be beneficial, but also entails significant challenges and implications that we discuss below.

As of April 2012, a number of federal efforts to coordinate housing programs were at various stages of implementation, including a task force established to evaluate the potential for coordinating or consolidating single-family loan programs. Overall, the task force’s efforts have not incorporated key principles on effective collaboration. In February 2011, the Administration reported to Congress that it would establish a task force to evaluate the potential for coordinating or consolidating the single-family loan programs at HUD, USDA, and VA. The members of the task force include senior-level officials from each of the three agencies and OMB officials. According to the officials, besides naming members, no dedicated funding or other resources had been devoted to the task force as of April 2012.

We have reported that federal agencies often face a range of barriers when they attempt to collaborate with other agencies, including missions and goals that are not mutually reinforcing, concerns about controlling jurisdiction over missions and resources, and incompatible procedures, processes, data, and computer systems. In an October 2005 report, we identified eight key practices that can help enhance and sustain collaboration among federal agencies.
The key practices are (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) identify and address needs by leveraging resources; (4) agree on agency roles and responsibilities; (5) establish compatible policies; (6) develop mechanisms to monitor, evaluate, and report on results; (7) reinforce agency accountability for collaborative efforts through agency plans and reports; and (8) reinforce individual accountability for collaborative efforts through performance management systems. While these practices can facilitate greater collaboration, we recognize that other practices also may do so. Furthermore, the specific ways in which agencies implement these practices may differ in the context of the specific collaboration challenges agencies face. For example, joint activities can range from occasional meetings between employees in which the roles and responsibilities of the respective agencies are reaffirmed, to more structured task teams operating over a period of time. But absent effective collaboration, routine interagency meetings could result in limited information being communicated and few joint agreements reached or implemented. In comparing the single-family task force’s efforts with key collaboration practices, we found that the agencies have not taken steps that are consistent with the practices. For example, other than the announcement of the task force, member agencies said that they had yet to identify goals or expected outcomes, and could not provide strategies each agency might utilize. The task force can benefit from identifying and agreeing on goals, and evaluating the goals against realistic expectations of how to achieve them. 
The task force also has not yet identified resources needed to accomplish its goals; agreed on roles or responsibilities; taken steps to establish compatible policies, procedures, or other means to operate; or made clear how the agencies would be held accountable for collaborative efforts and report on results.

In addition to our key practices, the Government Performance and Results Act Modernization Act of 2010 (GPRAMA) establishes a new framework for agencies to improve government performance by taking a more crosscutting and integrated approach to key issues. GPRAMA requirements could lead to improved coordination and collaboration among agencies. For instance, GPRAMA requires each agency to identify the organizations and program activities—both internal and external to the agency—that contribute to each agency’s goals.

However, according to HUD and USDA officials, much of the single-family task force’s efforts to date have been informal. For instance, the officials noted that senior agency officials met biweekly in teleconferences to share information and best practices on housing policy and programs and discussed current economic issues affecting the housing market and ways to streamline the housing programs in a coordinated manner. According to HUD and OMB officials, aside from the biweekly meetings, a benchmarking effort associated with the single-family task force recently was established. Specifically, OMB will collect and analyze data on direct and guaranteed housing loan programs as a way to develop greater insight into best practices, potential overlap, and synergies among the housing programs. According to HUD, as of April 2012, no milestones or resource estimates were available for the task force and no results were expected until a more formal approach for the task force was established. Additionally, agency officials stated that no further collaborative efforts among single-family housing programs were planned.
OMB and HUD officials stated that over the past few years, agency attention has been focused on trying to improve the overall condition of the housing market, making it difficult to turn attention to interagency efforts for program coordination or consolidation. HUD officials also noted that the ongoing housing crisis has been a complicating factor in addressing the broader issue of housing finance reform, and mostly has overshadowed the issue. Nonetheless, in addition to focusing on the ongoing housing crisis and the level of government support for the housing market, it is also important to focus some attention on the way that government support for housing is delivered and strike the appropriate balance between these issues. The task force was established to explore ways in which programs can be better coordinated or consolidated to serve homeowners more effectively. Part of that analysis is the assessment of coordination and consolidation of HUD, USDA, and VA programs. By incorporating key practices on collaboration and developing a more formal approach for the single-family task force, HUD, USDA, VA, and OMB can evaluate the potential for coordinating or consolidating single-family loan programs, and possibly generate savings and efficiencies while better serving homeowners. They also may be able to help drive further collaboration, establish complementary goals and strategies for achieving results, and increase transparency (by reporting on their collaborative efforts). As the task force moves forward, developing a formal approach for the task force’s collaborative efforts could help the agencies establish the guidance and direction needed to systematically bring about a productive working relationship and further help improve single-family loan programs. HUD, USDA, and Treasury officials have been working to align the requirements of some multifamily housing programs through the Rental Policy Working Group. 
Although the efforts of the working group have been consistent with a majority of our key practices, the group has yet to take additional steps to reinforce agency accountability for collaborative efforts. In response to the need for better-coordinated multifamily housing policy, in July 2010 the White House’s Domestic Policy Council established the interagency Rental Policy Working Group. The working group consists of the White House Domestic Policy Council, National Economic Council, OMB, HUD, USDA, and Treasury. The purpose of the working group is to better align rental requirements across programs, and thereby increase the effectiveness of federal rental policy and improve participant outcomes. According to working group documents, the group established guiding principles, which centered on administrative changes that could help respond to the concerns of external stakeholders (rental housing owners, developers, and managers, and state and local housing agency officials); required minimal statutory action; were realizable at little or no cost or through education, outreach, or the issuance of new guidance or rules; and helped create cost and time savings for all parties. The working group solicited recommendations for improved rental policy coordination from external stakeholders. Within the working group, interagency teams considered the recommendations, reviewed current policies, and identified opportunities for greater federal alignment, increased overall programmatic efficiency, and reduced costs and regulatory burdens. Stakeholders have noted that inefficiencies can arise when a multifamily housing project has multiple layers of assistance (such as subsidies, tax expenditures, or mortgage insurance) from one or more federal agencies. 
To help address those inefficiencies, the working group identified 10 key areas or initiatives for alignment and further study, based on recommendations from rental housing owners, developers, and managers, and state and local officials (see table 9). Overall, the initiatives are aimed at reducing unnecessary program regulations, lessening administrative barriers so that developers and property owners more easily can participate in programs, reducing duplicative administrative actions to reduce costs for agencies and program participants, and increasing coordination to allow better targeting of agency resources. For two initiatives, HUD, USDA, and other federal and state housing agencies have pilot programs under way in several states to test the alignment activities before national implementation. Specifically, two pilots will assess the feasibility of the proposed changes to physical inspections and subsidy layering reviews and identify steps for better coordination and information-sharing for potential replication on a national scale. As of April 2012, the participating state housing finance agencies (HFA) and federal agencies had signed memorandums of understanding (MOU) detailing roles and responsibilities. The working group plans to develop recommendations from the pilot findings.

In comparing the Rental Policy Working Group’s efforts against the key practices that we previously identified to help agencies effectively collaborate, we found that HUD, USDA, and Treasury have taken steps that are consistent with a majority of the practices.
In particular, the agencies, through the Rental Policy Working Group:

- defined and articulated a common outcome;
- established mutually reinforcing or joint strategies in soliciting suggestions from federal, state, local, and private officials;
- allocated resources and identified key initiatives, including estimating the resources and time frames necessary for implementation;
- agreed on roles and responsibilities, including designating a responsible lead office and participating offices to help implement the alignment activities;
- established compatible policies and procedures and collected and analyzed information that led to the prioritization and development of the recommendations for rental policy alignment;
- developed mechanisms to monitor, evaluate, and report on their efforts, established milestones for alignment activities, and launched pilots to test some alignment activities; and
- used performance-management systems to strengthen individual accountability for results for some senior agency executives.

Finally, in some cases, the agencies used a more formal approach to collaboration, such as an MOU, to specify the roles and responsibilities of those involved in the alignment effort.

Although the efforts of the Rental Policy Working Group are consistent with the majority of our key practices, the working group has not yet taken additional steps to reinforce agency accountability for collaborative efforts. Methods to build accountability for collaborative efforts include documenting those efforts (and associated goals, strategies, roles and responsibilities, actions or measures to be taken, and timelines) in the agencies’ annual and strategic plans. Our review of the agencies’ recent annual and strategic plans found that none of the agencies in the working group had included their collaborative efforts.
By not including their collaborative efforts in the plans, the agencies have not taken full advantage of opportunities to further build accountability for actions already taken or under way. For example, they have missed opportunities to underscore the importance of their collaborative efforts agencywide.

Furthermore, the Rental Policy Working Group efforts did not include any plans to deal with statutory changes that could help increase overall programmatic efficiency and reduce costs and regulatory burdens once the administrative changes were implemented. To achieve more immediate results, the working group started with those actions that required no statutory action. However, the working group’s long-term collaborative efforts could be enhanced if it were to include areas beyond administrative changes. According to USDA and Treasury, the working group’s efforts helped inform proposals in the President’s fiscal year 2013 budget (for legislative changes to the LIHTC program). By not expanding its guiding principles to include statutory changes, the agencies may miss additional opportunities to highlight those areas in which statutory action could help respond to additional stakeholder concerns and generate savings and efficiencies in housing programs. Such information about statutory changes also could help to provide relevant and useful information to policymakers as they consider overall improvements to HUD, USDA, and Treasury housing programs.

As we recommended in September 2005 and reiterated in March 2011, coordinated reviews of tax expenditures and related housing spending programs with similar goals could help assess the relative effectiveness of tax expenditures in terms of their benefits and costs, and help policymakers reduce overlap and inconsistencies and direct scarce resources to the most-effective or least-costly methods to deliver federal support.
As of April 2012, OMB had not used its budget and performance review processes to systematically review tax expenditures and promote integrated reviews of related tax and spending programs. GPRAMA could serve as a vehicle for furthering interdepartmental coordination of housing programs, including tax expenditures. As noted previously, in February 2012, the Administration announced 14 interim crosscutting policy areas, and some goals specifically identify tax expenditures as contributing activities.

The combination of the LIHTC program with other federal, state, or local funding sources helps underscore the importance of assessing the effectiveness, costs, and benefits of tax expenditures in relation to housing programs. In 2007, we reported that using federal funds to leverage nonfederal funds can be a useful tool for financing affordable housing and that public and private-sector officials generally regarded it favorably. However, we also reported that leveraging at the project level can be challenging and inefficient, partly because federal, state, and local funding sources often have different application and other requirements and deadlines. As discussed previously, the Rental Policy Working Group was created in part to address these varying requirements. For this report, we interviewed developers and industry representatives, who estimated that assembling the multiple funding sources necessary to make projects feasible, and complying with the associated requirements, increased project costs. For example, one multifamily developer told us that it typically took from 3 to 4 years to begin construction and that leveraging the various funding sources typically added 5 to 10 percent to project costs. He stressed that the biggest factor in extending project lengths was the time needed to secure multiple funding sources, navigate and comply with multiple requirements, and align funding cycles.
He added that obtaining LIHTCs also can slow the process because a project might not receive credits one year or might require more than one year’s worth of credit allocations from the state before it was feasible. The 2007 report also concluded that better information about combining multiple federal sources and amounts—from both tax and spending programs—for rental housing projects could be useful in identifying areas for agencies to coordinate program measurement. Although Treasury tracks taxpayer compliance with LIHTC program rules and HUD collects some information on a few other types of federal subsidy an LIHTC project might receive, neither agency collects leveraging data nor reports a leverage measure for the program. Basic financial information about the multiple sources and amounts—from tax and spending programs—a housing project received could be useful in identifying areas for agencies to coordinate in measuring performance for programs that have overlapping purposes. As we reported in 2008, while HUD and Treasury reported leverage measures that described the ratio of all other funds (federal, state, local, and private) compared with a specific program’s funds, alternative measures describing total federal investment provided considerably different results and could be of value to policymakers. To provide more accurate, relevant, and useful information to Congress and others, our 2008 report recommended that OMB provide guidance to help agencies determine how to calculate, describe, and use leverage measures in a manner consistent with their programs’ design; and reevaluate the use of such measures and disclose their relevance to program goals and in future performance reviews of housing programs. At the time, there was no agency-specific or government-wide guidance on what agencies should disclose about the leverage measures they reported or how to calculate them for specific programs. 
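The gap between the two kinds of measures discussed above can be seen in a small, hypothetical example: a program-centric leverage ratio (all other funds relative to the measured program's funds) and a total-federal-investment share can tell very different stories about the same project. All dollar figures below are invented for illustration.

```python
# Two ways of measuring "leverage" for a project funded from multiple
# sources, illustrating why calculation guidance matters. All figures
# are hypothetical (in millions of dollars).

def program_leverage(program_funds: float, all_other_funds: float) -> float:
    """Ratio of all other funds (federal, state, local, and private) to a
    specific program's funds -- the kind of measure HUD and Treasury reported."""
    return all_other_funds / program_funds

def federal_share(total_federal: float, total_project_cost: float) -> float:
    """Alternative measure: total federal investment as a share of
    total project cost."""
    return total_federal / total_project_cost

# A project funded by $2M from the measured program, $3M from other
# federal sources, and $5M from nonfederal sources:
program, other_federal, nonfederal = 2.0, 3.0, 5.0
total = program + other_federal + nonfederal

print(program_leverage(program, other_federal + nonfederal))  # 4.0
print(federal_share(program + other_federal, total))          # 0.5
```

From the measured program's perspective, every dollar "leverages" four others, yet federal money still accounts for half of the project's cost, which is why the report notes that alternative measures of total federal investment can yield considerably different results.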
Although OMB has used leveraging as a program output measure in the past, as of April 2012, OMB had not taken action to issue guidance for agencies calculating leverage measures. Better measures of the total federal support and mix of funding would be helpful in better understanding how tax expenditures contribute to rental housing project outcomes and identifying areas of overlap for further coordination. Furthermore, additional data could help assess how tax expenditures benefit homeownership compared with programs with similar goals. This information is currently not always collected on tax returns unless IRS needs the information or collection was legislatively mandated. We recommended in 2009 and 2010 that IRS collect property addresses (which can differ from a taxpayer’s mailing address) to improve enforcement of mortgage interest deductions. Collecting this information from taxpayers or lenders also could facilitate analysis of who benefits from the mortgage interest and property tax deductions as well as other housing tax provisions. As of April 2012, IRS had not yet taken action to collect property address information. Consolidation or greater coordination of RHS and HUD single-family loan programs that serve similar markets and provide similar products may offer opportunities for savings in the long term. For example, program consolidation could improve service delivery, especially when programs with similar objectives and markets are brought together and conflicting requirements and overlap reduced. Consolidation could achieve savings to the extent that agency overhead and, potentially, staffing were reduced. Further, consideration of program consolidation could create opportunities to reassess the various RHS and HUD single-family programs or activities and eliminate programs that are overlapping, or outdated, or whose costs no longer justify federal spending. 
However, consolidation also presents a number of challenges that we discuss later in the report. We first reported in 2000 that overlap exists among products offered and markets served by FHA, RHS, and others and questioned the need for maintaining separate programs for rural areas. Additionally, we noted the potential for administrative savings by consolidating programs that provided similar products and served similar markets. For instance, FHA and RHS offer similar guaranteed single-family products and operate in the same areas. Like VA, which also offers a guaranteed loan program, FHA and RHS encourage lenders to make loans by guaranteeing them against losses they might incur if borrowers defaulted on their mortgages. As discussed earlier, lenders in FHA and RHS programs use FHA’s mortgage scorecard in evaluating borrowers for mortgages. However, RHS’s program offers more generous terms than FHA’s program (such as no down payment and lower overall mortgage insurance premiums). And RHS’s single-family direct loans have no counterparts in FHA or VA. Also, VA loan guarantees are an entitlement available only to veterans, certain members of the Reserves and National Guard, and spouses of certain veterans. RHS guaranteed loans are limited by borrower income and location. Despite the differences, we noted that FHA, RHS, and VA all serve a significant share of low- to moderate-income households.

We suggested in September 2000 that Congress consider requiring HUD and USDA to examine the benefits and costs of merging those programs that serve similar markets and provide similar products. Recognizing the statutory restrictions that exist on both agencies’ programs, as a first step we suggested that Congress consider requiring HUD and USDA to explore merging their single-family guaranteed lending programs and multifamily portfolio management programs, taking advantage of the best practices of each and ensuring that targeted populations were not adversely affected.
Congress held hearings on the report in 2003, but no further actions have been taken. Our analyses have shown evidence of overlap in certain aspects of the FHA and RHS single-family programs. First, RHS increasingly has moved from direct to guaranteed loans. The number of guaranteed single-family loans first exceeded the number of direct single-family loans in 1995, and the trend has intensified since 2008. In fiscal year 2010, RHS made more than 28,100 single-family direct loans and grants and guaranteed more than 130,000 single-family loans. Since 2011, the Administration has requested large cuts in RHS direct loan programs. For example, the 2012 President’s Budget did not request any funding for Section 504 direct repair loans and requested a 67 percent reduction for Section 502 direct loans. The budget request stated that the shift in direction acknowledges that the single-family direct loan program has struggled to make a measurable impact due to flat funding levels and a labor-intensive review process. According to RHS officials, after the implementation of early-out and buy-out authority at the beginning of fiscal year 2012, RHS had about 900 full-time equivalent staff managing its direct loan program and about 400 staff managing its larger guaranteed loan program. Second, the Administration has proposed that RHS use direct endorsement lenders to approve guaranteed loans. Specifically, the 2011 and 2012 President’s Budgets proposed that RHS use direct endorsement lenders in its guaranteed loan program to make RHS more efficient and allow time to transition the staff managing guaranteed loans to other priorities. FHA allows direct endorsement lenders to approve mortgage applications without first submitting paperwork to HUD, and as of September 2011, it had about 3,700 such lenders. The 2013 President’s Budget did not propose that RHS use direct endorsement lenders. 
HUD and RHS face similar challenges in managing their portfolio of affordable rental properties. Properties assisted by both agencies are aging and need new investments for capital improvements. Also, some property owners may decide to leave the programs and convert their properties to market rate and no longer be subject to rent and tenant income requirements. In response to these challenges, RHS offers incentives that provide equity investments and favorable loan financing to property owners seeking to recapitalize their properties or at risk of exiting the program. Similarly, HUD has various financing tools that offer incentives to property owners to remain in the program. When property owners do exit the program, HUD and RHS offer special rental assistance to households to help ensure that their rents remain affordable. Further, similarities in guaranteed multifamily loans indicate the need for greater coordination. For instance, among multifamily loan programs, RHS programs (whether direct or guaranteed) are more prevalent in rural areas than the much larger FHA multifamily guaranteed loan program. However, RHS has been moving toward guaranteed multifamily loans, primarily as a leveraged source of funds when preserving its direct loan properties. Moreover, the 2013 President’s budget proposes funding for Section 538 guaranteed loans but not for Section 515 direct loans. According to RHS officials, the only new Section 515 direct loans being made are for preserving existing properties. As discussed earlier, properties with RHS loans also tended to be much smaller than properties with FHA loans, suggesting that RHS and its products have served a unique market segment and that RHS may have a product model that could be useful for FHA. Over the years, HUD has proposed variations of guaranteed loans for small properties, such as in more rural areas where HUD properties are smaller and more comparable in size to RHS properties. 
For example, HUD announced demonstration programs in 1997 and 2006 for variations of small project guaranteed loans. More recently, the Rental Policy Working Group discussed existing programs that HUD might use for smaller properties, including RHS’s Section 538 guarantee program. The discussions resulted in the Rental Policy Working Group developing a proposal for the 2013 budget that would allow HUD to implement flexibilities with its Section 542(b) risk-share program to make risk-share loans to refinance, rehabilitate, and recapitalize small properties. The proposal would allow HUD to use this existing program to expand availability of capital to small properties. If successful, this program could be used in urban and rural areas, as HUD has no geographic restrictions. While statutory action would be needed for HUD to implement the changes to its Section 542(b) risk-share program, the working group focused on this effort because it required only minor statutory changes.

Consolidation presents a number of challenges in the short and long term. These include overcoming statutory barriers; assessing products to be offered; establishing effective delivery structures; aligning resources, policies, and requirements; and ensuring continuing oversight and performance of existing commitments. Potential for savings in the long term must be weighed carefully against the immediate challenges and against the potential implications of consolidation for agency goals and objectives and households served. We previously reported on questions agencies should consider when evaluating whether to consolidate and noted that identifying and agreeing on specific consolidation goals and realistic expectations for their achievement are the key to any consolidation effort. Agencies and working groups should not rule out studying the potential for consolidation if the potential for long-term savings through better alignment of resources and delivery structures outweighs the challenges and long-term costs. 
For example, VA’s housing program is an entitlement earned by veterans, and RHS’s guaranteed program is only available to low- and moderate-income households in rural areas. And HUD operates the Good Neighbor Next Door program, which restricts eligibility by profession (for example, to teachers and law enforcement officers). But the fact that programs serve different targeted populations does not necessarily rule out consolidating or coordinating them.

Several of the immediate challenges that would stem from any consolidation efforts have long been a concern for the agencies. In 2000, when we first recommended that Congress consider requiring USDA and HUD to examine the benefits and costs of merging programs such as their single-family guaranteed programs, USDA noted that such a merger could be detrimental to rural areas, which could lose a federal voice. In addition, HUD noted that without legislative changes, any efforts to merge the programs likely would result in a more cumbersome delivery system. In May 2011 testimony before the House Financial Services Subcommittee on Insurance, Housing, and Community Opportunity, some industry experts said a proposed consolidation plan merited further discussion, but others stated the proposal could negatively affect USDA’s efforts to deliver its other rural development programs. In September 2011, the RHS Administrator testified that while RHS and HUD shared an important commitment to meeting the housing needs of rural America, she opposed the proposed consolidation plan. She said that RHS housing services uniquely served rural communities by working in “synergy” with other rural development programs. Since then, RHS officials and several housing industry officials with whom we spoke also have raised concerns about consolidating RHS and HUD programs. 
They have argued that rural housing assistance is a part of the community development package that USDA’s rural development agencies (RHS, Rural Utilities Service, and Rural Business Cooperative Service) can offer and that consolidating RHS programs into HUD would disrupt the interrelationship between the three rural development agencies in USDA. RHS officials pointed to the human capital challenges that would arise from any consolidation. For example, they noted that training would be an issue because product requirements, information systems, and agency processes and procedures differ between HUD and RHS. In addition, they questioned whether any consolidation would help improve the delivery of service to rural areas. RHS officials and industry officials expressed concern that rural guaranteed single- and multifamily programs would get “lost” in HUD. Some RHS and industry officials also noted that program consolidation would not address the gap in access to affordable housing credit for those individuals who could not qualify for HUD or other conventional single-family loans. While training and information systems are important considerations for any consolidation or increased coordination between the agencies, consolidation or increased coordination does not necessarily require that product terms be aligned. FHA already offers multiple products with different terms and conditions. And although FHA does not have the extensive delivery structure RHS uses to perform loan origination under its now diminishing direct loan program, the continuing need for this RHS product has been questioned by USDA and OMB. Moreover, the more similar the products become, the stronger the argument for a consolidated delivery structure. In the long term, this could present an opportunity for the agencies to take advantage of the best features of each agency’s structure. 
In relation to concerns about the level of focus on rural housing in HUD, HUD currently serves a larger number of homeowners in rural areas than RHS serves, and HUD administration officials told us that they considered HUD an agency that served housing needs in all communities—urban, suburban, and rural. Also, while RHS housing programs align with several of HUD’s priority goals, USDA currently does not have priority goals for housing, and housing programs have not been a high priority in USDA. For multifamily housing, we first reported in 2002 that RHS could not prioritize the long-term rehabilitation needs of the properties in its Section 515 direct loan portfolio. The fiscal year 2013 budget is the first in which the agency is requesting funding for a permanent multifamily preservation and revitalization program (in place of the current demonstration program) for its rural rental housing portfolio. As described earlier, we reported in 2004 that RHS did not have access to the same wage matching data as HUD to ensure that rental payments under the Section 521 rental assistance program were accurate. USDA proposed legislation in the fiscal year 2013 budget to access Department of Health and Human Services data for wage matching purposes. In addition, the administrative and reporting structure of rural housing programs among USDA components has varied. As we reported in 2000, the position of RHS Administrator is at the same organizational level as the State Rural Development Offices, which can develop their own program delivery systems. As a result, state offices still report to the Under Secretary for Rural Development rather than the RHS Administrator on housing issues. The state offices also have developed various interpretations rather than uniform standards on issues ranging from rent calculations to loan prepayment. Combining programs would not eliminate the need for managing existing commitments. 
Both FHA and RHS have loan guarantees with terms of as much as 40 years. In the single-family programs, both agencies have systems in place for monitoring the performance of existing mortgages and ensuring that loan servicers and contractors carry out functions related to loss mitigation, foreclosure, and property management, as well as systems for holding lenders accountable in the loan origination process. The continuing need for these functions would necessitate careful planning and alignment to permit consolidation. Consolidating or coordinating existing programs and activities also raises important implications because of costs and the potential impact on people and agency missions. When consolidation or increased coordination results in significant shifts of people, space, technology, and systems, several issues arise. As an example, simply moving staff and responsibilities could increase costs and not result in any process improvements. Ensuring long-term benefits from cost savings and improved operations will require careful consideration of the responsibilities and staffing resources needed for the combined operation. For example, if the single-family loan programs of RHS and HUD were to be consolidated, it would be necessary to specify the impact on employees, including changes in roles and responsibilities, processes and procedures, individual accountability, and day-to-day operations. There also would be transition issues to consider, such as costs of leases and unoccupied federal property, or moving expenses for employees transferred to new sites. Consolidation or increased coordination also may have implications for borrowers, lenders, developers, and other industry participants. For instance, some borrowers and lenders who may have worked extensively with particular programs could experience increased costs in the short term for adapting existing program administration, personnel, processes, and systems. 
Whether through consolidation or further coordination, RHS, FHA, and VA have opportunities to assess the potential for learning from the practices of each other. RHS did this when its guarantee program was created. For instance, RHS officials told us that they had examined FHA’s system when they established their guaranteed program and decided it would be more cost-effective to require lenders to dispose of properties. Thus, unlike FHA, RHS relies on lenders to take title of foreclosed properties and manage and market them. But RHS and FHA have not taken steps to further explore the relative benefits and costs of each other’s approaches. This and other areas may represent an opportunity for the agencies to explore how to take advantage of their respective best practices, while minimizing the adverse impact on targeted populations. Finally, combining management of the portfolio of existing multifamily projects might require reassessing methods for overseeing and monitoring these projects. Some officials noted that in RHS, staff were responsible for a particular portfolio of multifamily projects and offered a direct point of communication for these projects. They pointed out that HUD, which provides funding for far more projects, did not have staff responsible for individual projects. Also, payment structures for RHS-direct multifamily loans are linked to and offset by RHS rental assistance payments. Combining RHS and HUD rental assistance programs would require assessing the implications of aligning payment methods for the two programs. The federal government plays an important role in encouraging homeownership and ensuring the availability of decent, safe, and affordable rental housing through a variety of single- and multifamily programs that provide rental assistance, public housing, and tax expenditures. 
Numerous agencies administer these fragmented programs, and recent assessments have shown that some programs overlap (that is, provide similar products and serve similar populations). Ongoing fiscal constraints and the accompanying move toward greater use of guaranteed lending and leveraging of federal funds with other public and private funding sources have called into question the feasibility of maintaining the current fragmented structure for providing support to housing and, in particular, the overlap in certain housing programs. Policymakers and agencies have been tasked with continuing to meet affordable housing needs while protecting taxpayer investments. Consolidation and improved collaboration can offer an effective means of realizing necessary cost savings and eliminating unnecessary overlap. While consolidation and improved coordination efforts are underway, they could be improved and expanded to help ensure that agencies do not miss opportunities to generate savings and efficiencies in their housing programs. A recently created task force may help evaluate the potential for coordinating or consolidating the single-family housing loan programs at HUD, USDA, and VA, and the agencies have been working to consolidate and align certain requirements in multifamily housing programs through the Rental Policy Working Group. However, the single-family task force has not yet specified its goals or expected outcomes, roles and responsibilities, resources, or a means of monitoring or reporting on results and reinforcing agency accountability for collaborative efforts. Incorporating these key practices for effective collaboration would help the task force and HUD, USDA, and VA establish the guidance and direction needed to systematically bring about a productive working relationship. 
With a more effective collaborative approach, the agencies also can generate opportunities to evaluate the potential for improving, coordinating, or consolidating single-family housing loan programs. Certain aspects of the single-family programs show great potential for consolidation—as we have reported, overlap exists in products offered, service delivery, and geographic areas served. Therefore, the task force agencies could productively focus on the products offered, delivery structure, and systems and resources that support the programs as part of any assessment of coordination and consolidation of the programs. For instance, agencies could consider whether and how to align product terms and conditions, and how to optimize service delivery methods. Or, they could move beyond administrative change, and assess what might be accomplished in terms of coordination or consolidation through statutory changes. Such assessments represent valuable first steps and would serve as resources for the agencies. The Rental Policy Working Group, which has followed a majority of our key collaboration practices, already has taken steps to identify specific areas in which to align sometimes conflicting and redundant requirements. But it focused on actions that require minimal or no statutory changes, or minimal or no costs. Overlap in multifamily programs exists in the overall purpose of programs, delivery structures, and provision of project-based rental assistance. However, any consolidation of multifamily programs would require statutory changes. There is more the Rental Policy Working Group can do to build on its success. For example, it could expand its guiding principles to include areas in which statutory action across individual agencies and programs may be needed to help increase overall programmatic efficiency and save additional taxpayer dollars. 
It could take additional steps to reinforce agency accountability for collaborative efforts by documenting those efforts in its strategic and annual plans. In addition to the two efforts highlighted above, and as we previously recommended, coordinated reviews of tax expenditures with related spending programs could help reduce overlap and inconsistencies and direct scarce resources to the most effective or least costly methods to deliver federal support. Options to increase collaboration or to effect consolidation in HUD and USDA’s single- and multifamily loan programs that serve similar markets, provide similar products, or have similar delivery structures could enhance the efficiency of and improve the programs overall. But as we have noted, they are not without a number of human capital, information technology, and other significant challenges and implications. We first reported on these options in 2000. The potential exists for greater collaboration or consolidation, including considering statutory action, if applicable. Policymakers face difficult decisions on the structure and funding of housing programs and activities across federal agencies. Although Congress ultimately would have to decide, agencies could further this effort by exploring the potential benefits and costs of consolidating overlapping programs. Such analyses represent a key step on the path to determining the viability of consolidation. The analyses also can support the Administration’s efforts to reform the government’s role in housing finance. To enhance task force efforts to evaluate the potential for coordination or consolidation of single-family housing programs and activities, the Secretaries or other designated officials of HUD, USDA, and VA, and the Director of OMB should take steps to establish a more rigorous approach to collaboration. 
For example, as a first step, agencies could define and articulate goals or common outcomes and identify opportunities that can be addressed or problems solved through their collaborative efforts. Enhancing the task force’s efforts also could entail establishing and implementing a written agreement; specifying roles and responsibilities; establishing mechanisms to monitor, evaluate, and report on results; and reinforcing accountability for collaborative efforts. To further improve HUD, USDA, and Treasury’s efforts through the Rental Policy Working Group to consolidate and align certain requirements in multifamily housing programs, the Rental Policy Working Group should take steps to document collaborative efforts in strategic and annual plans to help reinforce agency accountability for these efforts. To build on task force and working group efforts already underway to coordinate, consolidate, or improve housing programs, and help inform Congress’s decision-making process, the Secretaries or other designated officials of HUD, Treasury, USDA, and VA should evaluate and report on the specific opportunities for consolidating similar housing programs, including those that would require statutory changes.

We provided a draft of this report for review and comment to the Consumer Financial Protection Bureau, Department of the Interior, Department of Labor, Farm Credit Administration, Federal Deposit Insurance Corporation, Federal Financial Institutions Examination Council, Federal Housing Finance Agency, Board of Governors of the Federal Reserve System, HUD, IRS, National Credit Union Administration, NeighborWorks America, Office of the Comptroller of the Currency, OMB, Treasury, USDA, and VA. HUD’s Acting Assistant Secretary for Housing-Federal Housing Commissioner, USDA’s Under Secretary for Rural Development, and VA’s Chief of Staff provided written comments, which we address below and which are reprinted in appendixes II, III, and IV, respectively. 
OMB staff provided a general comment by e-mail. The Department of the Interior, Federal Financial Institutions Examination Council, Board of Governors of the Federal Reserve System, HUD, IRS, Treasury, USDA, and VA provided technical comments, which we incorporated as appropriate. The Farm Credit Administration, Federal Deposit Insurance Corporation, National Credit Union Administration, NeighborWorks America, and Office of the Comptroller of the Currency stated they had no comments. Finally, the Consumer Financial Protection Bureau, Department of Labor, and Federal Housing Finance Agency provided no comments. HUD stated that the report accurately reflected HUD’s collaborative efforts and agreed with the report’s recommendations. However, in response to the recommendation that HUD and the other agencies establish a more rigorous approach to collaboration, HUD noted the importance of assessing the timing of implementing the recommendation because the relevant agencies have been fully focused on the ongoing recovery of the housing market. OMB staff expressed a similar concern. HUD further stated that it will consult with other interested parties to establish a framework through which to respond to our recommendation and noted that an approximate time frame might involve waiting until after February 2014. As we stated in the report, in addition to focusing on the ongoing housing crisis and government support for the housing market, it is important to focus on achieving efficiencies and cost savings in the delivery of government support for housing. By incorporating key practices on collaboration and developing a more formal approach to the single-family task force, HUD, USDA, VA, and OMB can evaluate the potential for coordinating or consolidating single-family loan programs, and possibly generate savings and efficiencies while better serving homeowners. 
As we noted in the report, whether through consolidation or further coordination, RHS, FHA, and VA have opportunities to assess the potential for learning from the practices of each other. VA concurred with the recommendation to enhance the single-family task force’s efforts. VA said that the agency welcomed opportunities to coordinate with other agencies and share best practices and looked forward to refining and improving its own program by applying other agencies’ best practices. VA concurred in principle with our recommendation to evaluate and report on opportunities for consolidation as long as the efforts were coordinated and not unilateral, adding that unilateral actions could waste resources and have other negative effects. We modified the recommendation to make it clear that we were referring to efforts through the interagency task force and working group and not unilateral evaluations. In addition, VA reiterated its position that while collaborating and coordinating with other housing programs could be beneficial, combining VA’s unique home loan guaranty program with other housing programs would go against the statutory intent that established an earned benefit for veterans. USDA generally agreed with our recommendations, stating that collaborative efforts already under way should reduce duplication of efforts by stakeholders working with multiple agencies as well as “bureaucratic red tape, processing times, and ultimately program costs.” USDA also provided a summary of the agency’s positions on a number of other issues. First, USDA stated that RHS’s single-family guaranteed loan program has been performing better than FHA’s loan programs because RHS controlled risk by tightening underwriting and performing preclosing reviews. RHS has reported lower delinquency rates than FHA and concluded that these differences were due to the tighter controls. 
However, RHS’s analysis does not control for other factors that could explain differences between the agencies’ delinquency and default rates. Second, RHS suggested that borrowers of its guaranteed loans could not afford FHA-guaranteed loans, but it has not conducted the analysis needed to make this judgment. RHS’s lower fees and lack of down payment could divert prospective borrowers from programs such as FHA’s, which could offer further evidence of the overlap in federal mortgage products. Finally, USDA reiterated its position that rural communities have a unique set of challenges and that rather than duplicating other federal programs, USDA’s housing programs address unique needs. However, we found that HUD also serves rural areas through its single- and multifamily programs. Further, RHS’s greater reliance on guaranteed single-family lending has lessened the differences between RHS and FHA single-family loan programs; for example, more than half of the new RHS-guaranteed single-family loans made in 2009 were in urban or suburban areas.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Housing and Urban Development, the Acting Director of the Office of Management and Budget, the Secretary of Veterans Affairs, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected], or Jim White at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
This report assesses (1) programs and activities the federal government uses to support rental housing and homeownership; (2) the extent to which overlap or fragmentation exists in the goals, products, geographic coverage, service delivery mechanisms, and recipient income levels of selected housing programs and activities of the Departments of Housing and Urban Development (HUD), Agriculture (USDA), Veterans Affairs (VA), and Treasury; and (3) the extent to which federal efforts have increased coordination for selected housing programs and activities, and implications of further coordinating or consolidating selected housing programs or activities. For purposes of this study, we defined duplication, overlap, and fragmentation. Duplication occurs when two or more agencies or programs engage in the same activities or provide the same services to the same beneficiaries. Overlap occurs when multiple programs have similar goals, devise similar strategies and activities to achieve those goals, or target similar users. Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national interest. To identify federal agencies’ support for housing in fiscal year 2010, we compiled an inventory of direct spending programs, tax expenditures, and other activities, such as regulatory requirements—to which we collectively refer as “activities”—related to housing. To identify programs, we first collected information on programs categorized as housing programs from the Catalog of Federal Domestic Assistance. We also reviewed the fiscal year 2012 President’s Budget; program documentation from HUD, USDA, and VA; studies by the Congressional Research Service, Congressional Budget Office (CBO), and other housing groups; and the Compendium of Federal Single Family Mortgage Programs and Related Activities. 
We collected descriptive information about each program, including (1) the administering or implementing agencies or entities; (2) type of assistance provided; (3) eligibility of recipients in terms of geographic or income restrictions; and (4) other relevant nonfederal entities involved in administering, distributing, or delivering federal assistance, if any. We compared the programs among the sources described above to create an inventory of federal support for housing. We excluded certain programs that can support housing but were covered in our other recent reports on duplication, overlap, and fragmentation. For example, the inventory does not include housing counseling programs that we covered in 2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue, GAO-12-342SP (Washington, D.C.: Feb. 28, 2012), or homeless housing programs that we discussed in two March 2011 reports—Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, GAO-11-318SP (Washington, D.C.: Mar. 1, 2011); and List of Selected Federal Programs That Have Similar or Overlapping Objectives, Provide Similar Services, or Are Fragmented Across Government Missions, GAO-11-474R (Washington, D.C.: Mar. 18, 2011). Additionally, we excluded Federal Trade Commission and Department of Justice enforcement efforts related to certain consumer protections. In some cases, names of programs were inconsistent among the various sources we reviewed. As a result, our usage conformed either with program names as cited in past GAO reports or with agency documents. Our list of 15 housing tax expenditures is based on lists of tax expenditures and estimates of their cost compiled annually by Treasury and the Joint Committee on Taxation (JCT). Both Treasury and JCT list tax expenditures by budget function. 
We compiled a preliminary list of tax expenditures for fiscal year 2010 listed under the “housing” subfunction of the commerce and housing budget function, and added other housing-related provisions listed under other budget functions. Our universe included expired tax expenditures listed by either Treasury or JCT that had estimated revenue losses for fiscal year 2010. While the tax expenditure lists were generally similar, Treasury and JCT’s methods for reporting specific tax expenditures differed slightly. For the 15 tax expenditures we listed, both Treasury and JCT listed nine under the housing budget subfunction. Treasury and JCT each reported another tax expenditure, but grouped the expenditures under different functions. Two tax expenditures were listed only by JCT, and another only by Treasury. Furthermore, we identified one tax expenditure that both Treasury and JCT reported under the veterans’ benefits and services budget function, but which appeared to support housing activities. We also identified another tax expenditure that neither Treasury nor JCT reported annually, but which JCT identified in a separate report on housing tax incentives. We did not include in our list two tax expenditures Treasury reported under the housing budget subfunction because JCT did not list one and listed the other under the “other business” subfunction. We also did not include a tax expenditure that JCT reported under housing, but Treasury reported under the community development function. As a final step, we compared our list with similar lists of housing tax expenditures. Officials from Treasury and the Internal Revenue Service (IRS) also reviewed our final list of housing tax expenditures before publication of GAO-12-555SP, the e-supplement to this report that lists the tax expenditures as part of all federal housing activities.
To identify regulatory requirements, we included financial regulators (whose responsibilities include helping to ensure that regulated institutions comply with consumer financial protections) and regulators of government-sponsored enterprises (government agencies that provide oversight and supervision of government-sponsored enterprises) and focused on those regulations affecting participants in the housing market, including lenders, consumers, and others involved in homeownership and rental housing. For example, we included the Farm Credit Administration because Farm Credit System associations are authorized to engage in rural housing lending under the agency’s regulations. We also included as an administering entity the Federal Financial Institutions Examination Council, which is a formal interagency body that prescribes uniform principles, standards, and report forms for the federal examination of financial institutions and makes recommendations to promote uniformity in the supervision of financial institutions. We relied on our recent reports related to federal mortgage lending laws: Mortgage Reform: Potential Impacts of Provisions in the Dodd-Frank Act on Homebuyers and the Mortgage Market, GAO-11-656 (Washington, D.C.: July 19, 2011); and Mortgage Foreclosures: Documentation Problems Reveal Need for Ongoing Regulatory Oversight, GAO-11-433 (Washington, D.C.: May 2, 2011). To summarize federal support for homeownership and rental housing, we reviewed descriptive information about each activity. To characterize the primary purpose for each, we identified 11 categories that illustrate the primary public policy goals associated with each activity, and used the best available information to make a determination. In selecting the categories, we focused on the title, mission, objective, or goal of each activity and made a judgmental determination about common groupings for the activities.
We also shared the categories and related descriptions with the responsible agencies. The 11 purposes that we identified are listed below:

Assistance for buying, selling, or financing a home – assistance to individuals who are purchasing or refinancing a home or a preferential tax treatment on the sale of a home. Also, certain assistance to homeowners who are having difficulty making their mortgage payments.

Assistance for homeowners – assistance to current homeowners to improve or change their properties, or tax expenditures that allow homeowners to deduct costs associated with homeownership.

Increasing the availability of mortgage loans – actions taken to provide additional liquidity in the housing market, allowing private and government lenders to make additional mortgage loans.

Assistance for financing rental housing – financial assistance for the production or preservation of rental housing.

Assistance for rental property owners – financial assistance to owners of rental properties for units rented to low-income tenants, or tax expenditures that reduce the after-tax costs associated with owning and maintaining rental property.

Rental assistance for tenants – payments on behalf of tenants to reduce their rent payments.

Operation/management of rental housing – financial assistance to current owners of rental housing for the operation or management of rental housing.

Regulatory requirement – regulations affecting participants in the housing market, including lenders, consumers, and others who buy, sell, or rent housing.

Supports housing and other activities – activities that support any of the above activities under rental housing and homeownership. Also, activities in other areas in which the federal government is involved that have indirect effects on housing.

Regulator of government-sponsored enterprises – government agencies that provide oversight and supervision of government-sponsored enterprises.
Emergency assistance to housing market or current homeowners – actions taken to stabilize the housing market or provide financial assistance to homeowners to make their mortgages more affordable; or to provide temporary assistance through the tax code for homeowners.

We used a two-step process to independently assign each activity a primary purpose based on the descriptions listed above, but because many of the activities we reviewed have multiple purposes, we further characterized the type of housing assistance for each activity as related to (1) homeownership, (2) rental housing, or (3) homeownership and rental housing (both). First, an initial determination was made about the primary purpose and type of housing supported for each activity. Second, each determination was independently reviewed to verify the category assignments. When needed, the activity and category in question were discussed. For the tax expenditures, we also compared our selections with how others, including the Congressional Research Service, CBO, and JCT, had described the purpose or activity for housing-related tax expenditures. Finally, we shared the inventory with the responsible agencies and incorporated their comments as appropriate. We also identified the type of assistance associated with each activity in our inventory. In some cases, the agencies provided program dollars to an entity such as a nonprofit or local government that administered the funds to serve the primary targeted recipient. For the purposes of this report, we used “type of assistance” as it relates to the primary targeted recipient.
Generally, the programs in our inventory provided the following types of assistance:

Grant – to any other governmental or nongovernmental entity;
Direct payment – to property owner, homeowner, or tenant;
Direct loan – from government agency direct to borrower;
Guaranteed loan – through approved private lenders;
Insured loan – through approved private lenders;
Block grant – to other nonfederal governmental entities that have flexibility on use of funds;
Tax exclusions, exemptions, or deductions;
Tax credits; and
Deferrals of tax.

The inventory also contains budgetary information for each activity we identified for fiscal year 2010. To determine the budgetary obligations for spending programs, we reviewed the fiscal year 2012 President’s Budget and agencies’ budget justifications for fiscal year 2012, which contained the actual obligations for fiscal year 2010. To determine the revenue loss estimates for tax expenditures in fiscal year 2010, we reviewed the annual lists of tax expenditures Treasury and JCT compiled. Some of the activities in our inventory incurred no obligations in fiscal year 2010 for a number of reasons; for example, the activity was not part of the federal budget or was inactive in that year. We determined the data and information collected related to each activity and fiscal year 2010 budgetary information to be sufficiently reliable for the purposes of this report. We confirmed information found in the President’s Budget for fiscal year 2012, agencies’ budget justifications, and agency documentation with agency officials.
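A record in an inventory like the one described above collects a handful of descriptive fields per activity. The sketch below shows one way such a record could be represented; the field names and sample values are illustrative assumptions of mine, not the report's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record type for a housing-activity inventory like the one
# described above. Field names are illustrative, not the report's schema.
@dataclass
class HousingActivity:
    name: str
    agency: str                    # administering or implementing agency
    assistance_type: str           # e.g., "Grant", "Direct loan", "Tax credit"
    primary_purpose: str           # one of the 11 purposes identified
    housing_type: str              # "homeownership", "rental", or "both"
    fy2010_obligations: Optional[float]  # None if off-budget or inactive in 2010

# An invented example record (not an actual program from the report).
sample = HousingActivity(
    name="Example guarantee program",
    agency="HUD",
    assistance_type="Guaranteed loan",
    primary_purpose="Increasing the availability of mortgage loans",
    housing_type="homeownership",
    fy2010_obligations=None,  # some activities incurred no fiscal year 2010 obligations
)
print(sample.housing_type)
```

Keeping the "no obligations" case as an explicit optional field mirrors the point above that some activities were off-budget or inactive in fiscal year 2010.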
To determine the extent to which overlap or fragmentation occurred in the selected housing programs or activities of HUD, USDA, VA, and Treasury, we updated and expanded the work from our 2000 report on opportunities and barriers to reducing overlap and fragmentation in delivering single-family and multifamily housing programs. We focused on selected single-family and multifamily programs at HUD, USDA, and VA, and on Treasury’s Low-Income Housing Tax Credit (LIHTC) program and the mortgage interest and property tax deductions as of 2010. We identified housing programs that may have similar or overlapping objectives, provide similar services to similar beneficiaries, or are fragmented across missions. For single-family programs, we included the federal loan guarantee programs from USDA, VA, and the largest of HUD’s programs; the direct loan programs at USDA; and Treasury’s two largest tax expenditures that provide assistance to homeowners. Specifically, the single-family programs included in our scope were One- to Four-Family Home Mortgage Insurance (Section 203(b)); Rural Housing Single Family Loans - guaranteed (Section 502 guaranteed); Rural Housing Single Family Loans - direct (Section 502 direct); Very Low-Income Direct Repair Loans and Grants (Section 504); VA Home Loan Guaranty; Mortgage Interest Deduction; and Property Tax Deduction. For multifamily housing programs, we included programs that finance multifamily housing and programs that provide project-based rental assistance. As USDA has fewer housing programs, we selected these first, then selected the active programs at HUD and Treasury with similar purposes. For example, while HUD administers many programs that provide loan guarantees for multifamily housing, we selected HUD’s Section 221(d)(3) and (d)(4) programs because they are most similar to USDA’s Section 538 loan guarantee and because they had the most loan activity of HUD’s programs.
The other selected HUD, USDA, and Treasury programs are similar in that they require that the owner keep the properties available to the eligible populations or keep the rents affordable or both. Finally, USDA’s Section 521 provides rental assistance to property owners for units rented to low-income tenants. Similarly, HUD’s project-based rental assistance provides payments to property owners for the same purpose; therefore, we decided to select HUD’s project-based rental assistance (Section 8, Section 202, Section 811, and other rental supplement programs). We excluded some large HUD multifamily housing programs from this analysis because there were no similar housing programs at USDA. For example, the public housing and housing choice voucher programs were excluded. Specifically, the multifamily programs included in our scope were Supportive Housing for the Elderly (Section 202); Supportive Housing for Persons with Disabilities (Section 811); Mortgage Insurance for Rental and Cooperative Housing (Sections 221(d)(3) and (d)(4)); Project-Based Rental Assistance; Multifamily Direct Rural Rental Housing Loans (Section 515); Farm Labor Housing Loans and Grants (Sections 514 and 516); Rural Rental Housing Guaranteed Loans (Section 538); Rural Rental Assistance Payments (Section 521); and Low-Income Housing Tax Credit. We collected and analyzed information and data on the goals, program details, eligibility, product delivery, geographic locations, and populations benefiting from agency housing programs. We identified those housing programs that may have similar or overlapping objectives, provide similar services, or are fragmented across missions. Overlap and fragmentation may not lead to actual duplication, and some degree of overlap and duplication may be justified. We categorized locations based on three different USDA-developed characterizations of rural and urban, and analyzed agency data from the selected programs based on these characterizations. 
To do so, we geocoded (that is, mapped the geographic coordinates of) the addresses of properties supported by selected programs of HUD, USDA, VA, and Treasury. By comparing the frequency of properties or units within the type of county or zip code, we could assess the degree to which the agencies operated in the same types of locations or operated within certain distances of similar areas. We used MapInfo—a geographic information system designed to prepare maps and graphs that allow users to easily visualize connections between data and geography. For single-family loan analyses, we used a single year of data—active loans that were made in 2009—because they constituted the most recent data. For HUD and USDA multifamily programs, we used the portfolio as of February and May 2012, respectively. For Treasury’s LIHTC program, we used data on projects placed in service from 1998 through 2007 because they were the most reliable and complete data available and included LIHTC projects that remain within the 15-year tax credit compliance period. We analyzed the geocoded properties and units supported by the selected programs in three ways:

We obtained RHS’ program eligibility map from USDA and analyzed the land mass and population that are represented by these areas. We used the geocoded locations of the single-family guaranteed loans to determine whether the properties were within RHS ineligible areas or calculated the distance to the nearest ineligible area.

We analyzed the geocoded locations of the single-family and multifamily properties using the four-category version of the Economic Research Service’s rural-urban commuting area codes. We reported in 2005 that categorization of smaller areas provides a more precise delineation of rural than the county-based rural-urban continuum.

We distinguished four county categories by collapsing the nine categories in the Economic Research Service’s rural-urban continuum.
We analyzed the geocoded locations of the single-family and multifamily properties using this categorization. Additionally, we analyzed borrower income and location data for the HUD and VA single-family guaranteed loan programs and compared borrower income with county-level data on area median income (AMI). For RHS’s single-family guaranteed loans, we used the program eligibility limit of 115 percent of AMI for borrower income and loan-level location data. We analyzed the locations of properties using the rural-urban continuum. To determine how RHS’s field structure has changed over time and determine the work breakdown by location within RHS programs, we analyzed field office location and full-time equivalent assignment data from RHS. Also, to determine the difference in the trends between the guaranteed and direct loan programs, we analyzed single-family historical loan data from RHS and Housing Assistance Council data. To assess the reliability of the data we used for geographic and income analysis, we conducted reasonableness checks, including testing the electronic data files for any missing or illogical data, reviewed existing information about data quality, interviewed officials familiar with the data, and corroborated key information. On the basis of this review, we determined that the data used were sufficiently reliable for purposes of our analysis. Furthermore, to perform our analysis of how different income levels and geographic areas claim the deductions for mortgage interest expenses and property taxes, we reviewed IRS zip code data for tax year 2008 (the latest zip code data available). The IRS zip code data include information for every zip code for which 250 or more returns were filed. Variables include the total number of tax returns filed, ranges of adjusted gross income (AGI) reported on those returns, and the total amounts of property taxes and mortgage interest deducted (as claimed on Form 1040 Schedule A, lines 6 and 10, respectively). 
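The nearest-distance step in the geographic analysis above (how far a geocoded property sits from the nearest ineligible area) was performed in MapInfo. As a rough illustration of the underlying computation, the sketch below uses great-circle (haversine) distances; the coordinates are made up, and this is not the report's actual workflow.

```python
import math

# Haversine sketch of the nearest-distance step described above: given a
# geocoded property, find the distance in miles to the nearest of a set
# of reference points (e.g., sampled boundary points of RHS-ineligible
# areas). Coordinates are illustrative; the report's analysis used MapInfo.

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def distance_to_nearest(property_latlon, reference_points):
    """Distance from one property to the closest reference point."""
    lat, lon = property_latlon
    return min(haversine_miles(lat, lon, rlat, rlon)
               for rlat, rlon in reference_points)

# Illustrative: one property and two hypothetical boundary points.
prop = (38.9, -77.0)
boundary = [(39.0, -77.1), (38.5, -76.5)]
print(round(distance_to_nearest(prop, boundary), 1))
```

A production geographic information system would use polygon containment and projected coordinates rather than point-to-point distances, but the minimum-distance idea is the same.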
To analyze how the two deductions are used by different income levels, we compared the share of total returns in each AGI range to the share of total mortgage interest and property tax deductions in each range. We also reviewed analysis by JCT about the distribution of tax expenditures for the mortgage interest and property tax deductions by income class in 2009. To analyze how taxpayers in different geographic locations used the deductions, we used the IRS data to calculate and compare the mortgage interest and property tax deductions claimed on tax returns from each state relative to each state’s share of total returns. We also used IRS zip code data to analyze use of the mortgage interest and property tax deductions in rural and urban areas. Using IRS data to analyze the geographic use of housing tax expenditures has some limitations. The IRS data reported by state and by zip code are based on the mailing address as reported by the taxpayer. However, some taxpayers may have used the address of a tax lawyer, accountant, or a place of business. Such addresses each could have been located in a state or zip code different than the state or zip code in which the taxpayer resided. Furthermore, taxpayers report the total dollar amount of mortgage interest or property taxes claimed, but do not report whether they were taking the deduction on their main home, a second home, or both. Finally, to determine what previous studies had found about usage of housing-related tax expenditures in various geographic locations, we conducted a literature review for studies on the geographic distribution of the mortgage interest and property tax deductions. We identified and reviewed four studies that had used IRS tax return or Census data to analyze the geographic distribution of the mortgage interest and property tax deductions. 
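The share comparison described above amounts to dividing each AGI range's returns (or deducted dollars) by the respective totals and comparing the two shares. The sketch below illustrates that computation; the AGI ranges and figures are invented for illustration and are not actual IRS data.

```python
# Sketch of the share comparison described above: for each AGI range,
# compare the range's share of all returns with its share of all
# mortgage interest deducted. All figures are illustrative, not IRS data.

def shares(counts_by_range):
    """Each range's fraction of the total across all ranges."""
    total = sum(counts_by_range.values())
    return {rng: n / total for rng, n in counts_by_range.items()}

returns = {"under_50k": 600, "50k_to_100k": 300, "over_100k": 100}      # returns filed
deductions = {"under_50k": 100, "50k_to_100k": 300, "over_100k": 600}   # $ deducted

return_shares = shares(returns)
deduction_shares = shares(deductions)
for rng in returns:
    # A ratio above 1 means the range claims a larger share of the
    # deduction than its share of returns alone would suggest.
    ratio = deduction_shares[rng] / return_shares[rng]
    print(rng, round(ratio, 2))
```

The same share-of-total comparison works for the state-level analysis above, with states in place of AGI ranges.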
To determine the extent to which federal efforts have increased coordination for selected housing programs and activities, we collected and analyzed information, where available, on the efforts taken by HUD, USDA, VA, and Treasury to increase coordination or collaboration on selected housing programs. We reviewed documentation describing the efforts and obtained input from agency officials, including the Office of Management and Budget, on the single-family task force, White House Rural Council, and Rental Policy Working Group. We reviewed our prior work on interagency collaboration and key practices that can help enhance and sustain collaborative efforts, and compared the agencies’ efforts with our eight key collaboration practices to determine the extent to which the efforts were consistent with our key practices. As no law or regulation requires collaboration between HUD, USDA, VA, and Treasury, we relied on established practices and our views in examining consistency. We identified selected housing programs and activities that may benefit from greater coordination or consolidation as first stated in our 2000 report on housing programs and our prior work on tax expenditures, and supplemented that with the recent analysis of fragmentation and overlap described here. To identify some of the challenges and implications of coordinating or consolidating selected housing programs or activities, we reviewed prior GAO and other reports, and collected and analyzed information from housing industry, HUD, USDA, and Office of Management and Budget officials on potential proposals for mitigating duplication, overlap and fragmentation, and some of the challenges and implications of increased coordination or consolidation. We conducted this performance audit from July 2011 through August 2012, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Mathew J. Scirè, (202) 512-8678 or [email protected]. James R. White, (202) 512-9110 or [email protected]. In addition to the individuals named above, Andy Finkel, Assistant Director; MaryLynn Sergent, Assistant Director; Michelle Bowsky; Emily Chalmers; Andrea Dawson; Karen Jarzynka-Hernandez; Mark Kehoe; Anar Ladhani; John McGrail; John Mingus; Marc W. Molino; Alise Nacson; Barbara Roesmann; Erinn Sauer; Carrie Watkins; and Edwin Yuen made key contributions to this report.

The federal government plays a major role in providing housing assistance to homebuyers, renters, and state and local governments. It incurred about $170 billion in obligations for federal assistance and estimated forgone tax revenue in fiscal year 2010. However, fiscal realities raise questions about the efficiency of multiple housing programs and activities across federal agencies with similar goals, products, and sometimes parallel delivery systems. This report assesses the (1) extent to which there is overlap or fragmentation in selected housing programs, (2) federal collaborative efforts, and (3) implications of consolidating selected housing programs. For this report, GAO updated and expanded prior work and collected and analyzed new data, focusing on the largest programs in terms of funding. In addition to addressing these objectives, GAO developed a catalog of federal programs and activities that support rental housing and homeownership and identified what is known about the purpose, cost, eligibility, and populations served. The catalog (GAO-12-555SP) is an electronic supplement to this report. Housing assistance is fragmented across 160 programs and activities.
Overlap exists for some products offered, service delivery, and geographic areas served by selected programs, particularly in the Department of Agriculture’s (USDA) Rural Housing Service (RHS) and Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA). For instance, RHS, FHA, and the Department of Veterans Affairs (VA) all guarantee mortgage loans for homeowners. According to fiscal year 2009 data (the most recent available), FHA served a larger number of households than RHS in all areas, including a larger number of low- and moderate-income households in rural areas. Although selected HUD, USDA, and Department of the Treasury (Treasury) multifamily programs had overlapping purposes, the products, areas served, and delivery methods differed. For example, HUD, RHS, and Treasury provide financing for development and rehabilitation of multifamily housing for low- and moderate-income households, but RHS-financed properties were more concentrated in rural areas and HUD’s and Treasury’s tax credit properties were more concentrated in urban and suburban areas. Opportunities exist to increase collaboration among the agencies and potentially realize efficiencies. In February 2011, the Administration announced a task force to evaluate the potential for coordinating or consolidating homeownership loan programs at HUD, USDA, and VA. But the task force’s efforts have not yet incorporated key collaborative practices GAO identified. Practices such as identifying goals and resources and defining strategies and outcomes will be important as the task force moves forward. HUD, USDA, and Treasury also have been working to consolidate and align requirements in rental housing programs through the Rental Policy Working Group.
Although its efforts have been consistent with many key collaborative practices, the group has not taken full advantage of opportunities to reinforce agency accountability for collaborative efforts through the agencies’ annual and strategic plans, or expanded its guiding principles to evaluate areas requiring statutory action to generate savings and efficiencies. Also, in 2005 and in 2011, GAO recommended coordinating reviews of tax expenditures and related spending programs. Such reviews could help reduce overlap and inconsistencies and direct scarce resources to the most effective or efficient methods to deliver federal support. Consolidating programs carries certain implications for users, existing programs, personnel, portfolios, and associated information systems. Nevertheless, GAO suggested in 2000 that Congress consider requiring USDA and HUD to examine the benefits and costs of merging programs serving similar markets and providing similar products. Since then, certain aspects of the RHS and FHA homeownership programs have shown evidence of growing similarity, such as RHS’s shift toward loan guarantees. However, the current statutory framework imposes additional challenges on the agencies’ ability to further consolidate similar programs. Thus, any evaluations of which programs, products, systems, and processes to retain, revise, consolidate, or eliminate would involve complex analyses, trade-offs, and difficult policy decisions. The task force offers opportunities for these agencies to identify potential areas for consolidation or greater coordination and which actions would require statutory change. To enhance evaluation of coordination or consolidation of single-family programs, HUD, the Office of Management and Budget (OMB), USDA, and VA should adopt a more rigorous approach for their task force that incorporates collaborative practices.
To further improve initiatives to consolidate and align requirements in multifamily programs, HUD, USDA, and Treasury should document their efforts in annual and strategic plans. As part of these collaborative efforts, these agencies also should identify specific programs for consolidation, including those requiring statutory changes. HUD, USDA, and VA generally agreed with the recommendations; however, HUD and OMB stated that actions should wait until after the housing markets stabilize. GAO noted that achieving efficiencies and cost savings also were important. |
Treasury is the primary federal agency responsible for the economic and financial prosperity and security of the United States, and as such is responsible for a wide range of activities, including advising the President on economic and financial issues, promoting the President’s growth agenda, and enhancing corporate governance in financial institutions. To accomplish its mission, Treasury is organized into departmental offices and operating bureaus. The departmental offices are primarily responsible for the formulation of policy and management of the department as a whole, while the nine operating bureaus—including the Internal Revenue Service and the Bureau of Public Debt—carry out specific functions assigned to Treasury. Figure 1 shows the organizational structure of the department. Information technology plays a critical role in helping Treasury meet its mission. For example, the Internal Revenue Service relies on a number of information systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for business activities, including the public’s toll-free access to tax information. To assist with delinquent debt collections, Treasury is engaged in the development of the FedDebt system. In fiscal year 2008, Treasury plans to spend approximately $3 billion for 234 IT investments— including about $2 billion (about 71 percent) for 60 major investments. In 2004, we identified weaknesses in Treasury’s IT investment management processes. For example, Treasury did not describe or document work and decision-making processes for agencywide board(s). Additionally, it did not use the IT asset inventory as part of managerial decision making. As a result of these and the other identified weaknesses, we made recommendations to the Secretary of the Treasury to improve the department’s IT investment management processes. 
In 2007, we reported that Treasury had made progress in establishing many of the practices needed to build an investment foundation and manage its products as a portfolio. However, we identified additional investment management weaknesses. Specifically, the department lacked an executive investment review board that was actively engaged in the investment management process. As a result of these weaknesses, we made recommendations to Treasury for strengthening its investment management capability. In response, Treasury stated that it would take steps to strengthen its investment board operations and oversight of IT resources and programs. For example, the department recently established an executive-level investment review board. In July 2008, we reported that Treasury’s rebaselining policy fully addressed one of five practices leading organizations include in their policies and partially addressed the remaining practices. Since the time of our review, Treasury has improved its rebaselining policies and procedures to be more consistent with those of leading organizations. Several of Treasury’s projects have been deemed to be poorly planned and managed by OMB and have warranted inclusion on OMB’s Management Watch and High Risk Lists. In recent testimony summarizing our analysis of projects on these lists, we reported that Treasury had 4 projects on the Management Watch List as of July 2008, including one on the list for the fourth consecutive year. We also reported that the department had 21 high-risk projects determined to be poorly performing, most of them because of cost and schedule variances exceeding 10 percent. Pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion is a challenge for most programs. In addition to comparing budgeted to actual costs, EVM measures the value of work accomplished in a given period.
This technique compares the earned value with the planned value of work scheduled and with the actual cost of work accomplished for that period. Differences in these values are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a –$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed with the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a –$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule, whereas negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can be used to estimate the cost and time needed to complete a program. Without knowing the planned cost of completed work (that is, the earned value), it is difficult to determine a program’s true status. Earned value provides information necessary for understanding the health of a program; it provides an objective view of program status. As such, it can alert program managers to potential problems sooner than expenditures alone can, thereby reducing the chance and magnitude of cost overruns and schedule delays. Moreover, EVM directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments—areas which are often found to be inadequate based on our assessments of major IT investments. 
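The variance arithmetic just described is simple enough to sketch. The figures below reuse the contractor example from the text (in millions of dollars); the estimate-at-completion formula is one common variant (total budget divided by the cost performance index), included as an illustration rather than something the report prescribes.

```python
# Earned value management (EVM) variance sketch, reusing the contractor
# example from the text. Figures are in millions of dollars.

def cost_variance(earned_value, actual_cost):
    """CV = EV - AC; negative means the work cost more than planned."""
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    """SV = EV - PV; negative means less work was done than scheduled."""
    return earned_value - planned_value

def estimate_at_completion(budget_at_completion, earned_value, actual_cost):
    """One common EAC variant: total budget scaled by the cost
    performance index (CPI = EV / AC) observed so far."""
    cpi = earned_value / actual_cost
    return budget_at_completion / cpi

ev, ac, pv = 5.0, 6.7, 10.0
print(round(cost_variance(ev, ac), 1))      # -1.7, as in the text's example
print(round(schedule_variance(ev, pv), 1))  # -5.0, as in the text's example
# Assuming a hypothetical $50 million total budget, the observed cost
# overrun projects a higher cost at completion.
print(round(estimate_at_completion(50.0, ev, ac), 1))
```

Note that both variances use earned value as the anchor, which is why EVM can flag problems earlier than a comparison of expenditures to budget alone.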
Because of the importance of ensuring quality earned value data, in May 1998 the American National Standards Institute (ANSI) and the Electronics Industries Alliance (EIA) jointly established a national standard for EVM systems. This standard delineates 32 guidelines on how to establish a sound EVM system, ensure that the data coming from the system are reliable, and use the earned value data to manage the program. See appendix III for details on the 32 guidelines. In June 2002, OMB’s Circular A-11 included the requirement that agencies use a performance-based acquisition management system based on the May 1998 ANSI/EIA Standard to obtain timely information regarding the progress of capital investments. This requirement was restated in subsequent versions of the circular and, in August 2005, OMB issued a memorandum that outlined steps that agencies must take for all major and high-risk development projects to better ensure improved execution and performance and to promote more effective oversight through the implementation of EVM. Specifically, this guidance directs agencies to:

1. develop comprehensive policies to ensure that agencies are using EVM to plan and manage development activities for major IT investments,
2. include a provision and clause in major acquisition contracts or agency in-house project charters directing the use of an EVM system that is compliant with the ANSI standard,
3. provide documentation demonstrating that the EVM system complies with the ANSI standard,
4. conduct periodic surveillance reviews, and
5. conduct integrated baseline reviews on individual programs to finalize the cost, schedule, and performance goals.

Building on OMB’s guidance, in July 2007, we issued an exposure draft on best practices for estimating and managing program costs. This draft highlights policies and practices adopted by leading organizations to implement an effective EVM program.
Specifically, the guidance identifies the need for organizational policies to require clear criteria for which programs are required to use EVM, compliance with the ANSI standard, a standard product-oriented structure for defining work products, integrated baseline reviews, specialized training, criteria and conditions for rebaselining programs, and an ongoing surveillance function. In addition, the guidance identifies key practices that individual programs can use to ensure that they establish a sound EVM system, that the earned value data are reliable, and that they are used to support decision making. OMB refers to this guide as a key reference manual for agencies in its 2006 Capital Programming Guide. Treasury’s approach to EVM involves several entities, including the Office of the Chief Information Officer (OCIO) and the Office of the Procurement Executive (both of which are under the Assistant Secretary for Management and Chief Financial Officer), as well as Capital Planning and Investment Control (CPIC) desk officers. Responsibility for the administration and maintenance of Treasury’s EVM policy lies with the OCIO. Specifically, the CPIC group within that office supports the department’s investment management oversight process. CPIC desk officers are responsible for oversight of one or more bureaus and serve as the bureau CPIC coordinator’s primary point of contact, responsible for scoring exhibit 300s and coordinating information sharing with the departmental budget office and other critical partners. Further, they develop bureau-level IT portfolio expertise and provide input and recommendations to the bureaus, Treasury’s CIO, and Treasury’s Investment Review Board. Working with the OCIO to identify acquisitions which require earned value management, the Office of the Procurement Executive is responsible for ensuring that the identified acquisitions throughout Treasury and its bureaus contain EVM requirements that are consistent with the Federal Acquisition Regulation.
According to agency officials, 40 investments are currently using EVM. Project managers and contractors are required to gather the monthly costs and progress associated with each of their investments. The information gathered includes the planned value, actual costs, and earned value. This information is analyzed and used for corrective actions at the bureau level. Quarterly, the bureaus forward investment performance reports to the OCIO’s CPIC office, which reviews them and forwards summaries to Treasury’s Technical Investment Review Board. In January 2008, Treasury convened an EVM working group, which has representation from every bureau. According to the CPIC Director, the working group has several objectives, including establishing (1) the level of reporting for contractors and government employees based on thresholds; (2) rule sets, processes, and procedures for the development of work breakdown structures, integrated baselines, standard roll-up into milestones, and the use of EVM systems at the bureaus; (3) bureau monthly recordkeeping requirements; (4) standard procedures for quarterly uploading of data from the bureaus into Treasury’s automated investment management tool; and (5) requirements for maintaining documentation to support the project manager validations and bureau CIO certifications of cost, schedule, and performance data for major and nonmajor investments. According to the CPIC Director, the working group is also revising the department’s EVM policy. While Treasury has established policy to guide its implementation of EVM, key components of this policy are not fully consistent with best practices. Without a comprehensive policy, the department risks implementing policies inconsistently and using inaccurate cost and schedule performance data.
We recently reported that leading organizations establish EVM policies that:

- establish clear criteria defining which programs are to use EVM;
- require programs to comply with a national ANSI standard;
- require programs to use a standard structure for defining work products;
- require programs to conduct detailed reviews of expected costs, schedules, and deliverables (called an integrated baseline review);
- require and enforce EVM training;
- define when programs may revise cost and schedule baselines (called rebaselining); and
- require system surveillance—routine validation checks to ensure that major acquisitions are complying with agency policies and standards.

Table 1 further describes these seven key components of an effective EVM policy. In December 2005, Treasury developed the EVM Policy Guide, which provides an approach for implementing EVM requirements for the department’s major investments. The policy currently in place fully addresses three of the seven components, partially addresses three, and does not address one (see table 2). Specifically, Treasury has policies and guidance that fully address criteria for implementing EVM on all major investments, for the conduct of integrated baseline reviews, and for rebaselining. Criteria for implementing EVM on all major investments: The department’s policy requires all of its major development, modernization, and enhancement investments to use EVM. Investments in steady-state (i.e., those with no development, modernization, or enhancement milestones) and those ending prior to September 2007 were not required by the department to implement the EVM requirement. Integrated baseline review: In order to verify whether the performance measurement baseline is realistic and to ensure that the government and contractor mutually understand program scope, schedule, and risks, Treasury’s policy calls for an integrated baseline review.
According to the policy, this review should be completed as soon as possible but no later than 6 months after the contract is awarded. Furthermore, another review may be required following any significant contract modifications. Rebaselining criteria: Treasury developed a rebaselining policy which specifies that a valid reason for requesting a new baseline must be clearly understood and documented. The policy also specifies acceptable reasons for an investment team to request a rebaseline. Further, to submit a rebaseline request, investment teams are required to explain why the current plan is no longer feasible and develop realistic cost and schedule estimates for remaining work that has been validated and spread over time to the new plan. However, Treasury’s policy and guidance do not fully address the best practices represented by the following three key components: addressing compliance with the ANSI standard, establishing a standard structure for defining work products, and conducting system surveillance reviews. Training is not addressed by the Treasury policy. Compliance with the ANSI standard: Treasury policy states that major investments are to comply with ANSI standards. Further, it outlines processes and guidelines to assist its bureaus in achieving ANSI-compliant processes. However, the policy lacks sufficient detail for addressing some of the criteria defined in the standard, including the use of standard methods for EVM data collection across the department and cost performance reporting. For example, the policy does not discuss the use of templates or tools to help ensure that EVM data are collected consistently and reliably. Furthermore, the policy does not discuss what cost performance report formats are to be used.
Until Treasury’s policy includes a methodology that standardizes data collection and reporting, data integrity and reliability may be in jeopardy and management may not be able to make informed decisions regarding the investments and their next steps. Standard structure for defining the work products: Treasury’s EVM policy calls for a product-oriented work breakdown structure that identifies and documents all activities associated with the investment. However, it does not require the use of common elements in its development. According to the CPIC Director, Treasury’s EVM working group plans to establish rule sets, processes, and procedures for the development of work breakdown structures. Until Treasury’s policy provides more guidance on the systematic development and documentation of work breakdown structures, including the incorporation of standardized common elements, it will be difficult to ensure that the entire effort is consistently included in the work structure and that investments will be planned and managed appropriately. System surveillance: According to Treasury’s policy, the contractor’s EVM system is to be validated using the industry surveillance approach identified by the National Defense Industrial Association’s Surveillance Guide. Additionally, Treasury is to require clear evidence that the system continues to remain compliant or that the contractor has brought the system back into compliance. However, the policy lacks guidance on conducting surveillance reviews on the government’s (i.e., the department’s) EVM system. Until Treasury’s policy specifies reviews of the government’s systems, Treasury risks not being able to effectively manage cost, schedule, and technical performance of its major investments. Training requirements: Treasury’s policy does not specify EVM training requirements for program management team members or senior executives.
Furthermore, the policy does not require the agency to maintain training logs confirming that all relevant staff have been appropriately trained. Until the department establishes policy for EVM training requirements for relevant personnel, it cannot effectively ensure that its program staff have the appropriate skills to validate and interpret EVM data and that its executives fully understand the data they are given in order to ask the right questions and make informed decisions. According to the CPIC Director, Treasury’s EVM working group, which was established in January 2008, is working on the development of a revised EVM policy, which, according to the Deputy Assistant Secretary for Information Systems and Chief Information Officer, is expected to be finalized by October 2008. Addressing these weaknesses could help Treasury optimize the effective use of EVM. While the six programs we reviewed were all using EVM, none had fully implemented any of the practices for establishing a comprehensive EVM system, ensuring that the data resulting from the system are reliable, or using earned value data for decision-making purposes. These weaknesses exist in part because, as previously noted, Treasury’s policy does not fully address key elements and because the department does not have a mechanism to enforce its implementation. Until Treasury adequately implements EVM, it faces an increased risk that some programs will experience cost and schedule overruns or deliver less capability than planned. In our work on best practices, we identified three key management areas that leading organizations use to manage their acquisitions: establishing a comprehensive EVM system, ensuring reliable data, and using earned value data to manage the investment (see table 3). Table 4 provides a summary of how each investment is using EVM in the key practices areas and is followed by our analysis of these areas.
The investments we reviewed are the Financial Management Service’s FedDebt and Financial Information and Reporting Standardization (FIRST); the Departmental Offices’ DC Pension System to Administer Retirement (STAR); the Bureau of Public Debt’s Treasury Automated Auction Processing System (TAAPS); and the Internal Revenue Service’s Integrated Financial System/Core Financial System (IFS) and Enterprise Data Access Strategy (EDAS). These investments were identified by the department as major investments and all had milestones in development, modernization, or enhancement at the time of our review. Appendix II includes information regarding the selection of these investments and appendix IV provides a description of each. Comprehensive EVM systems were not consistently established to manage the six investments. Although aspects of a comprehensive system were present, none of the investments fully met all the best practices comprising this management area. For example, of the six investments, only IFS and STAR adequately defined the scope of effort using a work breakdown structure. Three investments developed a work breakdown structure; however, the work packages could not be traced back to EVM project management documents, such as the project management baseline, the work breakdown structure, and the statement of work or project charter. For example, although EDAS had detailed work breakdown structures, correlation could not be established among the work breakdown structure elements, the contract deliverables, and the elements being reported in the contract performance reports. Officials for the remaining investment—TAAPS—stated that there was a documented work structure; however, they did not provide evidence of this. As another example, performance measurement baselines were developed for five of six investments. However, the baselines had noted weaknesses.
Specifically, four investments—FIRST, IFS, STAR, and TAAPS—had a baseline, but some elements were not included, such as planned costs for STAR. Further, for TAAPS, independent validation of the investment’s baseline was not conducted. FedDebt had a performance measurement baseline which underwent integrated baseline validation in March 2006. However, the validation indicated that there was no time-phased planned value at the individual contract level, nor was there a roll-up at the program level. No explanation was provided of how the monthly performance data on individual FedDebt projects were rolled up to the investment level as required by OMB. Further, EDAS did not have a time-phased budget baseline or a performance measurement baseline. None of the six investments fully implemented the steps to ensure data reliability from their EVM systems. Five partially implemented the steps, and one investment—EDAS—did not meet any of the steps. When executing work plans and recording actual costs, two of the six investments incorporated government costs with contractor costs. For example, FedDebt included both government and contractor costs in its quarterly reporting. However, while IFS had a mechanism for recording monthly government costs, it did not have a method that combined both contractor and government costs for review on a monthly basis. Also, few if any checks are performed to measure the quality of EVM data and, according to agency officials, Treasury currently focuses more on reporting the data than on their reliability. In addition, five of the six investments did not adequately analyze performance data and record the variances from the baseline. The IFS investment included monthly reviews of performance reports and included cost and schedule variances. The remaining investments conducted analyses of performance data, but did not all provide documentation to show cost and schedule updates and variances.
For example, according to officials, TAAPS’ cost and schedule variances were calculated at the project and program levels, but evidence of this could not be provided. Further, as part of its performance reporting, STAR did not calculate the cost variance and incorrectly calculated the schedule variance. None of the six investments fully implemented the two practices needed to ensure the use of EVM data for decision-making purposes. Specifically, EDAS did not take management action to mitigate risks identified through its EVM performance data, or update the performance measurement baseline as changes occurred; IFS addressed one of these practices, and the remaining investments only partially addressed them. In order to support management action to mitigate risks identified through EVM performance data—variance analysis, corrective action planning, and reviewing estimates at completion—the IFS project manager was provided with a monthly performance report that indicated when cost and/or schedule variances exceeded acceptable tolerances. Further, investment-level status was provided to bureau-level and agency-level management to allow them to make capital planning and investment control decisions. However, the remaining five investments did not fully take action to mitigate risks for a variety of reasons. For example, for FedDebt, although some monthly EVM data were included in quarterly reports, no documentation was provided on how such data were being used to manage at the project or investment level. A similar situation exists for the TAAPS investment where, although agency officials stated that meetings were routinely held to discuss performance issues, no evidence was provided that a systematic method existed to use EVM metrics for decision-making purposes.
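For illustration, the kind of variance calculation and tolerance check that a monthly performance report might apply can be sketched as follows. The figures and the 10 percent tolerance (echoing the threshold OMB applied to poorly performing projects) are hypothetical, not drawn from Treasury’s actual reporting.

```python
# Hypothetical monthly variance check; not Treasury's actual reporting logic.
# PV = planned value, EV = earned value, AC = actual cost (in $ millions).

def variance_percentages(pv, ev, ac):
    """Cost variance as a share of earned value; schedule variance as a share of planned value."""
    cv_pct = (ev - ac) / ev * 100.0
    sv_pct = (ev - pv) / pv * 100.0
    return cv_pct, sv_pct

def out_of_tolerance(pv, ev, ac, tolerance_pct=10.0):
    """Return a message for each variance whose magnitude exceeds the tolerance."""
    cv_pct, sv_pct = variance_percentages(pv, ev, ac)
    flags = []
    if abs(cv_pct) > tolerance_pct:
        flags.append(f"cost variance {cv_pct:+.1f}% exceeds {tolerance_pct}% tolerance")
    if abs(sv_pct) > tolerance_pct:
        flags.append(f"schedule variance {sv_pct:+.1f}% exceeds {tolerance_pct}% tolerance")
    return flags

# A month with $4.5M earned against $5.0M planned, costing $5.2M:
flags = out_of_tolerance(pv=5.0, ev=4.5, ac=5.2)  # flags the cost variance only
```

A check of this kind is what gives managers the early warning described above: here the roughly -15.6 percent cost variance is flagged while the -10.0 percent schedule variance sits exactly at the tolerance boundary and is not.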
Regarding the update of performance measurement baselines as changes occur, one investment team stated that it did not have any baseline changes; however, documentation showed that the schedule for the investment had been changed three times. In addition, although IFS maintained a log for tracking changes, we could not determine that these changes had been incorporated into the baseline. Further, according to officials, EDAS had a scope change in fiscal year 2007; however, the investment team was not able to provide documentation reflecting the corresponding change in the performance measurement baseline. The inconsistent application of EVM across investments exists in part because the department does not have a policy that fully addresses key components including training and system surveillance and because the department is leaving the implementation of the policy largely up to bureaus. For example, project management staff had not consistently received training, an item which is not addressed in the policy, and surveillance reviews, which are partially addressed in the policy, had not been performed for any of the investments. Furthermore, the department does not have a process for ensuring effective EVM implementation. However, in comments on a draft of this report, the Deputy Assistant Secretary for Information Systems and Chief Information Officer stated that the department is working with the bureaus to establish mechanisms and tools to ensure full compliance with the provisions of the updated EVM policy, which is to be finalized by October 2008. These mechanisms and tools would help address the implementation gaps we have identified. Treasury has established a policy that addresses criteria for implementing EVM, integrated baseline reviews, and project rebaselining consistent with best practices. 
However, it does not fully address other elements including compliance with the ANSI standard and system surveillance, which are necessary for effective implementation. With regard to implementation, the department is not fully addressing key practices needed to effectively manage its critical investments. Specifically, none of the six programs we reviewed were fully implementing any of the practices associated with establishing a comprehensive EVM system, ensuring the reliability of the data resulting from the system, or using earned value data to make decisions. The gaps in implementation are due in part to the weaknesses with the policy and to the low level of oversight provided by the department. Until the department defines a comprehensive policy and establishes a process for ensuring effective EVM implementation, it will be difficult for Treasury to optimize the effectiveness of EVM as a management tool and consistently implement the fundamental practices needed to effectively manage its critical programs. To improve Treasury’s ability to effectively implement EVM on its IT acquisition programs, we recommend that the Secretary of Treasury direct the Assistant Secretary for Management, in collaboration with the Chief Information Officer, to take the following nine actions: Define a comprehensive EVM policy that specifies

- a methodology that standardizes EVM data collection and reporting compliant with the ANSI standard;
- a systematic approach to the development and documentation of work breakdown structures, including the incorporation of standardized common elements;
- guidance on conducting surveillance reviews on the government’s EVM system; and
- EVM training requirements for relevant personnel.
Implement a process for ensuring effective implementation of EVM throughout the department by

- establishing a comprehensive EVM system by, among other things, defining the scope of effort using a work breakdown structure that allows for traceability across EVM project management documents, and ensuring the development of validated performance measurement baselines that include planned costs and schedules;
- ensuring that the data resulting from the EVM system are reliable, including executing the work plan and recording both government and contractor costs; and
- ensuring that the program management team is using earned value data for decision-making by systematically using EVM performance metrics in making the ongoing monthly decisions required to effectively manage the investment, and properly documenting updates to the performance measurement baseline as changes to the cost and schedule occur.

In written comments on a draft of this report, the Department of Treasury’s Deputy Assistant Secretary for Information Systems and Chief Information Officer generally agreed with our findings and stated that the department will issue a revised version of the EVM policy that will address our nine recommendations by October 2008. He also noted that the department is working with the bureaus to establish mechanisms and tools including processes for conducting system surveillance and monitoring of EVM data to ensure compliance with the policy. Treasury also provided technical comments, which we have addressed as appropriate. Treasury's written comments are reprinted in appendix I. We will be sending copies of this report to interested congressional committees, the Secretary of Treasury, and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or by e-mail at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to determine whether the Department of the Treasury and its key component agencies (1) have the policies in place to effectively implement earned value management (EVM) and (2) are adequately using EVM techniques to manage critical system investments. To assess whether Treasury has policies in place to effectively implement EVM, we analyzed Treasury and its component bureaus’ policies and guidance that support EVM implementation departmentwide as well as on capital planning and investment control guidance. Specifically, we compared these policies and guidance documents to both Office of Management and Budget requirements and key best practices recognized within the federal government and industry for the implementation of EVM. These best practices are contained in an exposure draft version of our cost guide. We also interviewed key agency officials, including the Director for Capital Planning and Investment Control, to obtain information on the agency’s ongoing and future EVM plans. To determine whether Treasury is adequately using EVM techniques to manage critical system investments, we reviewed 6 of the 40 systems the department required to use EVM. Specifically, we selected investments from each of the four component agencies identified as having eligible investments. We selected one investment from the Bureau of Public Debt, another from Departmental Offices, and two from the Financial Management Service and the Internal Revenue Service since they had a greater percentage of investments using EVM. 
With the exception of the Bureau of Public Debt which had only one major investment, we selected investments based on (1) size, (2) EVM history (i.e., use of EVM for a long enough period of time to have some history of EVM data), and (3) completion date (i.e., those that would not end during the course of our review). The 6 projects selected were FedDebt and Financial Information and Reporting Standardization from the Financial Management Service, DC Pension System to Administer Retirement (STAR) from the Departmental Offices, Treasury Automated Auction Processing System (TAAPS) from the Bureau of Public Debt, and Integrated Financial System/Core Financial System and Enterprise Data Access Strategy from the Internal Revenue Service. Our review was not intended to be generalizable, but instead to illustrate the status of a variety of programs. To determine the extent of each program’s implementation of sound EVM, we compared program documentation to the 11 fundamental EVM practices implemented on acquisition programs of leading organizations, as identified in the Cost Assessment Guide. We determined whether the program fully implemented, partially implemented, or did not implement each of the practices. Finally, we interviewed program officials to obtain clarification on how EVM practices are implemented and how the data are validated and used for decision-making purposes. Regarding the reliability of cost data, we did not test the adequacy of agency or contractor cost- accounting systems. Our evaluation of these cost data was based on what we were told by the agency and the information they could provide. We conducted this performance audit from August 2007 to July 2008 in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Organizations must be able to evaluate the quality of an EVM system in order to determine the extent to which the cost, schedule, and technical performance data can be relied on for program management purposes. In recognition of this, the American National Standards Institute (ANSI) and the Electronics Industries Alliance (EIA) jointly established a national standard for EVM systems—ANSI/EIA 748-B (commonly referred to as the ANSI standard). This standard consists of 32 guidelines addressing organizational structure; planning, scheduling, and budgeting; accounting considerations; analysis and management reports; and revisions and data maintenance. These standards comprise three fundamental management functions for effectively using EVM: establishing a sound earned value management system, ensuring that the EVM data are reliable, and using earned value data for decision-making purposes. Table 5 lists the management functions and the guidelines. Below is a description of the six investments we reviewed to assess whether the department is adequately using EVM techniques to manage critical system investments. FedDebt supports the federal government’s delinquent debt collection programs, which were centralized in the Financial Management Service (FMS) pursuant to the Debt Collection Improvement Act of 1996. FedDebt also supports Treasury’s strategic goal to manage the U.S. Government’s finances effectively and the FMS strategic goal to maximize collection of government delinquent debt by providing efficient and effective centralized debt collection services. FedDebt plans to integrate the collection services that FMS provides to Federal Program Agencies through its other programs. FIRST is intended to automate the maintenance and distribution of the U.S. Standard General Ledger accounting rules and guidance. 
It also plans to integrate the general ledger guidance with the collection of all accounting trial balance data, thus providing a standardized method of collecting, storing, reporting, and analyzing such data. Furthermore, the investment is expected to facilitate accounting validations of the agency trial balance data to provide better feedback to agencies concerning the accuracy and consistency of these data. STAR is to assist Treasury and the District of Columbia Government by automating eligibility determinations, pension benefit calculations, and payment delivery, thereby allowing for (1) increased accuracy of pension benefit calculations and (2) improved customer service. Key functionality for this investment includes serving annuitants and survivors of the Judges Pension Plan; making benefit payments to 11,000 teachers, police, and firefighters who retired before July 1997, as well as their survivors; and automatically calculating the gross annuity and split benefit payment for teachers, police, and firefighters, to service those annuitants who retired after June 1997. TAAPS is intended to ensure that all auction-related operations are carried out flawlessly and securely. Key among auction activities are the announcement of upcoming Treasury auctions; bid submission and processing; calculation of awards; publication of results; creation and dissemination of settlement wires; creation of accounting reports and reports needed for auctions analysis; and the storage of all securities-, bidder-, and auction-related information. TAAPS is expected to make numerous intersystem interfaces and manual processes obsolete by consolidating auction processing requirements into one system and providing appropriate backup and disaster recovery systems and services.
IFS is intended to operate as the Internal Revenue Service’s (IRS) new accounting system of record, replacing IRS’s core financial systems, including expenditure controls, accounts payable, accounts receivable, general ledger, budget formulation, and purchasing controls. IRS intends to upgrade to software that provides federal accounting functionality. By migrating to federal accounting practices, IFS is to provide benefits, such as eliminating current work-around processes, improving project management capability, and enhancing budget reports. EDAS is intended to consolidate data from multiple Business Systems Modernization applications and produce a consolidated data repository source to be used for issue detection and case selection. The goal is to develop integrated data solutions that allow IRS to retire duplicative and costly data extracts. The first major project is to develop an Integrated Production Model as a central repository for corporate data and make those data available to projects currently in development. Long-term benefits include the retirement of multiple systems and efficiency gains from improved processes. In addition to the contact named above, Sabine Paul, Assistant Director; Neil Doherty; Mary D. Fike; Nancy Glover; Sairah R. Ijaz; Rebecca LaPaze; and Paul B. Middleton made key contributions to this report.

In 2008, the Department of Treasury (Treasury) plans to spend approximately $3 billion on information technology (IT) investments--the third largest planned IT expenditure among civilian agencies. To more effectively manage such investments, in 2005 the Office of Management and Budget required agencies to use earned value management (EVM). EVM is a project management approach that, if implemented appropriately, provides objective reports of project status, produces early warning signs of impending schedule delays and cost overruns, and provides unbiased estimates of a program's total costs.
GAO was asked to assess whether the department and its key component agencies (1) have the policies in place to effectively implement EVM and (2) are adequately using EVM techniques to manage critical system investments. GAO compared agency policies to best practices identified in the Cost Assessment Guide and reviewed the implementation of key EVM practices for several investments. The Department of the Treasury's EVM policy is not fully consistent with best practices. Specifically, of seven best practices that leading organizations address in their policies, Treasury's policy fully addresses three, partially addresses three, and does not address the training component. According to the Director for Capital Planning and Investment Control, the department is currently revising its policy and, according to the Deputy Assistant Secretary for Information Systems and Chief Information Officer, expects to finalize it by October 2008. Until Treasury develops a comprehensive policy to guide its efforts, it will be difficult for the department to optimize the effectiveness of EVM as a management tool. The department and its bureaus are not fully implementing key EVM practices needed to effectively manage their critical system investments. Specifically, the six programs at Treasury that GAO reviewed were not consistently implementing practices needed for establishing a comprehensive EVM system, ensuring that data from the system are reliable, and using the data to help manage the program. For example, when executing work plans and recording actual costs, a key practice for ensuring that the data resulting from the EVM system are reliable, only two of the six investments reviewed incorporated government costs with contractor costs. These weaknesses exist in part because Treasury's policy is not comprehensive and because the department does not have a process for ensuring effective EVM implementation.
Unless the department consistently implements fundamental EVM practices, it may not be able to effectively manage its critical programs.
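The status signals EVM is meant to produce follow from a handful of standard earned value formulas built on planned value, earned value, and actual cost (as defined in the ANSI/EIA-748 standard). A minimal sketch, using hypothetical figures rather than data from any Treasury investment:

```python
# Standard earned value formulas (illustrative only; the figures are
# hypothetical, not drawn from any Treasury program).
# pv = planned value (BCWS), ev = earned value (BCWP),
# ac = actual cost (ACWP), bac = budget at completion.

def evm_metrics(pv, ev, ac, bac):
    cv = ev - ac      # cost variance (negative means over cost)
    sv = ev - pv      # schedule variance (negative means behind schedule)
    cpi = ev / ac     # cost performance index
    spi = ev / pv     # schedule performance index
    eac = bac / cpi   # estimate at completion, assuming current cost
                      # efficiency continues for the remaining work
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# A program budgeted at $10M that planned $4M of work to date,
# earned $3M of it, and spent $3.6M doing so:
m = evm_metrics(pv=4.0, ev=3.0, ac=3.6, bac=10.0)
print(m)
```

Reliable inputs are the crux of such calculations: if actual cost omits government effort, as it did for four of the six Treasury investments reviewed, the variances and the estimate at completion will understate the true position.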
NASA’s Vision for Space Exploration calls for a return of humans to the Moon and eventual human spaceflight to Mars. In September 2005, NASA outlined an initial architecture for implementing the Vision in its Exploration Systems Architecture Study (ESAS). NASA is implementing this architecture under the Constellation program. Among the first major efforts of this program is the development of new spaceflight systems—including the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle. Ares I and Orion are currently targeted for operation no later than 2015 (see fig. 1). As illustrated by figure 1 above, the Constellation program, including the Ares I and Orion projects, is approaching the end of the formulation phase of NASA’s acquisition life cycle for spaceflight programs and projects. The purpose of the formulation phase is to establish a cost-effective program that is demonstrably capable of meeting the agency’s objectives. The formulation phase concludes with the preliminary design review and a non-advocate review, which together mark the transition to the implementation phase. During the implementation phase, the program will execute plans developed during the formulation phase. Our work on best practices over the past decade has shown that success in large-scale development efforts like Constellation depends on establishing an executable business case before committing resources to a new product development effort. In its simplest form, a business case requires a balance between the concept selected to satisfy customer needs and the resources—technologies, design knowledge, funding, time, and management capacity—needed to transform the concept into a product. At the heart of a business case is a knowledge-based approach that requires that managers demonstrate high levels of knowledge as the program proceeds from technology development to system development and, finally, production.
Ideally, in such an approach, key technologies are demonstrated before development begins, the design is stabilized before prototypes are built or production begins, and testing is used to validate product maturity at each level. At each decision point, the balance among time, money, and capacity is confirmed. In essence, knowledge supplants risk over time. Having adequate knowledge about requirements and resources is particularly important for a program like Constellation because human spaceflight development projects are inherently complex, difficult, and costly. We have reported on several occasions that within NASA’s acquisition framework, the preliminary design/non-advocate review—the hurdle marking transition from program formulation to program implementation—is the point at which development projects should have a sound business case in hand. NASA’s Systems Engineering Policy states that the preliminary design review demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints. NASA realized that the Orion project was not ready to complete the preliminary design review process as planned and delayed its initiation from summer 2008 to summer 2009. Furthermore, although NASA officially closed the Ares I preliminary design review process in September 2008, it deferred resolution of the thrust oscillation issue until the Constellation program preliminary design review in March 2010. The business case is the essential first step in any acquisition program that sets the stage for the remaining stages of a program, namely the business or contracting strategy and actual execution or performance. If the business case is not sound, execution may be subpar. This does not mean that all potential problems can be eliminated and perfection achieved, but rather that sound business cases can help produce better outcomes and better return on investment. 
If any one element of the business case is weak, problems are more likely in implementation. Thus far in the Constellation program, NASA’s failure to establish a sound business case for both the Ares I and Orion projects early is manifesting itself in schedule delays and cost increases. The Constellation program has not yet developed all of the elements of a sound business case needed to justify entry into implementation. Progress has been made; however, technical and design challenges are still significant, and until they are resolved NASA will not be able to reliably estimate the time and money needed to execute the program. In addition, cost issues and a poorly phased funding plan continue to hamper the program. Consequently, NASA is changing the acquisition strategy for the Orion project as the agency attempts to increase confidence in its ability to meet a March 2015 first crewed launch. However, technical design and other challenges facing the program are not likely to be overcome in time to meet the 2015 date, even with changes to scope and requirements. Technical and design challenges within the Constellation program are proving difficult, costly, and time-intensive to resolve. The Constellation program tracks technical challenges in its Integrated Risk Management Application (IRMA). NASA procedures recommend that programs identify and track risks as part of continuous risk management. As of June 9, 2009, IRMA was tracking 464 risks for Ares I and Orion—207 high risks, 206 medium risks, and 51 low risks. We have reported on some of these areas of technical challenge in the past, including thrust oscillation, thermal protection system, common bulkhead, and J-2X nozzle extension. In addition to these challenges, our recent work has highlighted other technical challenges, including Orion mass control, vibroacoustics, lift-off drift, launch abort system, and meeting safety requirements.
While NASA has made progress in resolving each of these technical challenges, significant knowledge gaps remain in each of these areas. Descriptions of these technical challenges follow. Thrust oscillation, which causes shaking during launch and ascent, occurs in some form on every solid rocket engine. Last year, we reported that computer modeling indicated that the thrust oscillation frequency and magnitude may be outside the limits of the Ares I design and could potentially cause excessive vibration in the Orion capsule. Agency officials stated that thrust oscillation is well understood and that they are pursuing multiple solutions. These include incorporating a passive damping system inside the first stage solid rocket booster aft skirt that will act like a shock absorber during launch; adding a composite structure and springs between the first and second stages to isolate the upper stage and crew vehicle from the first stage; and possibly using the upper stage propellant tanks to offset thrust oscillation in the first stage. Officials said that NASA will be unable to verify the success of these solutions until thrust oscillation occurs during an integrated flight. Officials noted that because thrust oscillation is not expected to occur in every flight, it is difficult to forecast when the solutions will be verified. The Orion vehicle requires a large-scale ablative heat shield, at the base of the spacecraft, to survive reentry from earth orbit. These heat shields burn up, or ablate, in a controlled fashion, transporting heat away from the crew module during its descent through the atmosphere. NASA is using an ablative material derived from the substance used in the Apollo program. After some difficulties, NASA was successful in recreating the material.
Because it uses a framework with many honeycomb-shaped cells, each of which must be individually filled without voids or imperfections, the heat shield may be difficult to manufacture repeatedly to consistent standards. According to program officials, during the Apollo program the cells were filled by hand. The contractor plans to automate the process for the Orion Thermal Protection System, but this capability is still being developed. The common bulkhead separates the hydrogen and oxygen fuel within the Ares I upper stage fuel tank. The initial Ares I design employed a simpler two-tank configuration with lower manufacturing costs but did not meet mass requirements. According to project officials, the common bulkhead represents the critical path in both the development and manufacturing of the upper stage. Lessons learned from the Apollo program indicate that common bulkheads are complex and difficult to manufacture and recommend against their use. According to NASA officials, the difficulty of designing and manufacturing common bulkheads stems from the sheer size of the components and the tight tolerances to which they must be manufactured. To accelerate the manufacturing process, NASA is exploring using an oven with a vacuum bag instead of an autoclave to bond and cure the metallic and composite materials used in the manufacture of the common bulkhead. If this process proves unsuccessful, the program may encounter schedule delays. We have reported in prior years that although the J-2X engine is based on the J-2 and J-2S engines used on the Saturn V and leverages knowledge from subsequent engine development efforts, the number of planned changes is such that, according to NASA review boards, the effort essentially represents a new engine development. A risk within this development is a requirement for a nozzle extension to meet performance requirements. NASA originally planned to pursue a composite nozzle.
However, NASA eliminated the composite nozzle extension from the J-2X design because of cost and other considerations and adopted a unique aluminum alloy design, which, according to agency officials, should reduce costs but has the potential to decrease engine performance and increase mass. Analysis indicates that the alloy nozzle is more likely to be affected by heat than a composite nozzle. In essence, while the alloy nozzle should withstand the heat environment, the composite nozzle allowed for improved performance margins. According to officials, to mitigate the potential problem, NASA is using a proven aluminum alloy with a honeycomb design, similar structurally to the Space Shuttle external tank, which will reduce weight. Contractor officials stated that they will continue to modify the nozzle design as test results are received and analyzed. Controlling mass has led to significant design changes to the Orion vehicle. Our previous work has shown that controlling mass is a key factor in the development of space systems. As the mass of a particular system increases, the power or thrust required to launch that system will also increase. This could result in the need to develop additional power or thrust capability to lift the system, leading to additional costs, or to stripping down the vehicle to accommodate current power or thrust capability. For example, NASA went through the process in 2007 of zero-basing the design for the Orion to address mass concerns. In its efforts to reduce the mass of the Orion vehicle, NASA chose to move from a nominal land landing to a nominal water landing, reducing mass by eliminating air bags and, according to officials, by reducing the number of parachutes. NASA also incorporated jettisonable, load-bearing fairings into the Orion’s service module design that, according to officials, saved 1,000 pounds.
This change, however, increased development risk because the fairing design has no historical precedent and the fairing panels may not deploy properly and could recontact the Orion vehicle or the Ares I rocket after they are jettisoned. Another vibration-related issue is vibroacoustics: the pressure of acoustic waves produced by the firing of the Ares I first stage and by the rocket’s acceleration through the atmosphere, which may cause unacceptable structural vibrations throughout Ares I and Orion. According to agency officials, NASA is still determining how these vibration and acoustic environments may affect the vehicles. NASA is concerned that severe vibroacoustics could force NASA to qualify Ares I and Orion components to higher vibration tolerance thresholds than originally expected. For example, if current concerns are realized, key subsystems within the Upper Stage would be unable to meet requirements, would fail qualification testing, and would have to be redesigned. Analysis of the Ares I flight path as it lifts off from the launch pad indicates the rocket may drift during launch and could possibly hit the launch tower or damage the launch facilities with the rocket plume. Factors contributing to lift-off drift include wind speed and direction, misalignment of the rocket’s thrust, and duration of lift-off. NASA plans to establish a clear, safe, and predictable lift-off drift curve by steering the vehicle away from the launch tower and not launching when southerly winds exceed 15 to 20 knots. NASA continues to address challenges in designing the launch abort system, which pulls the Orion capsule away from the Ares I launch vehicle in the case of a catastrophic problem during launch. The Orion contractor had trouble finding a subcontractor who could design and build a working attitude control motor that steers the system during an abort.
According to agency officials, previous attitude control motors have had 700 pounds of thrust, while the requirement for this attitude control motor is 7,000 pounds of thrust. Developing a steerable attitude control motor with high levels of thrust and long burn durations is proving to be a difficult technical challenge. A year after the initial contract was awarded, the first subcontractor did not have a viable design and had to be replaced. The current subcontractor, however, is making progress. For example, although the valves used by the complex steering system failed during high-thrust testing in April 2008, redesigned valves have subsequently passed two high-thrust tests. Orion’s safety requirements are no more than one loss-of-crew event in 1,700 flights and one loss-of-mission event in every 250 flights for the ISS mission. According to Orion officials, these requirements are an order of magnitude more stringent than the Space Shuttle’s safety requirements, were arbitrarily set by ESAS, and may be unattainable. According to the Constellation program manager, NASA has added robustness to current systems as well as redundant systems to increase safety margins. However, these added redundancies and system robustness have added mass to the system. The technical challenges presented here do not capture all of the risks, technical or programmatic, that the Constellation program faces. As noted earlier, there are over 200 risks categorized as “high” for the Ares I/Orion programs, meaning that, if not successfully mitigated, these risks (1) are nearly certain to occur, are highly likely to occur, or may occur, and (2) will have major effects on system cost, schedule, performance, or safety.
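The high/medium/low categorization described above pairs a likelihood rating with a consequence rating. A sketch of such a risk-matrix classifier follows; the numeric scales and cutoffs are assumptions for illustration, not NASA's actual IRMA scheme:

```python
# Illustrative risk-matrix classifier in the spirit of the IRMA categories
# described above. The scales and cutoffs are assumed, not NASA's actual scheme.

LIKELIHOOD = {"nearly certain": 5, "highly likely": 4, "may occur": 3,
              "unlikely": 2, "highly unlikely": 1}
CONSEQUENCE = {"major": 5, "moderate": 3, "minor": 1}

def categorize(likelihood, consequence):
    l = LIKELIHOOD[likelihood]
    c = CONSEQUENCE[consequence]
    # High: at least "may occur" combined with a major effect on cost,
    # schedule, performance, or safety (per the report's description).
    if l >= 3 and c == 5:
        return "high"
    if l >= 3 or c >= 3:
        return "medium"
    return "low"

print(categorize("highly likely", "major"))
print(categorize("unlikely", "moderate"))
```

Under this assumed scheme, a risk is "high" only when both its likelihood and its consequence clear the thresholds, which matches the report's two-part definition of high risks.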
These risks range in nature from highly complex technical risks, such as those noted above, to straightforward programmatic risks related to areas such as transitioning support work from the Marshall Space Flight Center to the Michoud Assembly Facility for long-term vehicle production, compressing the software development cycle for the Orion vehicle, and creating a test program for Orion’s communication and tracking system. The Constellation program’s poorly phased funding plan has affected the program’s ability to deal with technical challenges. In our October 2007 report, we noted that NASA initiated the Constellation program recognizing that the agency’s total budget authority would be insufficient to fund all necessary activities in fiscal years 2009 and 2010. NASA’s funding strategy relied on the accumulation of a large rolling budget reserve in fiscal years 2006 and 2007 to fund Constellation activities in fiscal years 2008, 2009, and 2010. Thereafter, NASA anticipated that the retirement of the space shuttle program in 2010 would free funding for the Constellation program. We also noted in that report that NASA’s approach to funding was risky and that the approved budget profile at the time was insufficient to meet Constellation’s estimated needs. The Constellation program’s integrated risk management system also identified this strategy as high risk and warned that funding shortfalls could occur in fiscal years 2009 through 2012, resulting in planned work not being completed to support schedules and milestones. According to project officials, these shortfalls limited NASA’s ability to mitigate technical risks early in development and precluded the orderly ramp-up of workforce and developmental activities. According to the Constellation program manager, these funding shortfalls are reducing his flexibility to resolve technical challenges.
The Constellation program tracks unfunded risk mitigation—engineering work identified as potentially needed but not currently funded—as cost threats in IRMA. The Constellation IRMA system currently tracks 192 cost threats for the Ares I and Orion projects totaling about $2.4 billion through fiscal year 2015. Of this $2.4 billion, NASA classifies 35 threats valued at about $730 million as likely to be needed, 54 threats valued at about $670 million as may or may not be needed, and 103 threats valued at about $1 billion as not likely to be needed. Our analysis indicates these cost threats may be understated. For example, of the 157 threats classified as may or may not be needed or not likely to be needed, IRMA likelihood scores indicate that 69 cost threats worth about $789 million are either highly likely or nearly certain to occur. Examples of cost threats include $4.7 million to develop and mature Orion’s data network technology and $12.5 million for an Upper Stage and First Stage separation test. The cost of the Constellation program’s developmental contracts has increased as NASA added new effort to resolve technical and design challenges. Constellation program officials and contractor cost reports indicate that the new effort has increased the value of the Constellation program’s developmental contracts from $7.2 billion in 2007 to $10.2 billion in June 2009. Some of these modifications remained undefinitized for extended periods as NASA worked through design issues and matured program requirements in response to technical challenges. Undefinitized contract actions authorize contractors to begin work before reaching a final agreement on contract terms. By allowing undefinitized contract actions to continue for extended periods, NASA loses its ability to monitor contractor performance because the cost reports are not useful for evaluating the contractor’s performance or for projecting the remaining cost of the work under contract.
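The cross-check described above, totaling cost threats by need classification and flagging those whose likelihood scores contradict that classification, is a straightforward aggregation. A sketch using hypothetical records; the two named threats come from the report's examples, but the field names, likelihood labels, and the third record are illustrative, not IRMA's actual schema:

```python
# Aggregate cost threats by classification and flag those whose likelihood
# score says they are highly likely or nearly certain to occur.
# Records are illustrative; IRMA's real schema and data are not reproduced here.

threats = [
    {"name": "Orion data network technology", "cost_m": 4.7,
     "classification": "likely", "likelihood": "highly likely"},
    {"name": "Upper Stage and First Stage separation test", "cost_m": 12.5,
     "classification": "may or may not", "likelihood": "nearly certain"},
    {"name": "Spare avionics set (hypothetical)", "cost_m": 3.0,
     "classification": "not likely", "likelihood": "may occur"},
]

def totals_by_classification(threats):
    totals = {}
    for t in threats:
        totals[t["classification"]] = totals.get(t["classification"], 0.0) + t["cost_m"]
    return totals

def understated_threats(threats):
    # Threats classified below "likely to be needed" but scored highly likely
    # or nearly certain suggest the overall cost threat is understated.
    high = {"highly likely", "nearly certain"}
    return [t for t in threats
            if t["classification"] != "likely" and t["likelihood"] in high]

print(totals_by_classification(threats))
print(sum(t["cost_m"] for t in understated_threats(threats)))
```

Run over the real IRMA records, this kind of tally is what surfaces the mismatch GAO found: 69 threats worth about $789 million scored as highly likely or nearly certain despite their lower need classifications.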
With a current, valid baseline, the reports would indicate when cost or schedule thresholds had been exceeded, and NASA could then require the contractor to explain the reasons for the variances and to identify and take appropriate corrective actions. Yet NASA allowed high-value modifications to the Constellation contracts to remain undefinitized for extended periods, in one instance for more than 13 months. In August 2008, when faced with cost increases and funding shortfalls, the Constellation program responded by reducing program reserves and deferring development effort and test activities. These changes resulted in a minimized flight test program that was so success-oriented that there was no room for test failures. During the course of our review, NASA test officials expressed multiple concerns about the test approach the program was then pursuing. NASA test officials also expressed concerns about the sufficiency of planned integrated system flight testing. NASA was planning only one integrated system flight test prior to the first crewed flight. Officials stated that while NASA would have been able to address each of the programs’ specific test objectives during the planned flight tests, additional integrated system flight tests could have provided the agency increased confidence that the system performed as planned and allowed the agency the opportunity to design and implement solutions to performance problems without affecting the first crewed flight. According to agency officials, any problems encountered during integrated system flight testing could lead to significant delays in the first crewed flight. Test officials were also concerned that the highly concurrent test schedule had significant overlap between component qualification and fabrication of flight hardware. This concurrency could have resulted in schedule slips and increased costs if any component failed qualification tests.
Our past work indicates that it is unlikely that the program will complete its test program without encountering developmental problems or test failures. The discovery of problems in complex products is a normal part of any development process, and testing is perhaps the most effective tool for discovering such problems. According to the Constellation program manager, the test plan strategy for the Constellation program is currently evolving as the program reshapes its acquisition strategy to defer all work on lunar content beyond the March 2015 first crewed flight. The test strategy is likely to continue to evolve until the Constellation program’s Systems Integration Plan is finalized when the project enters the implementation phase. In response to technical challenges and cost and funding issues, NASA is changing the Orion project acquisition strategy. In December 2008, NASA determined that the current Constellation program was high risk and unachievable within the current budget and schedule. To increase its level of confidence in the Constellation program baseline, NASA delayed the first crewed flight from September 2014 to March 2015 and, according to officials, adopted a two-phased approach to developing the Orion vehicle. NASA’s original strategy for the Orion project was to develop one vehicle capable of supporting both ISS and lunar missions. According to the Constellation program manager, the Constellation program is currently deferring work on Orion lunar content beyond 2015 to focus its efforts on developing a vehicle that can fly the ISS mission. This phased approach, however, could require two qualification programs for the Orion vehicle: one pre-2015 qualification program for ISS mission requirements and a second post-2015 qualification program for lunar mission requirements. According to the program manager, the knowledge gained from flying the initial Orion to the ISS will inform the design of the lunar vehicle.
The Constellation program manager also told us that NASA is unwilling to further trade schedule in order to reduce risk. He asserted that delaying the schedule is an inefficient means of mitigating risk because of the high costs of maintaining fixed assets and contractor staff. Though these changes to overarching requirements are likely to increase the confidence level associated with the March 2015 first crewed flight, they do not guarantee that the program will conduct a successful first crewed flight in March 2015. For example, in May 2009 the program announced its plan to reduce the number of crew for the ISS mission from six to four. According to project officials, NASA does not plan to finalize the preliminary design of the four-crew ISS configuration until after the Orion preliminary design review. Revising the ISS design for four crew and optimizing the area freed up by removing two crew for the ISS mission will entail additional effort on the part of the Orion design team. Furthermore, as noted above, both the Ares I and Orion projects continue to face technical and design challenges that will require significant time, money, and effort to resolve irrespective of the decision to defer lunar requirements. While deferring the lunar requirement is likely to relieve pressure on Orion’s mass margins allowing increased flexibility to deal with some Orion-specific technical challenges, the lunar requirement has little bearing on many of the Ares I technical challenges discussed above. Furthermore, it is unclear how deferring the lunar requirement will affect the technical challenges faced in the development of the Orion launch abort system and in dealing with vibroacoustics. NASA’s human spaceflight program is at a crossroads. 
Efforts to establish a sound business case for Constellation’s Ares I and Orion projects are complicated by (1) an aggressive schedule, (2) significant technical and design challenges, (3) funding issues and cost increases, and (4) an evolving acquisition strategy that continues to change Orion project requirements. Human spaceflight development programs are complex and difficult by nature and NASA’s previous attempts to build new transportation systems have failed in part because they were focused on advancing technologies and designs without resources—primarily time and money—to adequately support those efforts. While the current program, Constellation, was originally structured to rely on heritage systems and thus avoid problems seen in previous programs, the failure to establish a sound business case has placed the program in a poor risk posture to proceed into implementation as planned in 2010. In the past, NASA has recognized these shortfalls and has delayed design reviews for both the Ares I and Orion vehicles in an effort to gain the knowledge needed for a sound business case. NASA’s current approach, however, is based on changing requirements to increase confidence in meeting the schedule. Nevertheless, the need to establish a sound business case, wherein resources match requirements and a knowledge-based acquisition strategy drives development efforts, is paramount to any successful program outcome. Until the Constellation program has a sound business case in hand, it remains doubtful that NASA will be able to reliably estimate cost and schedule to complete the program. Meanwhile, the new Administration is conducting an independent review of NASA’s human spaceflight activities, with the potential for recommendations of broad changes to the agency’s approach toward future efforts. 
While the fact that the review is taking place does not guarantee wholesale changes to the current approach, it does implicitly recognize the challenges facing the Constellation program. We believe this review is appropriate, as it presents an opportunity to reassess both requirements and resources for Constellation as well as alternative ways of meeting requirements. Regardless of NASA’s final plans for moving forward, the agency faces daunting challenges in developing human-rated spacecraft for use after the Space Shuttle is retired, and it is important that the agency lay out an acquisition strategy grounded in knowledge-based principles that is executable with acceptable levels of risk within the program’s available budget. As NASA addresses the findings and recommendations of the Review of U.S. Human Space Flight Plans Committee, we recommend that the new NASA Administrator direct the Constellation program, or its successor, to develop a sound business case—supported by firm requirements, mature technologies, a preliminary design, a realistic cost estimate, and sufficient funding and time—before proceeding into implementation and, if necessary, delay the preliminary design review until a sound business case demonstrating the program’s readiness to move forward into implementation is in hand. In written comments on a draft of this report (see app. II), NASA concurred with our recommendation. NASA acknowledged that, while substantial work has been completed, the Constellation program faces knowledge gaps concerning requirements, technologies, funding, schedule, and other resources. NASA stated that it is working to close these gaps before committing to significant, long-term investments in the Constellation program.
NASA stated that the Constellation program manager is required to demonstrate at the preliminary design review that the program and its projects meet all system requirements with acceptable risk and within cost and schedule constraints, and that the program has established a sound business case for proceeding into the implementation phase. At this point, the NASA Agency Program Management Council will review the Constellation program and determine the program’s readiness to proceed into the implementation phase and begin detailed design. Separately, NASA provided technical comments, which have been addressed in the report, as appropriate. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to NASA’s Administrator and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess NASA’s progress toward establishing a sound business case for the Ares I and Orion projects and identify key technical challenges NASA faces in developing the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle, we obtained and analyzed Constellation plans and schedules, risk mitigation information, and contract performance data relative to the standards in our knowledge-based acquisition practices, including program and project plans, contracts, schedules, risk assessments, funding profiles, budget documentation, earned value reports, and the results of NASA’s assessments of the program.
We interviewed and received briefings from officials associated with the Constellation program office, including Exploration Systems Mission Directorate officials at NASA headquarters in Washington, D.C.; Orion project and Constellation program officials at the Johnson Space Center in Houston, Texas; and Ares I and J-2X officials at the Marshall Space Flight Center in Huntsville, Alabama, regarding the program's and projects' risk areas and test strategy, technical challenges, the status of requirements, acquisition strategy, and the status of awarded contracts. We also conducted interviews and received briefings from NASA contractors on heritage hardware and design changes, top risks, and testing strategy for the J-2X engine, Ares I First Stage, Ares I Upper Stage, Launch Abort System, and Orion vehicle. We analyzed risks documented through the Constellation program's Integrated Risk Management Application and followed up with project officials for clarification and updates to these risks. We also attended the Constellation program's Quarterly Risk Review at the Johnson Space Center. In addition, we interviewed Constellation program officials from the Johnson Space Center about program risks, requirements, and the impact of budget reductions. We also spoke with NASA headquarters officials from the Exploration Systems Mission Directorate's Resources Management Office in Washington, D.C., to gain insight into the program's top risks and the basis for fiscal year 2006 through fiscal year 2010 budget requests, as well as the funding strategy employed by the Constellation program. Furthermore, we reviewed NASA's program and project management directives and systems engineering directives. Our review and analysis of these documents focused on requirements and goals set for spaceflight systems. We compared examples of the centers' implementation of the directives and specific criteria included in these directives with our best practices work on system acquisition.
We conducted this performance audit from December 2008 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Jim Morrison, Assistant Director; Jessica M. Berkholtz; Greg Campbell; Jennifer K. Echard; Nathaniel J. Taylor; John S. Warren Jr.; and Alyssa Weir made key contributions to this report.

NASA: Assessments of Selected Large-Scale Projects. GAO-09-306SP. Washington, D.C.: March 2, 2009.
NASA: Agency Faces Challenges Defining Scope and Costs of Space Shuttle Transition and Retirement. GAO-08-1096. Washington, D.C.: September 30, 2008.
NASA: Ares I and Orion Project Risks and Key Indicators to Measure Progress. GAO-08-186T. Washington, D.C.: April 3, 2008.
NASA: Agency Has Taken Steps Toward Making Sound Investment Decisions for Ares I but Still Faces Challenging Knowledge Gaps. GAO-08-51. Washington, D.C.: October 31, 2007.
NASA: Issues Surrounding the Transition from the Space Shuttle to the Next Generation of Human Space Flight Systems. GAO-07-595T. Washington, D.C.: March 28, 2007.
NASA: Long-Term Commitment to and Investment in Space Exploration Program Requires More Knowledge. GAO-06-817R. Washington, D.C.: July 17, 2006.
NASA: Implementing a Knowledge-Based Acquisition Framework Could Lead to Better Investment Decisions and Project Outcomes. GAO-06-218. Washington, D.C.: December 21, 2005.
Defense Space Activities: Continuation of Evolved Expendable Launch Vehicle Program's Progress to Date Subject to Some Uncertainty. GAO-04-778R. Washington, D.C.: June 24, 2004.
Best Practices: Using a Knowledge-Based Approach to Improve Weapon Acquisition. GAO-04-386SP.
Washington, D.C.: January 2004.

NASA's Constellation program is developing the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle as the agency's first major efforts in a plan to return to the moon and eventually send humans to Mars. GAO has issued a number of reports and testimonies on various aspects of this program and made several recommendations. GAO was asked to assess NASA's progress in implementing GAO's recommendations for the Ares I and Orion projects and to identify risks the program faces. GAO analyzed NASA plans and schedules, risk mitigation information, and contract performance data relative to knowledge-based acquisition practices identified in prior GAO reports, and interviewed government officials and contractors.

NASA is still struggling to develop a solid business case—including firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding and time—needed to justify moving the Constellation program forward into the implementation phase. Gaps in the business case include significant technical and design challenges for the Orion and Ares I vehicles (such as limiting vibration during launch, eliminating the risk of hitting the launch tower during liftoff, and reducing the mass of the Orion vehicle) that represent considerable hurdles to meeting safety and performance requirements, and a poorly phased funding plan that runs the risk of funding shortfalls in fiscal years 2009 through 2012, resulting in planned work not being completed to support schedules and milestones. This approach has limited NASA's ability to mitigate technical risks early in development and precludes the orderly ramp-up of workforce and developmental activities. In response to these gaps, NASA delayed the date of its first crewed flight and changed its acquisition strategy for the Orion project.
NASA acknowledges that funding shortfalls reduce the agency's flexibility in resolving technical challenges. The program's risk management system warned of planned work not being completed to support schedules and milestones. Consequently, NASA is now focused on providing the capability to service the International Space Station and has deferred the capabilities needed for flights to the moon. Though these changes to the overarching requirements are likely to increase the confidence level associated with a March 2015 first crewed flight, these actions do not guarantee that the program will successfully meet that deadline. Nevertheless, NASA estimates that Ares I and Orion represent up to $49 billion of the over $97 billion estimated to be spent on the Constellation program through 2020. While the agency has already obligated more than $10 billion in contracts, at this point NASA does not know how much Ares I and Orion will ultimately cost, and will not know until technical and design challenges have been addressed.
Aircraft noise standards establish the noise limits that civil subsonic jet aircraft are permitted to generate for takeoff, landing, and sideline measurements. These standards are based on an aircraft’s weight and number of engines. In general, they allow heavier aircraft and those with more engines to generate more noise than lighter aircraft and those with fewer engines. The noise generated by an aircraft generally correlates to the thrust powering the aircraft. The heavier the aircraft, the more thrust it needs. In the United States, the Federal Aviation Act of 1958, as amended in 1968, gives FAA the authority to regulate aircraft noise. (See app. I for a description of the development and implementation of U.S. aircraft noise standards.) Under that act, FAA issued regulations in 1969 that established noise standards for new designs of civil subsonic jet aircraft. In 1973, FAA amended its regulations to apply the noise standards to all newly manufactured aircraft, no matter when the aircraft were designed. In 1977, additional amendments established lower noise standards for all new aircraft, as well as the concept of “noise Stages.” Aircraft meeting the original 1969 standards were categorized as “Stage 2” aircraft; those meeting the more stringent 1977 standards, the current standards, were categorized as “Stage 3” aircraft; and aircraft meeting neither standard were categorized as “Stage 1” aircraft. In 1976, FAA prohibited all Stage 1 subsonic jet aircraft weighing more than 75,000 pounds from flying into or out of U.S. airports after January 1, 1985, unless the aircraft had been converted to meet the quieter noise standards. In 1990, ANCA required all existing civil subsonic jet aircraft weighing more than 75,000 pounds to comply with the current U.S. Stage 3 noise standards by December 31, 1999, or be retired from service. To meet this requirement, the engines on Stage 2 aircraft could be modified or replaced. 
The Stage 3 standards for takeoff, landing, and sideline measurements range from 89 to 106 decibels, depending on the aircraft's weight and number of engines. FAA regulations governing the transition to meet Stage 3 noise standards went into effect on September 25, 1991, and offered two options for meeting the December 31, 1999, deadline. One option permitted a phased reduction in Stage 2 aircraft (phaseout), while the other called for a phased increase in the proportion of Stage 3 aircraft in the total fleet (phase-in). According to FAA, these options would result in significant cost savings for the industry while still preserving environmental gains. Although the greatest environmental gains would occur near the end of the phaseout period, FAA noted that both approaches offered steady progress throughout the decade toward an all-Stage-3 fleet. Phaseout of older, noisier Stage 1 and 2 aircraft was possible, in part, because the National Aeronautics and Space Administration, in cooperation with the aviation industry, developed new, quieter engines. The National Aeronautics and Space Administration, in cooperation with FAA and the aviation industry, is continuing to develop new technologies to reduce the impact of aircraft noise, although they have indicated that there are no significant breakthroughs in sight. FAA has several federal programs that address noise issues associated with civilian airports. Through one of these programs, FAA controls aircraft noise by regulating aircraft operations. FAA also administers two programs that fund noise mitigation projects. The Airport Improvement Program provides federal grants—funded by congressional appropriations from the Airport and Airway Trust Fund—for developing airport infrastructure, including projects that reduce airport-related noise or mitigate its effects. Grants are made using either funds subject to apportionment or discretionary funds.
Funds subject to apportionment are distributed by a statutory formula to commercial service airports according to the number of passengers served and the volume of cargo moved, and to the states according to a percentage of the total amount of the appropriated funds. Discretionary funds are, for the most part, those funds remaining after funds subject to apportionment are allotted and certain other amounts are “set aside” for special categories, including noise-related projects. The Passenger Facility Charge program is a voluntary program that enables airports to impose a fee of up to $4.50 on each boarding passenger. The airports retain the money for airport infrastructure projects. Airports wishing to participate in the program must seek FAA’s approval both to levy the fee and to use the revenues for particular development projects. Both programs include noise reduction projects such as soundproofing buildings (including homes and schools) and land acquisition, which includes acquiring homes and relocating the people displaced to quieter communities. ANCA required FAA to establish regulations on airport noise and access restrictions on the operations of Stage 2 and Stage 3 aircraft. Existing access restrictions were grandfathered, permitting them to remain in effect. To restrict the access of Stage 2 aircraft, an airport has to publish the proposed restrictions at least 180 days before they go into effect. The airport is also required to publish other information with the restrictions such as cost-benefit analyses of the proposed restrictions and any alternatives considered. If the restrictions are to apply to Stage 3 aircraft, they must be approved by FAA or agreed to by the airport and all the aircraft operators at an airport. 
To approve restrictions, FAA must find that the proposed restrictions (1) are reasonable, nonarbitrary, and nondiscriminatory; (2) do not create an undue burden on interstate or foreign commerce; (3) are not inconsistent with maintaining the safe and efficient utilization of the navigable airspace; (4) do not conflict with any existing federal statute or regulations; (5) have been adequately provided to the public for comment; and (6) do not create an undue burden on the national aviation system. The primary responsibility for integrating airport considerations into local land-use planning rests with local governments—presenting a difficult problem for many airports, because they often do not have control over development in surrounding communities. However, airports are held accountable by these communities when aircraft noise adversely affects uses such as schools and residences built close to airports. FAA sets the standards that airports use to measure the level of noise to which communities around airports are exposed over time and has issued guidelines that identify land uses that would and would not be compatible with the noise generated by a nearby airport's operations. ICAO is the international body charged with ensuring the safe and orderly growth of international civil aviation throughout the world. One of ICAO's functions is to set international noise standards for aircraft. The primary purpose of establishing noise standards is to reduce aircraft noise. This noise reduction, when combined with other noise reduction measures, can reduce the number of people exposed to significant levels of aircraft noise. Any new standard must receive the approval of two-thirds of the members of ICAO's Council, one of which is the United States, and the standard becomes effective unless it is then disapproved by a majority of ICAO's members through the Assembly. (See app. II for a description of the development of international aircraft noise standards.)
Member nations then implement the new standards through their own political and legal processes. International recognition of aircraft noise standards is a cornerstone of the international system of air travel, enabling airlines to plan and operate their fleets more efficiently than if there were a patchwork of national noise standards or operating restrictions. In January 2001, ICAO’s Committee on Aviation Environmental Protection (CAEP), a technical body that recommends international aircraft noise standards for the organization, endorsed a balanced approach to noise management that included such things as the reduction of noise from aircraft, improved land-use planning and control around airports, and the use of aircraft noise abatement procedures and aircraft operating restrictions. To reduce aircraft noise, CAEP recommended, and the Council adopted, new Chapter 4 noise standards that are 10 decibels lower, on a cumulative basis, than the Chapter 3 standards. The standards will apply to new designs submitted on or after January 1, 2006. On the basis of a cost-benefit analysis, CAEP recommended that there be no global phaseout of aircraft meeting Chapter 3 noise standards. CAEP considered the question of operating restrictions on Chapter 3 aircraft but reached no final conclusion. ICAO’s members are expected to make a final decision on these issues when the Assembly meets from September 25 to October 5, 2001. FAA is the official U.S. representative to CAEP. Representatives from the State Department, the Environmental Protection Agency, the U.S. aviation industry, and environmental groups also participate in CAEP’s work. The mandated transition to quieter aircraft was expected to reduce the number of people exposed to noise levels that FAA considers incompatible with residential living, to facilitate needed airport expansion, and to enable airlines to embark on long-term planning for investing in and operating their fleets. 
Expectations concerning how the airlines would comply with the mandated transition, and what that transition might cost the airlines, varied. The mandated transition to quieter aircraft was expected to reduce the overall levels of noise to which nearby communities were exposed, thereby reducing the annoyance caused by airport-generated noise and improving the quality of life for people living in those communities. Communities near airports are exposed to noise directly attributable to airport operations—primarily from aircraft taking off and landing. The impact of such noise on communities is usually analyzed in terms of the extent to which the noise annoys people by interfering with their normal activities, such as sleep, relaxation, speech, television, school, and business operations. According to FAA’s final rulemaking implementing the transition, the number of people living in areas exposed to noise levels that were incompatible with residential living was expected to fall from about 2.7 million in 1990 to about 400,000 in 2000, when the mandated transition to quieter aircraft was complete. Less noise from airport operations was expected to reduce community opposition to airport expansion. ANCA, in particular, acknowledged that aviation noise was linked to airport expansion and community opposition to that expansion. The findings of ANCA state that aviation noise management is crucial to the continued increase in airport capacity and that community noise concerns can be alleviated, in part, through the use of quieter aircraft and revenues for noise management. At the time the transition was mandated, aircraft noise was a major impediment to increasing airport capacity, particularly if the increase was to be provided by constructing new runways. New capacity was needed at the time because the demand for air travel was causing increasing delays—in 1988, 21 airports experienced more than 20,000 hours of delays. 
Airports were thus expected to benefit from the transition to quieter aircraft by being able to plan for growth and develop the capacity needed to meet the rising demand for air travel. The lower noise levels were also expected to reduce the airports’ need for federal investments in noise abatement programs. ANCA’s passage was also expected to provide a stable environment that would enable the airlines to develop long-term business plans for their fleets. Ongoing uncertainty about whether existing aircraft would be required to comply with Stage 3 noise standards and the promulgation of a plethora of airport access restrictions had been impeding the airlines’ development of long-term investment and operating plans. By 1990, many communities had established restrictions on the use of their airports— such as limits on the number of Stage 2 aircraft that could land—to reduce the amount of noise the airports were generating. Additionally, before ANCA’s passage, many airports were planning to adopt use restrictions in the absence of a federally mandated phaseout of Stage 2 aircraft. The airlines believed that a resulting “patchwork quilt” of restrictions would likely produce a de facto phaseout of Stage 2 aircraft by 2000. ANCA settled both of these issues in 1990 by mandating that heavier aircraft meet Stage 3 standards by December 31, 1999, and by establishing an FAA review process that airports had to follow if they wanted to adopt new noise or access restrictions. With these decisions made, the airlines expected to be able to develop long-term fleet plans that could include operating Stage-3-compliant aircraft for their useful lives. At the time of ANCA’s passage, there were varying assumptions as to how the airlines would comply with the transition. Some in the aviation community thought the airlines would comply with the transition largely by purchasing new aircraft rather than converting existing aircraft to meet Stage 3 noise standards. 
Conversion could be achieved by replacing an aircraft's engines or by installing a noise reduction technology known as a "hushkit." Because new aircraft were generally quieter than aircraft with hushkits, replacing aircraft was expected to provide a greater reduction in aircraft noise. Some anticipated that aircraft replacement would be the primary means for complying with Stage 3 standards because of high fuel prices at the time the law was passed; new Stage 3 aircraft were generally more fuel efficient than existing Stage 2 aircraft. Others in Congress and the aviation community, however, noted that hushkitting was as likely an expectation for compliance as aircraft replacement. At the time the transition was mandated, estimates of the airlines' cost to comply with the transition ranged from $17 million to $175 billion. The wide variation depended largely on whether an analysis assumed modification, full cost for replacement, and/or fleet growth. In 1991, we reported on the assumptions and methodologies of four major studies. Two of the studies were limited to a single segment of the aviation community—one to major passenger airlines and the other to freight aircraft. A third study used the purchase price of an aircraft as the cost of meeting Stage 3 noise standards—a cost we considered excessive. A fourth study, the one offered by FAA, was more comprehensive—including the domestic jet fleet for both passenger and cargo air traffic—and incorporated generally reasonable assumptions in its methodology. Using FAA's methodology as a base and making certain changes in the assumptions, such as the discount rate used to compute future expenditures and cost savings, we estimated at the time that complying with the Stage 3 noise standards would cost the airlines from $2.1 billion to $4.6 billion in 1990 dollars.
Our low estimate assumed all aircraft owners would adopt the least expensive approach to compliance for each aircraft, whereas our high estimate assumed premature replacement of all aircraft. The results anticipated from the transition to meet Stage 3 noise standards were partially realized. The transition to quieter aircraft worked smoothly and was achieved within the required time frame. Also, FAA estimates that the transition to aircraft compliant with Stage 3 noise standards considerably reduced the population exposed to levels of noise from airport operations that FAA considers incompatible with residential living. Nevertheless, community opposition remains the primary impediment to airport expansion, and concern about noise is the reason most frequently cited as the basis for such opposition. Despite the significant decrease in the population exposed to incompatible noise, the demand continues for federally authorized support for noise mitigation efforts that are provided through a federal grant program and a federally authorized passenger boarding fee. Furthermore, while the adoption of new airport noise and access restrictions has been limited since the law was passed, the airlines’ long-term plans for their fleets may nevertheless be jeopardized by challenges to the continued use of older Stage 3 aircraft that are noisier than those newly manufactured. We currently estimate that the airlines’ costs directly attributable to complying with the transition to quieter aircraft noise standards (i.e., the cost of hushkitting or the incremental cost of financing a new aircraft early, whichever was lower) ranged from $3.8 billion to $4.9 billion in 2000 dollars. According to FAA, expectations for the reduction in the number of people living in areas incompatible with airport-generated noise levels have essentially been met. 
FAA estimates that in 2000 there were about 440,000 people living in areas exposed to incompatible noise levels, only a slightly higher number than FAA originally estimated, and a considerable reduction from FAA's 1990 estimate of 2.7 million. FAA's current population exposure estimates are based on the use of what FAA and ICAO consider to be a substantially credible model that is used to project the number of people exposed to various airport-generated noise levels—the Model for Assessing Global Exposure to the Noise of Transport Aircraft (MAGENTA). We discussed the MAGENTA model with FAA to assure ourselves that the model's estimate of the number of people living in areas exposed to incompatible noise levels was reliable. According to an FAA official, the model was extensively reviewed and vetted through ICAO's MAGENTA Working Group. This official also said that it is the only model available to do this type of estimate. The development and testing phases of the model were completed last year and used by ICAO's environmental technical experts to evaluate various noise issues. FAA's estimates using MAGENTA were based on the best available data, which FAA is currently updating. FAA recently updated the U.S. version of MAGENTA with new airport operational data and 2000 census data. The net effect of this update is a new estimate of 440,000 people exposed in 2000 instead of 448,000 as estimated earlier. FAA is also updating two other data inputs to further improve the accuracy of the estimate. These data inputs are the type of aircraft using each airport and new runways or runway extensions added since the mid-1990s. While the updated data may produce some changes in the estimated number of persons exposed to incompatible levels of noise, FAA and others believe these changes are not likely to be significant. Although it is unclear whether community annoyance declined with lower noise levels, opposition to airport expansion continues.
In our 1999-2000 survey of the 50 busiest commercial passenger airports, noise issues were identified as the primary environmental concern and challenge for airports. We found that although airports had implemented various measures to reduce the impact of aircraft noise, community concerns persisted. While the extent to which areas around airports have been built up since the transition to an all-Stage-3 fleet is not known, strong pressure exists to develop residential areas around heavily used airports, particularly in metropolitan areas with more than 50,000 people. Our 1999-2000 survey found that officials from 13 of the nation’s 50 busiest commercial service airports view increases in the residential population near their airport as a major concern. Thirty-five of the airports reported that over half of the noise complaints in the preceding year had come from persons living in areas whose noise levels FAA considers compatible with residential development. According to an October 2000 report by the Airports Council International-North America, an association representing airports, noise remains the single biggest impediment to increasing airport capacity across the country. More recently, FAA found that public opposition to airport expansion continues to rise, with noise cited as the primary reason. The reduction in the population exposed to incompatible noise levels, as defined by FAA, has also not led to a decrease in the demand for federally authorized funding for noise projects. As figure 1 shows, the demand for funds for noise abatement continued throughout the decade, albeit at varying levels from year to year. ANCA did limit the implementation of new airport noise and access restrictions. According to FAA, since ANCA’s passage in 1990, no formal proposals for new Stage 3 restrictions have been completed under ANCA’s implementing regulations. 
FAA has been asked to review draft analyses of proposed restrictions at (1) Pease Airport in New Hampshire to restrict the nighttime scheduling of Stage 3 aircraft, (2) Burbank Airport in California to implement a nighttime curfew affecting all aircraft operating at the airport, and (3) Kahului Airport in Hawaii to phase out Stage 2 aircraft. FAA is currently reviewing a proposed restriction by the Naples Municipal Airport in Florida to ban Stage 2 aircraft that weigh less than 75,000 pounds. In addition, two new proposed restrictions on Stage 2 aircraft were withdrawn. The airlines have met the deadline for completing their transition to meet Stage 3 noise standards. According to the draft 1999 Progress Report on the Transition to Quieter Airplanes, FAA is satisfied that all known affected aircraft operators are in compliance with the December 31, 1999, statutory requirements. By the end of 1999, the 221 active operators' fleets included only Stage-3-compliant aircraft. Despite full compliance with the transition to Stage 3, the airlines' long-term fleet plans may now be in jeopardy. Some in the aviation community have called for the retirement of aircraft that are within 5 decibels of Stage 3 standards, many of which are hushkitted, even though the aircraft meet Stage 3 standards. The Airports Council International-North America reports that the noise levels produced by hushkitted aircraft meet the Stage 3 standard or are 1 to 5 decibels quieter than it, while newly manufactured Stage 3 aircraft are as much as 10 to more than 20 decibels quieter than the standard. As a result, Airports Council officials noted that while noise levels declined following the transition, they did not decline as much as they would have if aircraft had been replaced rather than converted. Noise is still a problem in part because of these older, noisier aircraft.
Therefore, the Airports Council and some individual airports are recommending retiring aircraft within 5 decibels of Stage 3 limits. Airline representatives have noted that there have been an increasing number of requests for “voluntary” phaseouts of hushkitted aircraft at individual airports, along with operating procedures or runway use restrictions that target hushkitted aircraft. According to these representatives, this is a major concern for commercial passenger airlines because they developed their Stage 3 compliance strategies and long-term fleet plans with the expectation that those aircraft would be available for their useful lives; therefore, any premature retirement of hushkitted aircraft would have a further economic impact on the industry. Additionally, a cargo industry representative noted that cargo airlines are currently dependent on older aircraft that are hushkitted to stay in business. More recently, two estimates of the cost of complying with the mandated transition to Stage 3 noise standards have been completed. In 1999, the Air Transport Association, an association representing major U.S. commercial airlines, commissioned an analysis of the airlines’ costs for complying with the mandated transition. That analysis concluded that these costs were about $32 billion in 1999 dollars, not including the cost of fleet growth. Another estimate by a major aircraft engine manufacturer placed the costs to airlines at about $15.5 billion. Both of these cost analyses, however, included the full cost of aircraft purchased to replace older Stage 2 aircraft. We believe that including the full replacement cost of an aircraft exceeds the cost directly attributable to compliance with the mandated transition. An airline representative noted that some carriers chose to incur the additional cost of replacing their aircraft, in part, to respond to their customers’ environmental concerns. 
On the basis of a model we developed, we estimated that the airlines' costs directly attributable to the mandated transition ranged from a low of $3.8 billion to a high of $4.9 billion in 2000 dollars. We determined that the appropriate cost that could be attributed to compliance with the noise standards was the lower of the cost of converting an aircraft (that is, hushkitting its engines) or the incremental capital cost of financing the early replacement of an aircraft. This estimate is based on 2,372 Stage 2 aircraft over 75,000 pounds in the U.S. fleet on November 5, 1990, the date ANCA became law. We applied the actual hushkit cost, or range of costs, for a particular model of aircraft and the cost of installing the hushkit. (See app. IV for a more detailed discussion of our cost methodology.) We adopted this approach as the way to reflect only the cost of compliance, although many carriers opted to exceed FAA's requirement and incurred significant additional costs in so doing. Since hushkitting was expected and proved to be available for almost all types of aircraft, when the airlines chose more costly methods to achieve compliance—such as replacing the engines or purchasing new aircraft—we attributed that choice to other economic reasons or benefits, such as improved fuel efficiency, lower maintenance costs, and tax advantages. The transition to quieter aircraft worked smoothly, was achieved within the required time frame, and was successful in reducing the number of residents living in areas FAA considered incompatible for residential use. However, concerns about aircraft noise continue to be a constraint on future airport expansion. Also, FAA and other officials are concerned that as flights increase to meet the expected growth in travel, the population exposed to incompatible noise levels may rise again around some airports. Thus, some of the gains obtained by the transition to quieter aircraft may be eliminated.
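The attribution rule described above—taking, for each aircraft, the lower of the hushkit cost (kit plus installation) or the incremental capital cost of financing early replacement, and summing across the fleet—can be sketched as a simple calculation. This is only an illustrative sketch of that minimum-cost logic: the aircraft models and dollar figures below are hypothetical placeholders, not the actual inputs behind the $3.8 billion to $4.9 billion estimate.

```python
# Sketch of a per-aircraft minimum-cost attribution: the cost charged to the
# noise mandate is the LOWER of (hushkit cost + installation cost) and the
# incremental capital cost of financing a replacement aircraft early.
# All figures are hypothetical placeholders (millions of dollars).

def attributable_cost(hushkit_cost, installation_cost, incremental_finance_cost):
    """Return the compliance cost attributed to a single aircraft."""
    return min(hushkit_cost + installation_cost, incremental_finance_cost)

# Hypothetical fleet of three aircraft models.
fleet = [
    {"model": "A", "hushkit": 2.0, "install": 0.3, "early_replace": 3.1},
    {"model": "B", "hushkit": 2.8, "install": 0.4, "early_replace": 2.5},
    {"model": "C", "hushkit": 1.6, "install": 0.2, "early_replace": 4.0},
]

# Sum the lower-cost option over the fleet; model B's early-replacement
# financing cost is lower than its hushkit cost, so that option is charged.
total = sum(
    attributable_cost(a["hushkit"], a["install"], a["early_replace"])
    for a in fleet
)
print(f"Total attributable cost: ${total:.1f} million")
```

In the actual analysis this calculation would run over the 2,372 Stage 2 aircraft in the 1990 U.S. fleet, using the real hushkit and financing costs for each aircraft model.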
Our review of the results of the transition, however, especially compared with the expectations, raises two key issues: (1) Why does concern about noise continue to generate substantial opposition to airport operations and expansion after such a major decline in the number of people living in areas exposed to incompatible levels of noise? and (2) As noise levels decrease, how can local governments be encouraged to take responsibility for minimizing the exposure of residents to noise by preventing new residential development from encroaching on airports, when such areas may later become incompatible as airport operations and noise increase? Table 1 discusses these issues and identifies some specific questions for the aviation community to explore in addressing them. We provided the Department of Transportation, the Environmental Protection Agency, the National Aeronautics and Space Administration, the Airports Council International-North America, the Air Transport Association of America, and the American Association of Airport Executives with copies of a draft of this report for their review and comment. The Environmental Protection Agency, the National Aeronautics and Space Administration, and the American Association of Airport Executives provided no comments. We received oral comments from the Department of Transportation, specifically from FAA’s Office of Environment and Energy. These officials generally agreed with the facts in the report. They provided updated information on their MAGENTA model, which was used to estimate the number of people exposed to noise levels that FAA considers incompatible with airport operations. In the draft report, we noted that some of the data used in the model were not the most current and that FAA’s estimates of the number of people exposed to incompatible noise levels may be affected by this limitation. FAA officials provided us with updated information on the population exposed to incompatible noise levels.
They noted that two of four key data inputs to the model have been updated and that FAA is updating the other two. FAA agreed that the updated data would improve the accuracy of its estimates. We revised the report to reflect this information. FAA officials also provided us with technical comments, which we incorporated as appropriate. The Airports Council International-North America provided oral comments. The Senior Vice President of Technical and Environmental Affairs complimented our staff on expertly capturing the complex issues raised by the subject. He clarified the Airports Council’s position with respect to phasing out older, noisier aircraft. The draft report stated that the organization had recommended phasing out the operation of aircraft whose engines were technologically converted, or hushkitted, to comply with current standards. The Airports Council has called for retiring aircraft that are within 5 decibels of the Stage 3 standard rather than retiring an aircraft based on a design feature such as a hushkit. On a related note, the Airports Council believes that part of the reason for the continuing concern about noise is that these older aircraft are generally noisier and a significant number of them are still in operation. We revised the report to clarify its position on this subject. The Airports Council provided other technical comments, which we incorporated as appropriate. The Air Transport Association also provided oral comments. The Assistant General Counsel noted that we did a good job of capturing the important factors that went into the transition from Stage 2 to Stage 3. However, the Association cautioned that not all of the results would be directly applicable as the industry transitions from Stage 3 to Stage 4. In particular, the Association noted that the technological and economic circumstances are much different now than they were back in 1990, when the Congress mandated the transition to Stage 3.
Although the report states that our objective is to provide a retrospective analysis of the transition to Stage 3, we agree that the transition to Stage 4 needs to be viewed apart from the transition to Stage 3. We revised the report to clarify this point. The Association also made other technical comments, which we incorporated as appropriate. We conducted our review from January 2001 through August 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 14 days after the date of this letter. At that time, we will send copies of the report to the appropriate congressional committees; the Secretary of Transportation; the Administrator, FAA; the Administrator, Environmental Protection Agency; and the Administrator, National Aeronautics and Space Administration. We will also make copies available to other interested parties upon request. Please call me at (202) 512-2834 if you have any questions about this report. Key contributors to this report are listed in appendix V. In the United States, responsibility for aircraft noise standards resides with the Federal Aviation Administration (FAA). The Federal Aviation Act of 1958, as amended in 1968, gave FAA the authority to regulate aircraft noise through the aircraft type certification process. FAA can implement new aircraft noise standards through the standard federal rulemaking process. Under that process, FAA must consult with the Environmental Protection Agency, but the final decision lies with FAA. However, the Environmental Protection Agency may also initiate new aircraft standards by submitting proposed regulations to FAA, which FAA is then required to consider through the federal rulemaking process.
As part of the federal rulemaking process, FAA must consider whether a proposed standard is economically reasonable, technologically practicable, and consistent with the highest degree of safety in air transportation or commerce. Under the Federal Aviation Act, as amended, FAA issued regulations in 1969 that established noise standards for new designs of civil subsonic jet aircraft. Initially, these regulations prescribed noise standards that applied only to new types or designs of aircraft. In 1973, FAA amended its regulations to apply the noise standards to all newly manufactured aircraft, whether or not the aircraft design was new. In 1976, FAA prohibited any subsonic jet aircraft weighing over 75,000 pounds from flying into or out of U.S. airports after January 1, 1985, unless their engines had been modified or replaced to meet the new standards. In 1977, additional amendments to the regulations established more stringent noise standards for all new aircraft, as well as the concept of noise “Stages.” Aircraft meeting the original 1969 standards were categorized as “Stage 2” aircraft; those meeting the more stringent 1977 standards were categorized as “Stage 3” aircraft; and aircraft meeting neither set of standards were categorized as “Stage 1” aircraft. Under the Airport Noise and Capacity Act (ANCA) of 1990, civil subsonic jet aircraft weighing more than 75,000 pounds that did not meet the Stage 3 standards were required to comply with these standards by December 31, 1999, or be retired from service in the United States. Regulations implementing the transition went into effect on September 25, 1991. The regulations provided two options for the transition, which are described in table 2. FAA stated that this combination of methods would result in significant cost savings for the industry while still preserving environmental gains.
Since the greatest environmental gains would occur near the end of the phase-out period, according to FAA, there was no ultimate difference in the two approaches. FAA expected both approaches to achieve steady progress toward an all-Stage-3 fleet throughout the decade. Each domestic and foreign aircraft operator of large civil subsonic jet aircraft in the United States was required to submit an annual report on its progress toward compliance with the phased elimination of Stage 2 aircraft weighing over 75,000 pounds. Each report was required to contain information on the operator’s fleet composition. Domestic carriers were required to provide initial compliance plans in 1992, followed by annual updates. Each airline had to provide FAA with the following information annually: (1) any Stage 2 aircraft added to its fleet; (2) any Stage 2 aircraft removed from U.S. operations and either transferred to another recipient or retired, destroyed, or put into storage; (3) any Stage 2 aircraft returned to or imported from a foreign source; (4) any Stage 2 aircraft modified to meet Stage 3 noise standards; (5) all Stage 3 aircraft meeting U.S. operations requirements; and (6) the date for achieving full compliance with Stage 3 noise standards. According to an FAA official, FAA monitored each aircraft operator’s progress toward meeting the statutory compliance date of December 31, 1999. The agency also monitored domestic operators’ progress in meeting their compliance plans through direct communications and provided for contact with foreign operators and foreign civil aviation officials to ensure that they were aware of and prepared to meet the statutory compliance deadline. FAA reviewed all annual reports to ensure accuracy and completeness and followed up by contacting operators when necessary. Compliance monitoring was an ongoing effort with the goal, according to an FAA official, of monitoring and reminding operators about the statutory compliance deadline. 
FAA is satisfied that all known affected operators are in compliance with the December 31, 1999, statutory requirements. The ANCA statute allowed a domestic carrier to apply for a limited waiver that would extend the date by which compliance was required. To be eligible for consideration, a petitioner was required to have a fleet mix of 85 percent Stage 3 aircraft by July 1, 1999, and show, among other criteria, that a waiver would be in the public interest. A petitioner was also required to show that a good faith effort had been made to comply. A plan, providing for compliance by December 31, 2003, was required. FAA received 10 petitions for waivers from the Stage 3 transition rule. No waivers were granted. One petitioner requested that it be allowed to operate Stage 2 airplanes after December 31, 1999; that petition was denied. The other nine petitioners requested permission to operate nonrevenue flights for purposes of Stage 3 modifications, storage, maintenance, and/or exportation. FAA notified these petitioners that it did not have the authority to authorize such operations under the provisions of the law. For a limited time, foreign carriers were also allowed to apply for a waiver from the final compliance deadline for transition to Stage 3 noise standards, but according to an FAA official, FAA received no requests for such waivers. In November 1999, the Congress amended ANCA to allow the operation of Stage 2 aircraft in nonrevenue service after December 31, 1999, under specific conditions. FAA chose to implement the provision by issuing special flight authorizations. An operator of a Stage 2 airplane that wanted to operate in the contiguous United States for any of the purposes listed in the revised statute had to apply in advance. Applications are due 30 days in advance of the planned flight and must provide the information necessary for FAA to determine that the planned flight is within the limits prescribed by law. 
Figures 2 through 4 show the Stage 3 aircraft noise standards and the increases in noise allowed as aircraft weight increases. As figure 2 illustrates, the noise standards for takeoff operations also vary with the number of engines. The International Civil Aviation Organization (ICAO) develops international noise standards to provide consistent aircraft noise standards across nations. ICAO, the international body charged with ensuring the safe and orderly growth of international civil aviation throughout the world, operates under the Convention on International Civil Aviation, ratified in 1947. Although not a regulatory body, ICAO promulgates standards and recommends practices for international civil aviation. According to the terms of the Convention, ICAO makes its decisions through an Assembly and a Council with various subordinate committees, commissions, and panels, including the Committee on Aviation Environmental Protection (CAEP), which conducts most of ICAO’s technical environmental work. CAEP does its technical work through various working groups relying on the participation and technical expertise of its member countries. The Assembly, composed of representatives from ICAO’s 187 member countries, is ICAO’s ultimate decisionmaking body. It meets at least once every 3 years to review ongoing work and set policy for the coming years. Each member country is entitled to one vote, and decisions of the Assembly are taken by a majority of the votes cast except when otherwise provided in the Convention. According to FAA, in practice, most Assembly decisions are made by consensus. The Council, composed of representatives from 33 countries, is elected by the Assembly for a 3-year term. 
The Assembly chooses the members of the Council with representation from three categories: (1) major air transport countries, (2) countries making the largest contribution to the provision of air navigation facilities for international civil air navigation, and (3) countries from major areas of the world not represented by members selected in the first two categories. Member nations are selected to represent only one of these three categories. The Council is the governing body that provides continuing direction to the organization’s activities, and it is responsible for adopting standards and recommended practices that govern all aspects of international civil aviation, ranging from safety and security to the noise and environmental aspects of aircraft operations. Proposed new aviation standards submitted to the Council require a two-thirds majority vote for adoption. After adoption, the standards are submitted to the member countries. The new standards become effective unless a majority of the member countries disapprove them through the Assembly. According to FAA, the Council also reaches most decisions through consensus. If a government organization within a member country, like FAA, certifies that an aircraft meets ICAO’s standards, then all ICAO member countries must recognize that certification as valid. An ICAO member that does not adopt ICAO’s standards must provide a written explanation to ICAO. If an ICAO member files such an explanation, other ICAO members are absolved from their obligation to recognize that country’s certification of aircraft—they do not have to allow such aircraft into their country. Furthermore, if a member country fails to file a written notification, it will be in default of its obligation, and will risk the exclusion of its aircraft from travel in other ICAO member countries and the loss of its voting power in the Assembly and Council.
The Council accomplishes its work through committees and commissions that provide technical expertise for the review of issues the Council considers. CAEP conducts most of ICAO’s environmental work, from reviewing aircraft noise issues and developing aircraft noise standards and recommended practices to recommending actions for the Council’s adoption. CAEP is responsible for striking a balance among conflicting objectives for aircraft technical specifications, since every change made to an aircraft or its engines can affect its safety performance, emissions performance, noise level, and fuel efficiency. CAEP is the only committee that reports directly to the Council, unlike other ICAO technical groups, which report through either the Air Navigation Commission or the Air Transport Committee. CAEP’s membership is established by the Council and specific members are nominated by member nations and international observer organizations. CAEP is currently composed of experts from 19 ICAO member countries and observers from 12 organizations representing all major sectors of the aviation industry, including airports, airlines, aircraft manufacturers, environmental organizations, and two countries (Norway and Greece). The U.S. government, industry, and environmental representatives participate in or observe CAEP. ICAO develops aviation standards, including aircraft noise standards, through the amendment of annexes to the Convention on International Civil Aviation. The main parts of each annex are the international standards and recommended practices. A standard is defined as a specification, the uniform application of which is necessary for the safety or regularity of international civil air navigation. A recommended practice, on the other hand, is defined as a specification, the uniform application of which is desirable in the interest of safety, regularity, or the efficiency of international civil aviation. 
As figure 5 shows, proposals to amend or add either new standards or recommended practices may come from any ICAO member, observer, committee, commission, panel, or other ICAO unit. The Council establishes CAEP’s work program and must approve the initiation of any work to amend or add new environmental standards. According to the Council’s mandate, CAEP imposes conditions for adopting environmental standards. The proposed standard must be economically reasonable, technologically feasible, and environmentally beneficial. Proposed new standards recommended for adoption by CAEP are submitted to the Council, where a two-thirds majority vote is required for adoption. Depending on the issue, the Council refers CAEP’s recommendations to either the Air Navigation Commission, the Air Transport Committee, or another appropriate body for review before acting on the recommendations. Technical standards are adopted by the Council unless a majority of ICAO members disapprove them. Policy issues are usually forwarded by the Council to the Assembly for resolution, where a majority vote is required for final action. If any member nation finds it impossible to comply, the country is required to notify ICAO of any differences that will exist at the time the standards or practices take effect. ICAO then publishes those notifications of differences in supplements to the annexes. For policy issues covered by an Assembly resolution, there is no requirement to file a difference if the nation chooses not to comply with any or all of the provisions. International aircraft noise standards adopted by ICAO are published as Chapters in Volume I of Annex 16 to the Convention on International Civil Aviation. Chapter 2 of Annex 16, Volume I, contains the aircraft noise standards that apply to jet aircraft designed prior to October 1977. Chapter 3 contains more stringent noise standards that apply to aircraft designed after that date.
Chapter 4 contains ICAO’s new noise standards, adopted in June 2001. The primary purpose of establishing noise standards is to reduce aircraft noise. This noise reduction, when combined with other measures, is intended to reduce the number of people exposed to significant levels of aircraft noise. On January 17, 2001, CAEP recommended the adoption of new aircraft noise standards. The new standards, incorporated into a new Chapter 4 of Annex 16, Volume I, are 10 decibels quieter than the Chapter 3 standards, on a cumulative basis, from aircraft noise measurements at takeoff, approach, and sideline. The Chapter 4 standards apply to new aircraft designed after January 1, 2006. The new standards do not apply to the current fleet or to current designs in production. On June 27, 2001, the ICAO Council unanimously approved the adoption of the new Chapter 4. CAEP also recommended procedures for recertifying existing aircraft to meet the new standards. According to FAA, based on the cost and environmental impact information reviewed by CAEP, there was unanimous agreement within CAEP that there should be no global phaseout of existing aircraft. The committee remained divided, however, over whether to recommend a regional phaseout of existing aircraft. The Assembly will consider this issue at its meeting from September 25 to October 5, 2001. CAEP also endorsed a balanced approach to noise management, which is an airport-by-airport approach to managing noise using all available measures—aircraft noise reduction, land use planning and noise mitigation measures, noise abatement operational procedures, and operating restrictions on aircraft—to address specific noise problems in a very targeted way. Because the United States is moving to a new, more stringent noise standard, we were asked to provide a retrospective analysis of the transition to current aircraft noise standards, including a discussion of expectations, results, and issues raised by the transition.
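As a simple illustration of the cumulative measure described above, the sketch below sums an aircraft’s margins relative to the Chapter 3 limits at the three certification points (takeoff, approach, and sideline) and tests the total against the 10-decibel Chapter 4 threshold. This is a minimal sketch, not FAA or ICAO software; the limit and measurement values are hypothetical, and the actual Chapter 3 limits vary with aircraft weight and number of engines.

```python
# Illustrative sketch: cumulative noise margin relative to Chapter 3 limits,
# summed over the three certification points named in the text.

CHAPTER4_CUMULATIVE_MARGIN = 10.0  # Chapter 4 is 10 decibels quieter, cumulatively

POINTS = ("takeoff", "approach", "sideline")

def cumulative_margin(measured, chapter3_limits):
    """Sum of (Chapter 3 limit - measured level) over the three points."""
    return sum(chapter3_limits[p] - measured[p] for p in POINTS)

def meets_chapter4_cumulative(measured, chapter3_limits):
    """True if the aircraft's cumulative margin meets the 10-decibel threshold."""
    return cumulative_margin(measured, chapter3_limits) >= CHAPTER4_CUMULATIVE_MARGIN

# Hypothetical values; real limits depend on weight and engine count.
limits = {"takeoff": 101.0, "approach": 105.0, "sideline": 103.0}
measured = {"takeoff": 96.5, "approach": 101.0, "sideline": 100.0}
print(cumulative_margin(measured, limits))          # 11.5
print(meets_chapter4_cumulative(measured, limits))  # True
```

Note that a single-point margin can be small as long as the three-point total reaches 10 decibels; the sketch captures only the cumulative condition stated in the text.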
To identify expectations and results and to discuss issues raised by the transition of existing aircraft to the current U.S. noise standards, known as “Stage 3,” we (1) reviewed the legislative history of the Airport Noise and Capacity Act of 1990; (2) conducted interviews and gathered information from the following agencies and organizations: FAA, the Environmental Protection Agency, the National Aeronautics and Space Administration, the International Civil Aviation Organization, the Aerospace Industries Association, the Airports Council International-North America, the Air Transport Association of America, Inc. (hereafter referred to as the Air Transport Association), the American Association of Airport Executives, the Cargo Airline Association, the General Aviation Manufacturers Association, the National Association of State Aviation Officials, the National Business Aviation Association, the Natural Resources Defense Council, Pratt & Whitney, and the Regional Airline Association; (3) conducted a literature search through the Internet and Lexis-Nexis and reviewed key documents; (4) discussed the Model for Assessing Global Exposure to the Noise of Transport Aircraft with FAA to assess the reliability of the model’s estimate of the number of people living in areas exposed to incompatible noise levels; (5) developed our own model for estimating the costs to airlines of moving to the current aircraft noise standards; and (6) compared results with expectations and analyzed the results to identify issues raised by the transition. We provided a written summary of our findings to the organizations listed under (2) above for their review and comment before completing the final draft report. We identified two recent estimates—one by the Air Transport Association and one by Pratt & Whitney—of the costs to airlines to comply with current Stage 3 aircraft noise standards.
Because both of these estimates attributed the full cost of new replacement aircraft to the noise requirements, we developed an estimate that focuses on the costs of the transition that are directly attributable to compliance with the noise standards (i.e., the cost of hushkitting or the incremental cost of financing a new aircraft early, whichever was lower). We estimated that the cost to comply with Stage 3 noise standards ranged from $3.8 billion to $4.9 billion in 2000 dollars. In 1999, the Air Transport Association commissioned the Campbell-Hill Aviation Group, a consulting firm, to estimate the airlines’ costs to transition to the current Stage 3 aircraft noise standards. Campbell-Hill estimated that the costs attributable to compliance were about $32 billion in 1999 dollars, not including fleet growth. This estimate covers the cost of replacing Stage 2 aircraft (including both the interest expense associated with the acquisition of replacement aircraft and the depreciation for new aircraft acquired prior to the end of the useful lives of the aircraft they replaced) and the cost of hushkitting or reengining Stage 2 aircraft to meet Stage 3 standards. Air Transport Association officials said that some airlines, depending on their fleet composition, were faced with significant commercial risk in deciding how to comply with ANCA if they chose to wait for the development and certification of hushkits that, in retrospect, proved less expensive. These kits are now readily available, but the airlines did not have the advantage of a perfect forecast of the availability of hushkit solutions for their aircraft affected by the phaseout. In cases where aircraft replacement actually was chosen during the phase-out period, the Campbell-Hill analysis gives the airlines credit for the entire replacement cost of a new aircraft because of this commercial risk, even if hushkitting would have been an option.
Pratt & Whitney also estimated the cost to the airlines of making their fleets compliant with the Stage 3 noise standards—about $15.5 billion in 1999 dollars. About $4 billion of this estimate was attributed to the cost of converting existing aircraft to meet the standard. The remaining $11.5 billion was the estimated cost to purchase replacement aircraft to comply with Stage 3 noise standards. The estimate includes the full purchase price of the new aircraft—an average price of $40 million each for 287 narrow-body jets designated as being replaced because of the phaseout of Stage 2 aircraft. This represents one-third of the total number of narrow-body jets that were replaced between 1990 and 1999. We developed our own estimate of the airlines’ transition costs directly attributable to compliance with the Stage 3 noise standards. We estimated that the appropriate cost directly attributable to requirements to comply with the Stage 3 noise standards ranged from $3.8 billion to $4.9 billion in 2000 dollars. We determined that the appropriate cost that could be attributed to compliance with the Stage 3 noise standards was either the cost for the conversion of an aircraft—the cost to retrofit an aircraft with a hushkit—or the incremental capital cost to finance the early purchase of a replacement aircraft, whichever cost was lower. Since hushkitting was expected and proved to be available for almost all types of aircraft, when the airlines chose more costly methods to achieve compliance—such as replacing the engines or purchasing new aircraft—we attributed that choice to other economic reasons or benefits, such as improved fuel efficiency, lower maintenance costs, and tax advantages. For example, changing an aircraft’s engines instead of hushkitting them would provide added fuel benefits not available simply by hushkitting the engines.
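The two components of the Pratt & Whitney estimate quoted above can be recombined as a quick arithmetic check. This restates only the figures already reported in the text; it is not new analysis.

```python
# Recombining the Pratt & Whitney components quoted in the text (1999 dollars):
# about $4 billion for converting existing aircraft, plus 287 replacement
# narrow-body jets at an average price of $40 million each.

conversion = 4.0e9
replacement = 287 * 40.0e6   # 11,480,000,000 -- reported as "about $11.5 billion"
total = conversion + replacement
print(f"${total / 1e9:.2f} billion")  # $15.48 billion, consistent with the ~$15.5 billion total
```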
To develop our estimate, we purchased data from AvSoft Limited, which provided the list of Stage 2 aircraft over 75,000 pounds in the U.S. fleet on November 5, 1990, the day that ANCA was passed. Using the 1990 database, we identified 2,372 Stage 2 aircraft in the U.S. fleet as of that date. Matching the AvSoft database to 2001 FAA data, we were able to directly identify 1,051 aircraft as being hushkitted (or reengined) and still in the fleet. For 689 of these 1,051 aircraft, FAA data indicated the exact hushkit used on the aircraft by Supplemental Type Certificate code. The FAA data indicated that another 362 of these 1,051 aircraft had been hushkitted or reengined but did not identify the exact hushkit used or indicate whether the aircraft had been reengined. For these 362 aircraft, since we did not know the exact hushkit used, we used the average of the cost for all the hushkits available for that model aircraft. In addition, we found that another 272 aircraft were Stage 3, so most likely had been hushkitted or reengined as well, although there was no direct match (the average cost of all hushkits available was also used for these 272 aircraft). Thus, we determined that at least 1,323 aircraft were modified (primarily hushkitted) to meet Stage 3 standards. We developed cost estimates for the remaining 1,049 aircraft. We found that 386 aircraft were beyond retirement age on December 31, 1999. To make this judgment, we assumed that the typical life span of a passenger aircraft was 30 years, while the typical life span of a cargo aircraft was 40 years. We assigned a hushkitting cost of zero to these aircraft. Next, we assumed that the cost of replacing an aircraft earlier than it would have otherwise been retired was the incremental cost of retiring the aircraft early—that is, the cost of borrowing the capital for the replacement aircraft earlier than would have normally occurred.
In 54 cases, we found that the aircraft were so close to retirement that the lowest cost option to comply with Stage 3 noise standards was the cost of capital expended before the anticipated retirement date to purchase a new aircraft. Lastly, for 611 aircraft, we determined that the cost of hushkitting was an appropriate estimate for the cost of complying with ANCA because the incremental capital cost to finance the early purchase of aircraft was greater. To estimate the cost of hushkitting an aircraft, we obtained data on hushkit base prices or a range of base prices, installation costs, additional maintenance costs and hours, and performance gains or losses from hushkit manufacturers. These data were generally available by aircraft model type. We applied the actual hushkit cost, or range of costs, for a particular model of aircraft and the cost of installing the hushkit. For the aircraft whose specific hushkit model cost data we could not obtain, we applied the average cost, or range of costs, of those hushkits available for the aircraft model and type. Because all hushkit manufacturers reported that increased maintenance was negligible, we did not include any cost in our estimate for changes in maintenance once an aircraft was compliant with Stage 3 noise standards. In addition, we did not include costs for downtime to install the hushkit. Although several airlines stated that hushkitting could not be scheduled during regular maintenance, we received downtime cost information from only one airline. An Air Transport Association official indicated that such information was not available for other airlines because of business confidentiality concerns. The one airline estimated that the cost to its business as the result of downtime amounted to $31 million, about 7 percent of the total cost to hushkit its fleet.
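The per-aircraft “whichever cost was lower” rule described above can be sketched as a simple calculation. This is a minimal illustration of the general approach, not our actual model; the simple-interest treatment of early financing and all dollar figures are assumptions chosen for the example.

```python
# Sketch of the per-aircraft compliance-cost rule described in the text:
# take the lower of (a) hushkit purchase plus installation cost and (b) the
# incremental capital cost of financing a replacement aircraft earlier than
# its assumed retirement date. All figures below are hypothetical.

def compliance_cost(hushkit_total, replacement_price, annual_rate, years_early):
    """Cost attributable to the Stage 3 mandate for one aircraft."""
    if years_early <= 0:
        # Aircraft already at or past the assumed retirement age (30 years for
        # passenger, 40 for cargo) are assigned a cost of zero, as in the text.
        return 0.0
    # Simple-interest sketch of the extra financing cost of buying early.
    early_financing = replacement_price * annual_rate * years_early
    return min(hushkit_total, early_financing)

# Hypothetical cases: a $2.3M hushkit vs. financing a $40M replacement early.
print(compliance_cost(2.3e6, 40.0e6, 0.08, 6))    # 2300000.0 -- hushkitting is cheaper
print(compliance_cost(2.3e6, 40.0e6, 0.08, 0.5))  # 1600000.0 -- near retirement, early buy is cheaper
print(compliance_cost(2.3e6, 40.0e6, 0.08, 0))    # 0.0 -- already past retirement age
```

The three cases mirror the three groups in the text: the 611 aircraft for which hushkitting was cheaper, the 54 aircraft close enough to retirement that early replacement financing was the lower cost, and the 386 aircraft assigned a cost of zero.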
As a result, our estimate of the cost to the airlines to meet Stage 3 noise standards might be slightly higher if downtime cost information had been available for all airlines. Generally, hushkit manufacturers also reported that performance changes after an aircraft was hushkitted were negligible, so we did not include a cost estimate of these factors in our calculations. Some hushkit manufacturers, however, did report slight speed decreases, weight increases, and/or fuel burn increases. The use of engine upgrades for Boeing 747 aircraft to meet Stage 3 noise standards also resulted in slight fuel burn increases. In addition to those named above, Beverly Ann Bendekgey, David K. Hooper, Arthur L. James, Kieran E. McCarthy, Mark E. Stover, and John A. Thomson made key contributions to this report.

The transition to quieter aircraft required by the Airport Noise and Capacity Act of 1990 was expected to benefit communities, airports, and airlines. In turn, the transition was expected to reduce community opposition to airport operations and expansion and to reduce the demand for funds provided for noise abatement through federal grants and user charges. The results expected from the transition to quieter aircraft were partially realized. The transition occurred as planned and considerably reduced the population exposed to noise levels incompatible with residential living. Nevertheless, noise concerns remain a barrier to airport expansion, and the demand for federally authorized support for noise abatement efforts has continued. GAO identified two key issues for review by the aviation community. First, even though fewer people are exposed to aircraft noise, according to a survey in 1999-2000, more than half of the noise complaints came from people living in areas exposed to noise levels that FAA considers compatible with residential living.
Second, if people are allowed to move to areas close to an airport, they may later find themselves exposed to noise levels that FAA considers incompatible with residential living as the airport's operations grow to meet rising demands. Furthermore, residential development in such areas could generate new opposition to airport operations and future expansion plans.
Opioids are drugs that slow down the actions of the body, such as breathing and heartbeat, by binding with certain receptors in the body. Some patients are prescribed opioids to treat pain. Opioid medications are available as immediate or extended release and in different forms, such as a pill, liquid, or a patch worn on the skin. Over time, the body becomes tolerant to them, which means that larger doses are needed to achieve the same effect. People may use opioids in a manner other than as prescribed—that is, they can be abused or misused. Because opioids are highly addictive substances, they can pose serious risks when they are abused or misused, which can lead to addiction and cause death. Symptoms of opioid addiction include a strong desire for opioids, inability to control or reduce use, and continued use despite interference with major obligations or social functioning, among others. Three medications are currently approved by the FDA for use in MAT for opioid addiction—methadone, buprenorphine, and naltrexone. Methadone: Methadone is a full opioid agonist, meaning it binds to and activates opioid receptors to help prevent withdrawal symptoms and reduce drug craving. It has a long history of use in treatment of opioid dependence in adults. It suppresses withdrawal symptoms in detoxification therapy, which involves stabilizing patients who are addicted by withdrawing them in a controlled manner. Methadone also controls the craving for opioids in maintenance therapy, which is ongoing therapy meant to prevent relapse and increase treatment retention. It can be administered to patients as an oral solution or in tablet form. Methadone also carries risk of abuse. Buprenorphine: Buprenorphine is a partial opioid agonist, meaning it binds to opioid receptors and activates them, but not as much as full opioid agonists. 
It reduces or eliminates opioid withdrawal symptoms, including drug cravings, and it may do so without producing the euphoria or dangerous side effects of heroin and other opioids. It can be used for detoxification treatment and maintenance therapy. It is available in tablet form or film for sublingual (under the tongue) administration both in a stand-alone formulation and in combination with another agent called naloxone, and as a subdermal (under the skin) implant. Buprenorphine also carries risk of abuse. Naltrexone: Naltrexone is an opioid antagonist, meaning it binds to opioid receptors but does not activate them. It is used for relapse prevention following complete detoxification from opioids. Naltrexone prevents opioid drugs from binding to and activating opioid receptors, thus blocking the euphoria the user would normally feel and causing severe withdrawal symptoms if recent opioid use has occurred. It can be taken orally in tablets or as a once-monthly injection given in a doctor’s office. Naltrexone carries no known risk of abuse. Two of the three medications used to treat opioid addiction—methadone and buprenorphine—are controlled substances and are governed at the federal level by the Controlled Substances Act (CSA). Enacted in 1970, the CSA and its implementing regulations establish a framework through which the federal government regulates the use of these substances for legitimate medical, scientific, research, and industrial purposes, while preventing them from being diverted for illegal purposes. The CSA assigns controlled substances—including narcotics, stimulants, depressants, hallucinogens, and anabolic steroids—to one of five schedules based on the substance’s medical use, potential for abuse, and risk of dependence. Schedule I contains substances that have no currently accepted medical use and may not be manufactured, distributed, or dispensed under federal law. 
In contrast, Schedules II, III, IV, and V include substances that have recognized medical uses and may be manufactured, distributed, and dispensed in accordance with the CSA. The order of the schedules reflects substances that are progressively less dangerous and addictive, as shown in table 1 below. When used for pain management, methadone and buprenorphine are regulated under federal laws and regulations that apply to controlled substances generally and do not impose requirements unique to methadone or buprenorphine. However, certain requirements—such as restrictions on prescriptions—vary based on the schedule in which a controlled substance is classified. Methadone, like oxycodone (e.g., OxyContin), is a Schedule II controlled substance, which has the highest potential for abuse among scheduled drugs with an accepted medical use. Buprenorphine is a Schedule III controlled substance, which has currently accepted medical uses and a lower potential for abuse. The CSA requires practitioners who dispense, administer, or prescribe methadone or buprenorphine and all other controlled substances in Schedules II-V to register with DEA. In order to be registered, an applicant must meet certain criteria, including being licensed or otherwise authorized to dispense, administer, or prescribe controlled substances under the laws of the state in which they practice. Practitioners must reapply for this registration every three years. The CSA also imposes certain requirements regarding the issuance of prescriptions for methadone and buprenorphine; these requirements vary depending on the controlled substance's schedule. For example, when used for pain management, methadone and other Schedule II controlled substances may typically only be dispensed by pharmacists based on a written or electronic prescription.
In contrast, buprenorphine and other Schedule III controlled substances may be dispensed based on a written, electronic, or oral prescription (i.e., a practitioner calls a pharmacist with the prescription). In addition, when used for pain management, prescriptions for Schedule II controlled substances may not be refilled, whereas prescriptions for Schedule III controlled substances may be refilled up to five times within six months after the date of the original prescription. All prescriptions for controlled substances—regardless of their schedule—must be issued for a legitimate medical purpose by a registered practitioner acting in the usual course of professional practice. However, certain CSA requirements do not apply when Schedule II-V controlled substances such as methadone and buprenorphine are used for pain management. For example, the CSA's inventory and recordkeeping requirement—which requires certain practitioners to maintain inventories of controlled substances and to make those inventories available for inspection for at least two years—generally does not apply when a practitioner prescribes or administers Schedule II-V controlled substances in the lawful course of professional practice for pain management purposes. When used for opioid addiction treatment, the CSA and implementing regulations issued by DEA and SAMHSA impose requirements in addition to those that generally apply when methadone and buprenorphine are used to treat pain. See table 2 for a comparison of these requirements. Prescriptions cannot be issued for methadone when used for opioid addiction treatment. Therefore, when used for that purpose, methadone may generally only be administered or dispensed within an OTP. Under the CSA, OTPs must be certified by SAMHSA and registered by DEA. To be eligible for full certification, an OTP must first be accredited by a SAMHSA-approved accrediting organization.
Accreditation is a peer-review process in which an accrediting organization evaluates an OTP by making site visits and reviewing policies, procedures, and practices. Once an OTP is accredited, SAMHSA may certify it if SAMHSA determines that the OTP conforms with federal regulations governing opioid treatment standards. Among other things, federal opioid treatment standards set forth patient admission criteria, recordkeeping guidelines, and required services, such as counseling. Once certified by SAMHSA, the OTP must apply for a separate registration from DEA—that is, a registration distinct from and in addition to the previously described DEA registration generally required of all practitioners who administer, dispense, or prescribe controlled substances. In order to register an OTP, DEA must determine that the OTP will comply with any applicable DEA requirements regarding the security of the stocks of controlled substances being used for treatment, as well as inventory and recordkeeping requirements. OTP registration from DEA must be renewed annually. With limited exceptions, OTPs must administer methadone while patients are at the OTP facility. Federal opioid treatment standards permit patients to receive a single take-home dose for a day when an OTP is closed, including weekends and federal holidays. The medical director of an OTP may also allow certain patients to take home a specific number of doses based on the duration of the treatment the patient has completed. OTPs are required to maintain current procedures adequate to identify the theft or diversion of take-home medications. Methadone could also be used outside of an OTP, such as in an emergency room, under an exception known as the “3-day rule,” which permits a practitioner who is not separately registered as an OTP to administer—but not prescribe—narcotic drugs to a patient to relieve acute withdrawal symptoms while arranging for the patient's referral to treatment.
Like methadone, buprenorphine can be administered or dispensed in an OTP when used for addiction treatment. In addition, qualifying practitioners who receive a Drug Addiction Treatment Act of 2000 (DATA 2000) waiver may dispense or prescribe buprenorphine for opioid addiction treatment to a limited number of patients in an outpatient setting, such as a doctor's office. Until recently, only physicians were eligible to receive a DATA 2000 waiver. On July 22, 2016, the Comprehensive Addiction and Recovery Act of 2016 amended the CSA to also permit qualifying nurse practitioners and physician assistants to receive a DATA 2000 waiver from the date of the act's enactment until October 1, 2021. To qualify for a waiver, practitioners must be appropriately licensed under state law and have expertise as evidenced by certain certification, training, or experience. In addition, practitioners must have the capacity to refer patients for appropriate counseling and other services. Practitioners who receive a DATA 2000 waiver from SAMHSA may treat 30 patients in their first year under the waiver and may increase to 100 patients after one year upon submission of a notice to the Secretary of HHS. As of August 8, 2016, certain practitioners may be approved to treat up to 275 patients after one year. Practitioners who prescribe or dispense buprenorphine under a DATA 2000 waiver are subject to the CSA's inventory and recordkeeping requirement. In addition to laws and regulations, several key factors can affect patients' access to MAT for opioid addiction, according to articles and documents we reviewed and our interviews with stakeholders and agency officials. These factors include (1) the availability of qualified practitioners and their capacity to meet patient demand for MAT; (2) perceptions of MAT and its value among patients, practitioners, and institutions; and (3) financing issues related to the availability and limits of insurance coverage for MAT.
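The waiver patient limits described above form a simple cap schedule: 30 patients in the first year, 100 after one year upon notice to the Secretary of HHS, and, as of August 8, 2016, up to 275 for certain approved practitioners. A minimal sketch encoding that schedule (the function name and flags are hypothetical, for illustration only; the statute's actual conditions are more detailed):

```python
# Illustrative encoding of the DATA 2000 waiver patient caps described
# above; the function name and arguments are hypothetical, not part of
# any statute or agency system.

def data2000_patient_cap(years_under_waiver: int,
                         notified_hhs: bool = False,
                         approved_for_275: bool = False) -> int:
    """Return the maximum patient count for a waivered practitioner."""
    if years_under_waiver < 1:
        return 30                  # first year under the waiver
    if approved_for_275:
        return 275                 # certain practitioners, after one year
    if notified_hhs:
        return 100                 # after one year, with notice to HHS
    return 30                      # no notice submitted: cap stays at 30

print(data2000_patient_cap(0))                          # 30
print(data2000_patient_cap(2, notified_hhs=True))       # 100
print(data2000_patient_cap(2, approved_for_275=True))   # 275
```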
Practitioner availability and capacity. According to literature we reviewed and some stakeholders we interviewed, the number of qualified practitioners—specifically OTPs and physicians with waivers who can prescribe buprenorphine—available to offer MAT services for opioid addiction may affect patients’ access to this treatment. Further, some of these practitioners may be operating at full capacity, leading to wait lists that can affect patients’ access to MAT. For example, in March 2016, SAMHSA reported that there were approximately 1,400 OTPs. However, a 2014 SAMHSA brief and several stakeholders and articles stated that opioid-dependent individuals may not be able to access MAT due to lack of nearby OTPs, which are mostly concentrated in urban areas. As a result, some stakeholders told us that patients in rural areas have to travel several hours on a daily basis to seek treatment because they generally need to be at the OTP every day to take methadone. When OTPs are accessible, one article estimated that the majority of them are operating at 80 percent or more capacity, suggesting that they would not be able to handle a significant number of new patients, and another article noted that OTPs can have extensive waitlists. Some articles stated that OTP capacity is limited by factors such as funding limitations and a variety of state and local requirements. In addition, several articles and a 2014 SAMHSA brief highlighted that the availability of buprenorphine, after FDA approval in 2002, helped to expand access to MAT for opioid addiction but also noted that patients’ access is impeded by the availability of physicians and the limits on the number of patients that each physician can treat. In March 2016, SAMHSA reported that there were approximately 32,000 physicians with DATA 2000 waivers to prescribe buprenorphine. 
According to one article, use of buprenorphine in OTPs has been limited, and another article noted that, in 2011, 43 percent of counties in the United States had no physicians with waivers who could prescribe buprenorphine as part of MAT. In March 2016, SAMHSA also reported that there is substantial geographic variation in the capacity to prescribe buprenorphine, including shortages of physicians, primarily in rural areas. Several articles and stakeholders noted that these patient limits can affect provider capacity and restrict access to MAT, even after the CSA was amended in 2006 to increase the maximum number of patients per physician from 30 patients to 100 patients after the first year. For example, several stakeholders told us that patients experience waiting lists for treatment when physicians are treating their maximum number of patients. One article noted that because there are many areas of the country that have an insufficient number of physicians, the result is that many people needing treatment may remain on waitlists for weeks or months. It added that prolonged waitlists are associated with reduced likelihood of treatment entry. Perceptions of MAT and its value. Several stakeholders, articles, and documents reported that perceptions of MAT, such as perceived stigma among patients and questions about its value among practitioners and institutions, can affect patients’ access to MAT. Eight articles we reviewed noted that a perceived stigma about the use of MAT—especially methadone—among patients can make them reluctant to seek treatment, subsequently leading to social isolation and undermining the chances of long-term recovery. Another article noted that OTPs experience discrimination, such as community opposition, because they offer onsite medical care to people who are dependent on opioids. 
Another article stated that because of this perceived stigma, there is a desire among some patients to avoid OTPs to limit interactions with others who may be drug users and to avoid daily attendance requirements. This perception often makes buprenorphine—a MAT medication that can be prescribed in an office-based setting—a more attractive treatment option to many patients. In addition, some practitioners may be reluctant to provide MAT based on beliefs about the value of using medications for treating addiction. For example, some articles and a 2014 SAMHSA report found that despite science-based evidence regarding the effective use of MAT, some practitioners do not believe there is a role for medications in the treatment of addiction disorders. Similarly, according to several articles we reviewed and stakeholders we interviewed, many practitioners believe in the efficacy of abstinence-based treatment—when patients are treated without medication—to treat addiction, even though research indicates that abstinence fails a large proportion of the time and is generally less effective than MAT. Several documents and the literature we reviewed examined the reasons why MAT is not used more frequently within the criminal justice system. For example, a 2011 Legal Action Center report, a 2014 SAMHSA brief, and two articles cited various reasons why drug courts and other sentencing officials deny access to MAT. These included a lack of understanding about the nature of addiction and MAT, such as the belief that MAT is substituting one addiction for another. In addition, some judges may view opioid addiction as a social problem that is best addressed through abstinence. The 2014 SAMHSA brief and some reports and articles we reviewed show that institutions within the criminal justice system have policies that limit MAT, and these policies may be influenced by both negative perceptions of MAT and other factors, such as concerns over the risk of diversion. 
For example, some of these documents noted that some drug courts have policies that prohibit participants from using any controlled substances, which would include MAT. Some stakeholders, documents, and an article we reviewed highlighted education as a key mechanism that can help reduce the perceived stigma of MAT and improve perceptions of its value, including among those within the criminal justice system. For example, some stakeholders told us that they organize town hall meetings and workshops to educate their communities about the importance of MAT. The stakeholders explained that these efforts are opportunities to help educate patients and practitioners about MAT. Also, some documents and an article we reviewed on MAT and the criminal justice system noted that people's views about MAT can be addressed through education. Availability and limits of insurance coverage. According to several stakeholders and articles we reviewed, financing of treatment is a key factor that can affect patients' access to MAT. Specifically, these sources show that the availability and limits of insurance coverage for MAT can create access challenges for patients who lack insurance, as well as for those with insurance. For example, patients with no insurance coverage for MAT may face prohibitive out-of-pocket costs that may limit their access to it. According to one article, a month's supply of a daily dose of sublingual buprenorphine may cost such patients between $200 and $450. According to another article, access to injectable naltrexone among the uninsured is also limited due to costs that can range from $750 to $1,200 a month. A third article that reviewed available literature on MAT found that the monthly cost of injectable naltrexone is significantly higher than that of the other MAT medications—buprenorphine and methadone. Because of this, the article noted that cost is often a factor that practitioners consider when determining whether to prescribe naltrexone to a patient.
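To put the monthly out-of-pocket figures above on a common footing, a minimal sketch annualizing the cited cost ranges (the monthly ranges come from the articles cited above; multiplying by 12 is a naive annualization that ignores price changes and partial-year treatment):

```python
# Annualize the monthly out-of-pocket cost ranges cited above for
# uninsured patients; multiplying by 12 is a naive annualization.

monthly_cost_ranges = {
    "sublingual buprenorphine": (200, 450),
    "injectable naltrexone": (750, 1_200),
}

for medication, (low, high) in monthly_cost_ranges.items():
    print(f"{medication}: ${low * 12:,}-${high * 12:,} per year")
# → sublingual buprenorphine: $2,400-$5,400 per year
# → injectable naltrexone: $9,000-$14,400 per year
```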
For individuals with insurance, the benefit coverage for MAT-related services can vary by insurance plan and by state. According to one article, some private health insurance plans do not cover buprenorphine treatment, or they impose limits on the length of treatment with buprenorphine. Some sources reported that lifetime coverage limits for buprenorphine can range from 12 months to 36 months, even though some patients may need access to the medications for the rest of their lives to prevent relapse. Similarly, a 2014 SAMHSA report found that although state Medicaid programs reimburse for at least one of the three MAT medications, most states did not reimburse for all three. In some cases, state Medicaid programs also limit the length of time that the medications can be used. Although Medicaid expansion allowed under the Patient Protection and Affordable Care Act could increase the number of individuals with coverage for substance abuse treatment, including MAT, the specific coverage can vary by state. We have previously examined access to behavioral health treatment—which can include MAT—in 10 states, and found that officials in 2 of the states that expanded Medicaid reported that the availability of behavioral health treatment has generally increased, although some concerns about access remain. Specifically, officials in these states reported difficulties providing Medicaid enrollees with access to certain MAT medications due to lack of physicians willing to prescribe these drugs for Medicaid enrollees. We provided a draft of this report to the Office of National Drug Control Policy, HHS, and the Department of Justice. The Office of National Drug Control Policy agreed with the report’s findings, and the office’s comments are reprinted in appendix II. The Office of National Drug Control Policy and HHS provided technical comments, which we incorporated as appropriate. The Department of Justice had no comments. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Department of Justice, HHS, the Office of National Drug Control Policy, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. To identify and describe published research on key factors that affect access to medication-assisted treatment (MAT) for opioid addiction, we conducted a literature search for relevant articles published in peer-reviewed and scholarly journals from January 2011 through April 2016. We searched the following databases for relevant articles in peer-reviewed and scholarly journals: ABI/Inform, PubMed, Biosis Databases, ProQuestDialog, PsycINFO, LexisNexis, and Law Reviews Combined. Key search terms included various combinations of “opioid addiction,” “opioid use,” “opioid dependence,” “medication assisted treatment,” “methadone,” “buprenorphine,” and “naltrexone.” From all database sources, 188 articles were identified. We first reviewed the abstracts for each of these articles for relevancy in determining key factors that affect access to MAT for opioid addiction. For those abstracts we found relevant, we obtained and reviewed the full article and excluded those where the article was (1) published prior to 2011; (2) a duplicate of another article; (3) an editorial or commentary; (4) a dissertation; (5) not focused on the use of MAT within the United States; or (6) not focused on factors affecting patients' use of MAT.
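The screening steps above amount to a sequence of exclusion filters applied to the 188 retrieved abstracts. A minimal sketch of that workflow (the record fields and helper function are hypothetical, for illustration only; they are not the actual review database):

```python
# Illustrative screening filter mirroring the six exclusion criteria
# above; the Article fields are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Article:
    year: int
    is_duplicate: bool = False
    is_editorial_or_commentary: bool = False
    is_dissertation: bool = False
    us_focused: bool = True
    mat_access_focused: bool = True

def passes_screen(a: Article) -> bool:
    """Apply the six exclusion criteria; True means the article is kept."""
    return (a.year >= 2011
            and not a.is_duplicate
            and not a.is_editorial_or_commentary
            and not a.is_dissertation
            and a.us_focused
            and a.mat_access_focused)

sample = [Article(year=2014),
          Article(year=2009),                      # excluded: pre-2011
          Article(year=2013, is_duplicate=True)]   # excluded: duplicate
kept = [a for a in sample if passes_screen(a)]
print(len(kept))   # 1
```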
After excluding these articles, 50 articles remained. We also independently identified three additional articles as we reviewed related documentation. For a complete list of the articles, see the bibliography at the end of this report. As part of our work, we examined the methodologies of all identified studies and determined that they were sufficiently reliable for the purposes of our report. Elizabeth H. Curda, Acting Director, (202) 512-7114 or [email protected]. In addition to the contact name above, Linda Kohn, Director; Will Simerl, Assistant Director; Natalie Herzog, Analyst-in-Charge; La Sherri Bush; and Emily Wilson made key contributions to this report. Also contributing were Joanna Berry, Christine Davis, Krister Friday, Cherie’ Starck, and Eric Wedum. Abraham, A. J., H. K. Knudsen, T. Rieckmann, and P. M. Roman. “Disparities in Access to Physicians and Medications for the Treatment of Substance Use Disorders Between Publicly and Privately Funded Treatment Programs in the United States.” Journal of Studies on Alcohol and Drugs, vol. 74 (2013). Aletraris, L., M. B. Edmond, M. Paino, D. Fields, and P. M. Roman. “Counselor Training and Attitudes toward Pharmacotherapies for Opioid Use Disorder.” Substance Abuse, vol. 37, no. 1 (2016). Aletraris, L., M. B. Edmond, and P. M. Roman. “Adoption of Injectable Naltrexone in U.S. Substance Use Disorder Treatment Programs.” Journal of Studies on Alcohol and Drugs, vol. 76 (2015). Andrews, C. M., T. A. D’Aunno, H. A. Pollack, and P. D. Friedmann. “Adoption of Evidence-Based Clinical Innovations: The Case of Buprenorphine Use by Opioid Treatment Programs.” Medical Care Research and Review, vol. 71 (2014). Barnes, M. C. and S. L. Worthy. “Achieving Real Parity: Increasing Access to Treatment for Substance Use Disorders under the Patient Protection and Affordable Care Act and the Mental Health and Addiction Equity Act.” University of Arkansas at Little Rock Law Review (2014). Bonhomme, J., R. S. Shim, R. Gooden, D.
Tyus, and G. Rust. “Opioid Addiction and Abuse in Primary Care Practice: A Comparison of Methadone and Buprenorphine as Treatment Options.” Journal of the National Medical Association, vol. 104, no. 7-8 (2012). Cheever, L. W., T. F. Kresina, A. Cajina, and R. Lubran. “A Model Federal Collaborative to Increase Patient Access to Buprenorphine Treatment in HIV Primary Care.” J. Acquir. Immune Defic. Syndrome, vol. 56, supp. 1 (2011). Chou, R., R. A. Cruciani, D. A. Fiellin, P. Compton, J. T. Farrar, M. C. Haigney, C. Inturrisi, J. R. Knight, S. Otis-Green, S. M. Marcus, D. Mehta, M. C. Meyer, R. Portenoy, S. Savage, E. Strain, S. Walsh, and L. Zeltzer. “Methadone Safety: A Clinical Practice Guideline from the American Pain Society and College on Problems of Drug Dependence, in Collaboration with the Heart Rhythm Society.” The Journal of Pain, vol. 15, no. 4 (2014). Connery, H. S. “Medication-Assisted Treatment of Opioid Use Disorder: Review of the Evidence and Future Directions.” Harvard Review of Psychiatry, vol. 23, no. 2 (2015). Cousins, S. J., L. Denering, D. Crevecoeur-MacPhail, J. Viernes, W. Sugita, J. Barger, T. Kim, S. Weinmann, and R. Rawson. “A Demonstration Project Implementing Extended-Release Naltrexone in Los Angeles County.” Substance Abuse, vol. 37, no. 1 (2016). DeFulio, A., J. J. Everly, S. Leoutsakos, A. Umbricht, and M. Fingerhood. “Employment-Based Reinforcement of Adherence to an FDA Approved Extended Release Formulation of Naltrexone in Opioid-Dependent Adults: A Randomized Controlled Trial.” Drug and Alcohol Dependence, vol. 120 (2012). D’Onofrio, G., P. G. O’Connor, M. V. Pantalon, M. C. Chawarski, S. H. Busch, P. H. Owens, S. L. Bernstein, and D. A. Fiellin. “Emergency Department-Initiated Buprenorphine/Naloxone Treatment for Opioid Dependence: A Randomized Clinical Trial.” JAMA, vol. 313, no. 16 (2015). Farmer, C. M., D. Lindsay, J. Williams, A. Ayers, J. Schuster, A. Cilia, M. T. Flaherty, T. Mandell, A. J. Gordon, and B. D. Stein.
“Practice Guidance for Buprenorphine for the Treatment of Opioid Use Disorders: Results of an Expert Panel Process.” Substance Abuse, vol. 36, no. 2 (2015). Fox, A. D., A. Chamberlain, T. Frost, and C. O. Cunningham. “Harm Reduction Agencies as a Potential Site for Buprenorphine Treatment.” Substance Abuse, vol. 36, no. 2 (2015). Fox, A. D., J. Maradiaga, L. Weiss, J. Sanchez, J. L. Starrels, and C. O. Cunningham. “Release from Incarceration, Relapse to Opioid Use and the Potential for Buprenorphine Maintenance Treatment: A Qualitative Study of the Perceptions of Former Inmates with Opioid Use Disorder.” Addiction Science & Clinical Practice, vol. 10, no. 2 (2015). Fox, A. D., P. A. Shah, N. L. Sohler, C. M. Lopez, J. L. Starrels, and C. O. Cunningham. “I Heard About It From a Friend: Assessing Interest in Buprenorphine Treatment.” Substance Abuse, vol. 35 (2014). Friedmann, P. D., D. Wilson, H. K. Knudsen, L. J. Ducharme, W. N. Welsh, L. Frishman, K. Knight, H. J. Lin, A. James, C. E. Albizu-Garcia, J. Pankow, E. A. Hall, T. F. Urbine, S. Abdel-Salam, J. L. Duvall, and F. J. Vocci. “Effect of an Organizational Linkage Intervention on Staff Perceptions of Medication-Assisted Treatment and Referral Intentions in Community Corrections.” Journal of Substance Abuse Treatment, vol. 50 (2015). Fu, J. J., N. D. Zaller, M. A. Yokell, A. R. Bazazi, and J. D. Rich. “Forced Withdrawal from Methadone Maintenance Therapy in Criminal Justice Settings: A Critical Treatment Barrier in the United States.” Journal of Substance Abuse Treatment, vol. 44 (2013). Green, C. A., D. McCarty, J. Mertens, F. L. Lynch, A. Hilde, A. Firemark, C. M. Weisner, D. Pating, and B. M. Anderson. “A Qualitative Study of the Adoption of Buprenorphine for Opioid Addiction Treatment.” Journal of Substance Abuse Treatment, vol. 46 (2014). Hall, G., C. J. Neighbors, J. Iheoma, S. Dauber, M. Adams, R. Culleton, F. Muench, S. Borys, R. McDonald, and J. Morgenstern.
“Mobile Opioid Agonist Treatment and Public Funding Expands Treatment for Disenfranchised Opioid-Dependent Individuals.” Journal of Substance Abuse Treatment, vol. 46 (2014). Homenko, H. “Rehabilitating Opioid Regulation: A Prescription for the FDA’s Next Proposal of An Opioid Risk Evaluation and Mitigation Strategy (REMS).” Health Matrix: Journal of Law-Medicine (2012). Jones, C. M., M. Campopiano, G. Baldwin, and E. McCance-Katz. “National and State Treatment Need and Capacity for Opioid Agonist Medication-Assisted Treatment.” American Journal of Public Health, vol. 105, no. 8 (2015). Kelch, B. P. and N. J. Piazza. “Medication-Assisted Treatment: Overcoming Individual Resistance Among Members in Groups Whose Membership Consists of Both Users and Nonusers of MAT: A Clinical Review.” Journal of Groups in Addiction & Recovery, vol. 6 (2011). Kelly, S. M., B. S. Brown, E. C. Katz, K. E. O’Grady, S. G. Mitchell, S. King, and R. P. Schwartz. “A Comparison of Attitudes Toward Opioid Agonist Treatment among Short-Term Buprenorphine Patients.” American Journal of Drug and Alcohol Abuse, vol. 38, no. 3 (2012). Knudsen, H. K. “The Supply of Physicians Waivered to Prescribe Buprenorphine for Opioid Use Disorders in the United States: A State-Level Analysis.” Journal of Studies on Alcohol and Drugs, vol. 76 (2015). Knudsen, H. K., A. J. Abraham, and P. M. Roman. “Adoption and Implementation of Medication in Addiction Treatment Programs.” J. Addict. Med., vol. 5, no. 1 (2011). Knudsen, H. K., M. R. Lofwall, J. R. Havens, and S. L. Walsh. “States’ Implementation of the Affordable Care Act and the Supply of Physicians Waivered to Prescribe Buprenorphine for Opioid Dependence.” Drug and Alcohol Dependence, vol. 157 (2015). Kraus, M., D. P. Alford, M. M. Kotz, P. Levounis, T. W. Mandell, M. Meyer, E. A. Salsitz, N. Wetterau, and S. A. Wyatt.
“Statement of the American Society of Addiction Medicine Consensus Panel on the Use of Buprenorphine in Office-Based Treatment of Opioid Addiction.” J. Addict. Med., vol. 5, no. 4 (2011). Kresina, T. F. and R. Lubran. “Improving Public Health Through Access to and Utilization of Medication Assisted Treatment.” International Journal of Environmental Research and Public Health, vol. 8 (2011). LaBelle, C. T., S. C. Han, A. Bergeron, and J. H. Samet. “Office-Based Opioid Treatment with Buprenorphine (OBOT-B): State-wide Implementation of the Massachusetts Collaborative Care Model in Community Health Centers.” Journal of Substance Abuse Treatment, vol. 60 (2016). Lee, J., T. F. Kresina, M. Campopiano, R. Lubran, and H. W. Clark. “Use of Pharmacotherapies in the Treatment of Alcohol Use Disorders and Opioid Dependence in Primary Care.” BioMed Research International, vol. 2015 (2015). Maradiaga, J. A., S. Nahvi, C. O. Cunningham, J. Sanchez, and A. D. Fox. “‘I Kicked the Hard Way. I Got Incarcerated.’ Withdrawal from Methadone During Incarceration and Subsequent Aversion to Medication Assisted Treatments.” Journal of Substance Abuse Treatment, vol. 62 (2016). Matusow, H., S. L. Dickman, J. D. Rich, C. Fong, D. M. Dumont, C. Hardin, D. Marlowe, and A. Rosenblum. “Medication Assisted Treatment in US Drug Courts: Results from a Nationwide Survey of Availability, Barriers, and Attitudes.” Journal of Substance Abuse Treatment, vol. 44 (2013). Maxwell, J. C. “The Pain Reliever and Heroin Epidemic in the United States: Shifting Winds in the Perfect Storm.” Journal of Addictive Diseases, vol. 34 (2015). Mitchell, S. G., J. Willet, L. B. Monico, A. James, D. S. Rudes, J. Viglioni, R. P. Schwartz, M. S. Gordon, and P. D. Friedmann. “Community Correctional Agents’ Views of Medication-Assisted Treatment: Examining their Influence on Treatment Referrals and Community Supervision Practices.” Substance Abuse, vol. 37, no. 1 (2016). Monico, L. B., J. Gryczynski, S. G. Mitchell, R. P.
Schwartz, K. E. O’Grady, and J. H. Jaffe. “Buprenorphine Treatment and 12-step Meeting Attendance: Conflicts, Compatibilities, and Patient Outcomes.” Journal of Substance Abuse Treatment, vol. 57 (2015). Paone, D., E. Tuazon, M. Stajic, B. Sampson, B. Allen, S. Mantha, and H. Kunins. “Buprenorphine Infrequently Found in Fatal Overdose in New York City.” Drug and Alcohol Dependence, vol. 155 (2015). Parrino, M. W., A. G. I. Maremmani, P. N. Samuels, and I. Maremmani. “Challenges and Opportunities for the Use of Medications to Treat Opioid Addiction in the United States and Other Nations of the World.” Journal of Addictive Diseases, vol. 34, no. 2-3 (2015). Pecoraro, A. and G. E. Woody. “Medication-Assisted Treatment for Opioid Dependence: Making a Difference in Prisons.” F1000 Medicine Reports, vol. 3, no. 1 (2011). “Proceedings of the AMCP Partnership Forum: Breaking the Link Between Pain Management and Opioid Use Disorder.” Journal of Managed Care & Specialty Pharmacy, vol. 21, no. 2 (2015). Roman, P. M., A. J. Abraham, and H. K. Knudsen. “Using Medication-Assisted Treatment for Substance Use Disorders: Evidence of Barriers and Facilitators of Implementation.” Addictive Behaviors, vol. 36 (2011). Sansone, R. A. and L. A. Sansone. “Buprenorphine Treatment for Narcotic Addiction: Not Without Risks.” Innovations in Clinical Neuroscience, vol. 12, no. 3-4 (2015). Sarpatwari, A. “Just Say No: The Case Against the Reclassification of Buprenorphine.” University of Maryland Law Journal of Race, Religion, Gender and Class (2012). Sigmon, S. C. “Interim Treatment: Bridging Delays to Opioid Treatment Access.” Preventive Medicine, vol. 80 (2015). Sigmon, S. C. “The Untapped Potential of Office-Based Buprenorphine Treatment.” JAMA Psychiatry, vol. 72, no. 4 (2015). Stancliff, S., H. Joseph, C. Fong, T. Furst, S. D. Comer, and P. Roux.
“Opioid Maintenance Treatment as a Harm Reduction Tool for Opioid-Dependent Individuals in New York City: The Need to Expand Access to Buprenorphine in Marginalized Populations.” Journal of Addictive Diseases, vol. 31, no. 3 (2012). Stein, B. D., A. J. Gordon, A. W. Dick, R. M. Burns, R. L. Pacula, C. M. Farmer, D. L. Leslie, and M. Sorbero. “Supply of Buprenorphine Waivered Physicians: The Influence of State Policies.” Journal of Substance Abuse Treatment, vol. 48 (2015). Stein, B. D., R. L. Pacula, A. J. Gordon, R. M. Burns, D. L. Leslie, M. J. Sorbero, S. Bauhoff, T. W. Mandell, and A. W. Dick. “Where is Buprenorphine Dispensed to Treat Opioid Use Disorders? The Role of Private Offices, Opioid Treatment Programs, and Substance Abuse Treatment Facilities in Urban and Rural Counties.” The Milbank Quarterly, vol. 93, no. 3 (2015). Stieker, L. H., K. Comstock, S. Arechiga, J. Mena, M. Hutchins-Jackson, and K. Kelly. “Medication Assisted Treatment (MAT): A Dialogue With a Multidisciplinary Treatment Team and Their Patients.” Journal of Social Work Practice in the Addictions, vol. 13 (2013). Teruya, C., R. P. Schwartz, S. G. Mitchell, A. L. Hasson, C. Thomas, S. H. Buoncristiani, Y. Hser, K. Wiest, A. J. Cohen, N. Glick, P. Jacobs, and W. Ling. “Patient Perspectives on Buprenorphine/Naloxone: A Qualitative Study of Retention During the Starting Treatment with Agonist Replacement Therapies (START) Study.” Journal of Psychoactive Drugs, vol. 46, no. 5 (2014). Turner, L., S. P. Kruszewski, and G. C. Alexander. “Trends in the Use of Buprenorphine by Office-Based Physicians in the United States, 2003–2013.” The American Journal on Addictions, vol. 24 (2015). White, W. L., M. D. Campbell, C. Shea, H. A. Hoffman, B. Crissman, and R. DuPont. “Coparticipation in 12-Step Mutual Aid Groups and Methadone Maintenance Treatment: A Survey of 322 Patients.” Journal of Groups in Addiction & Recovery, vol. 8 (2013). Yarborough, B. H., S. P. Stumbo, D. McCarty, J. Mertens, C. Weisner, and C. A.
Green. “Methadone, Buprenorphine and Preferences for Opioid Agonist Treatment: A Qualitative Analysis.” Drug and Alcohol Dependence, vol. 160 (2016). | The abuse of prescription opioid pain relievers and illicit opioids, such as heroin, has contributed to increasing numbers of overdose deaths in the United States, and Centers for Disease Control and Prevention data show more than 28,000 opioid overdose deaths in 2014. Research has shown that MAT can more effectively reduce opioid use and increase treatment retention compared to treatment without medication. GAO was asked to review issues related to patient and provider use of MAT for opioid addiction. GAO examined (1) how federal laws and regulations apply when using medications to treat opioid addiction compared to using the same medications for pain management and (2) key factors that can affect access to MAT for opioid addiction. GAO reviewed federal laws and regulations pertaining to MAT medications and reviewed key documents from HHS and other sources. GAO also identified 53 articles from peer-reviewed and scholarly journals related to MAT for opioid addiction for further examination. GAO interviewed stakeholders from patient and provider groups; experts on the issue of addiction treatment; and officials at relevant federal agencies. GAO provided a draft of this report to the Office of National Drug Control Policy, HHS, and the Department of Justice. The Office of National Drug Control Policy agreed with the report's findings. The Office of National Drug Control Policy and HHS provided technical comments, which GAO incorporated as appropriate. The Department of Justice had no comments. The Department of Health and Human Services (HHS) has stated that addressing opioid abuse is a high priority and is promoting access to medication-assisted treatment (MAT)--an approach that combines behavioral therapy and the use of medications--to combat the problem.
Three medications are currently approved for use in MAT for opioid addiction--methadone, buprenorphine, and naltrexone. Methadone and buprenorphine are regulated like other controlled substances under the Controlled Substances Act (CSA) when used to treat pain and have additional requirements that apply when used to treat opioid addiction. The third medication--naltrexone--is not a controlled substance and is therefore not subject to the CSA. Methadone is a Schedule II controlled substance, which indicates a higher risk of abuse. Buprenorphine is a Schedule III controlled substance, with lower risk, and so generally has fewer requirements. For example, when used to treat pain, methadone generally may not be dispensed without a written or electronic prescription. In contrast, buprenorphine may be dispensed based on a written, electronic, or oral (phone) prescription. When these medications are used for opioid addiction treatment, the CSA and implementing regulations impose additional requirements for methadone and buprenorphine: Methadone may generally only be administered or dispensed within an opioid treatment program (OTP), as prescriptions for methadone cannot be issued when used for opioid addiction treatment. Buprenorphine may be administered or dispensed within an OTP and may also be prescribed by a qualifying practitioner who has received a waiver from the Substance Abuse and Mental Health Services Administration. Practitioners who received this waiver are limited in the number of patients they may treat for opioid addiction. In addition to laws and regulations, several key factors can affect patients' access to MAT for opioid addiction, according to articles from peer-reviewed and scholarly journals, documents GAO reviewed, and interviews with agency officials and experts. Specifically, through these sources GAO identified the following key factors: The availability of qualified practitioners and their capacity to meet patient demand for MAT.
For example, there were approximately 1,400 OTPs in 2016. However, sources GAO reviewed stated that OTPs are lacking in certain locations. Furthermore, some MAT practitioners may be operating at full capacity, leading to wait lists that can affect patients' access to MAT. The perceptions of MAT and its value among patients, practitioners, and institutions. Some practitioners do not believe that MAT is more effective than abstinence-based treatment--when patients are treated without medication--despite science-based evidence, and there are concerns that the medications will be misused. The availability and limits of insurance coverage for MAT. Patients with no insurance coverage for MAT may face prohibitive out-of-pocket costs that may limit their access to it, and coverage for MAT varies for those individuals with insurance. In some cases, state Medicaid programs limit the length of time that patients can use MAT medications. |
To understand the trends in the size of tax expenditures, it is helpful to understand how tax expenditures are defined and how the different types affect taxpayer liability. For this report, we also provide an overview of the broad purposes of tax expenditures—one method the federal government can use to achieve national objectives—and a discussion of how tax expenditures interact with the federal budget. Tax expenditures are revenue losses—the amount of revenue that the government forgoes—resulting from federal tax provisions that grant special tax relief for certain kinds of behavior by taxpayers or for taxpayers in special circumstances. These provisions may, in effect, be viewed as spending programs channeled through the tax system and are classified in the U.S. budget by the same functional categories as other spending, such as energy and health. Tax expenditures are provisions that are exceptions to the “normal structure” of the individual and corporate income tax necessary to collect government revenues. Deciding whether an individual provision should be characterized as a tax expenditure is a matter of judgment, and disagreements about classification stem from different views about what should be included in the income tax base. As a practical matter, the term tax expenditure has been used in the federal budget for three decades, and the tax expenditure concept—while not precisely defined—is a valid representation of one tool that the federal government uses to allocate resources. Both the congressional Joint Committee on Taxation (JCT) and Treasury’s Office of Tax Analysis annually compile a list of tax expenditures and estimates of their cost. (App. III provides additional information on how tax expenditures are measured and reported and perspective on differences among the lists of tax expenditures reported by JCT and Treasury.) Treasury’s tax expenditure estimates are included by OMB as an informational supplement to the annual federal budget.
The revenue loss is estimated for each tax expenditure separately by comparing the revenue raised under current law with the revenue that would have been raised if the single provision did not exist, assuming all other parts of the tax code remain constant and taxpayer behavior is unchanged. Revenue loss estimates are intended to provide information about the value of tax expenditures. However, tax expenditure estimates do not incorporate any behavioral responses and thus do not necessarily represent the exact amount of revenue that would be gained if a specific tax expenditure were repealed. For example, when the consumer interest deduction was phased out gradually beginning in 1987, some taxpayers shifted to interest-deductible home equity loans to finance consumption, thereby affecting the revenue gain from eliminating the consumer interest deduction. In addition to estimating revenue loss, Treasury also measures tax expenditures on an outlay-equivalent basis. Outlay-equivalent estimates represent the amount of budget outlays that would be required if the government were to provide taxpayers with the same after-tax income they receive through the tax expenditure. Outlay-equivalent estimates are often higher than revenue loss estimates to reflect that a comparable outlay program could result in additional taxable income to recipients. Outlay-equivalent estimates are useful to compare tax expenditures and other parts of the federal budget. For example, the outlay-equivalent estimate for the tax exclusion for housing and meal allowances for military personnel reflects the additional pretax income that military personnel would have to be paid to raise their income after federal taxes by the amount of the tax expenditure. The outlay-equivalent estimate can be used to compare this tax expenditure with other outlays for defense compensation on a more consistent basis.
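The relationship between a revenue loss estimate and an outlay-equivalent estimate can be sketched with a small calculation. The function and figures below are illustrative assumptions, not Treasury's methodology: they simply gross up an after-tax benefit by the recipient's marginal rate, reflecting that a comparable outlay program could itself be taxable.

```python
def outlay_equivalent(after_tax_benefit, marginal_rate):
    """Taxable outlay needed to deliver a given after-tax benefit.

    If an outlay program's payment were taxed at marginal_rate, the
    payment must be grossed up so the recipient nets the same
    after-tax amount the tax expenditure provides.
    """
    return after_tax_benefit / (1 - marginal_rate)

# A tax exclusion worth $1,000 after tax to a taxpayer facing a 25%
# marginal rate would take a $1,333.33 taxable outlay to replicate,
# which is why outlay-equivalent estimates often exceed revenue losses.
print(round(outlay_equivalent(1000, 0.25), 2))  # 1333.33
```

The gross-up disappears when the replacement outlay would be untaxed, in which case the two measures coincide.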
The Congressional Budget and Impoundment Control Act of 1974 lists six types of tax expenditures: exclusions, exemptions, deductions, credits, preferential tax rates, and deferral of tax liability. Some tax expenditures apply only to individual taxpayers, such as deductions and exclusions for employer-provided contributions for medical insurance, and some only to corporate taxpayers, such as a tax credit for corporations doing business in U.S. possessions. Other tax expenditures, such as accelerated depreciation, apply both to corporations and to individual taxpayers with income from businesses such as sole proprietorships or partnerships. Table 1 shows examples of each type of tax expenditure and the taxpayer group that may claim a particular type. Figure 1 illustrates how tax expenditures appear on the U.S. Individual Income Tax Return (Form 1040). Exclusions are those items of income that would otherwise constitute a part of the taxpayer’s gross income, but are excluded under a specific provision of the tax code. Exclusions generally do not appear on the Form 1040, and excluded income is not reflected in total reported income. For example, the income tax exclusion of employer contributions to medical insurance premiums and medical care is not reported in a taxpayer’s wages or salaries. An exemption, such as the parental personal exemption for students over age 19 but under age 24, is a reduction in taxable income offered to taxpayers because of their status or circumstances. Deductions are amounts subtracted from income in arriving at taxable income. Deductions claimed before the adjusted gross income line on the Form 1040, such as the tuition and fees deduction (this appears on line 27 in fig. 1), are sometimes called “above-the-line” deductions. Taxpayers may also claim “below-the-line” deductions after the adjusted gross income line; to do so, taxpayers must itemize their deductions.
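The distinction between exclusions, above-the-line deductions, below-the-line deductions, and exemptions can be traced through a simplified version of the Form 1040 arithmetic. All dollar amounts below are hypothetical, and the sketch omits the many real-world rules (phase-outs, limits, credits) that a real return involves:

```python
def taxable_income(gross_income, above_line_deductions,
                   itemized_deductions, standard_deduction,
                   exemptions):
    """Simplified Form 1040 flow, for illustration only.

    Excluded income never appears in gross_income at all;
    above-the-line deductions produce adjusted gross income (AGI);
    the larger of itemized or standard deductions, plus exemptions,
    then come off AGI to reach taxable income.
    """
    agi = gross_income - above_line_deductions
    below_line = max(itemized_deductions, standard_deduction)
    return max(0, agi - below_line - exemptions)

# $60,000 of reported income, a $2,000 above-the-line deduction
# (e.g., tuition and fees), $9,000 itemized vs. a $10,000 standard
# deduction (the taxpayer takes the standard), and a $3,000 exemption:
print(taxable_income(60000, 2000, 9000, 10000, 3000))  # 45000
```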
Each type of tax expenditure creates tax savings in different ways and, consequently, reduces federal revenues in different ways. The amount of tax relief per dollar that a taxpayer receives using an exclusion, exemption, or deduction depends on the taxpayer’s marginal tax rate. Generally, the higher the taxpayer’s marginal tax rate, the greater the tax savings from these tax expenditure types. Tax credits reduce tax liability dollar-for-dollar, so the value of a credit is the same regardless of the taxpayer’s marginal tax rate. A nonrefundable tax credit can be used to reduce current year tax liability to zero, and a refundable credit in excess of tax liability results in a cash refund. For preferential tax rates, which reduce the tax rate on some forms of income such as capital gains, the tax savings depend on the difference between the preferential rate and a taxpayer’s marginal tax rate. By allowing taxpayers to reduce current tax liability by delaying recognition of some income or accelerating some deductions otherwise attributable to future years, a tax deferral shifts the timing of tax payments and, in effect, provides an interest-free “loan” to the taxpayer. The benefit from a deferral is even greater if the taxpayer expects to face a lower tax bracket in the future. A lower-income taxpayer—with no net annual income or with no current tax liability after claiming the standard deduction and any personal exemptions—would not directly benefit from most tax expenditures other than refundable credits. Some techniques have been used to limit the benefits that taxpayers may receive from individual tax expenditures or groups of them. Congress has controlled the amount of revenue forgone for some tax expenditures by adopting provisions to restrict taxpayers’ eligibility for benefits. For example, the mortgage interest deduction is limited to interest on debt up to $1 million to buy, build, or improve first and second homes and up to $100,000 in home equity debt.
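The per-dollar value rules described above can be illustrated numerically. The rates and dollar amounts below are hypothetical:

```python
def deduction_savings(amount, marginal_rate):
    # The value of an exclusion, exemption, or deduction scales
    # with the taxpayer's marginal rate.
    return amount * marginal_rate

def credit_savings(credit, tax_liability, refundable=False):
    # A credit offsets liability dollar-for-dollar; only a refundable
    # credit can exceed liability and generate a cash refund.
    return credit if refundable else min(credit, tax_liability)

def preferential_rate_savings(income, marginal_rate, preferential_rate):
    # Savings equal the rate differential applied to the favored income.
    return income * (marginal_rate - preferential_rate)

# A $1,000 deduction is worth $350 to a 35%-bracket taxpayer but only
# $150 at a 15% rate; a $1,000 nonrefundable credit is capped by a
# $600 liability; $1,000 of gains taxed at 15% instead of 35% saves $200.
print(round(deduction_savings(1000, 0.35)))                # 350
print(round(deduction_savings(1000, 0.15)))                # 150
print(credit_savings(1000, 600))                           # 600
print(round(preferential_rate_savings(1000, 0.35, 0.15)))  # 200
```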
Aggregate itemized deductions are reduced by 3 percent of the amount of a taxpayer’s adjusted gross income that exceeds a certain threshold, eliminating 3 cents of itemized deductions for each dollar of income above the threshold for higher-income taxpayers. Some tax expenditures, such as tax-exempt private-activity bonds issued by each state, are subject to volume caps limiting the aggregate amount of benefits available. The alternative minimum tax (AMT) also affects tax expenditures and the amount of the revenue loss for the federal government. The AMT is intended to ensure that taxpayers with income over certain thresholds pay some income tax, no matter how much they claim in certain deductions and credits. Under the AMT, taxpayers may have to add back some tax expenditures that they could otherwise claim under the regular tax system, such as deductions for state and local taxes and home equity loan interest, and they may have to include as income certain tax-exempt bond interest that is excluded under the regular tax system. In addition to raising revenue, the federal income tax has long been used as a tool for accomplishing social and economic objectives. The general objectives of tax expenditures are to encourage particular types of activities (such as saving for retirement, promoting home ownership, investing in certain sectors, or funding research and development) and provide economic relief to selected groups of taxpayers (such as the elderly, the blind, and those with children). Another objective of tax expenditures may also be to adjust for differences in individuals’ ability to pay taxes. For example, if two taxpayers have the same income, but one has a catastrophic illness and costly medical bills (or large casualty and theft losses), the other taxpayer is judged better able to pay income taxes. Some tax expenditures may be enacted to compensate for other provisions of the tax code. 
For example, advocates of reduced tax rates on capital gains often explain the special treatment of capital gains income as offsetting, in part, the assessment of taxes on the nominal, rather than the real, value of capital gains. The rationale and reasons for a particular tax expenditure may change over time. For example, according to the Congressional Research Service (CRS), the income tax code instituted in 1913 contained a deduction for all interest paid. No distinction was made between business and personal interest expenses, although most interest payments at that time represented business expenses. The legislative history does not indicate that the deductibility of mortgage interest was originally intended to encourage home ownership or subsidize the housing industry. However, over time, encouraging home ownership, stimulating residential construction and maintenance, and encouraging families to save and invest in a major financial asset have all been offered as justifications for the mortgage interest deduction. The tax expenditure tool may substitute for a federal spending program in that the federal government “spends” some of its revenue on subsidies by forgoing taxation on some income. Certain activities may be cheaper and simpler to subsidize through the tax code than by setting up a separate program using a different tool. For example, the incremental administrative and compliance costs to deliver the tax credit for child and dependent care expenses may be relatively low compared to the costs of setting up a separate system for processing child care applications and sending vouchers to those eligible. The administrative infrastructure already exists for the government to collect money from and remit money to over 131 million individual tax filers and 6 million corporations via the tax system administered by the IRS.
In concept, the costs to implement an income-based benefit program through the existing tax system could be lower than to set up separate spending programs to deliver these benefits. In some circumstances, tax expenditures may not be the best policy choice to deliver timely benefits or reach intended populations. For programs that seek to provide benefits within a given year, the annual income measure relevant for income tax purposes may not be the best way to target benefits. Relative to spending programs, tax expenditures are limited in their ability to directly provide benefits to nontaxpayers. For example, tax credits must be refundable to reach low-income individuals who do not pay taxes and otherwise would not be required to file tax returns. Tax expenditures generally do not deliver federal resources directly to state and local governments and tax-exempt nonprofit organizations. The charitable contribution deduction provides an incentive for individual and corporate taxpayers to donate to charitable, religious, educational, and health nonprofit organizations. The deduction, in effect, is a federal grant to the donor that reduces the out-of-pocket cost of giving. The itemized deduction for state and local taxes directly increases an individual taxpayer's after-tax income and thus reduces the after-tax price of state and local taxes. State and local governments receive some of the benefit to the extent that taxpayers may be more willing to pay state and local taxes. Tax expenditures are not necessarily an either/or alternative to federal spending and may be used in combination with federal spending and strategies to achieve national objectives. For example, the HOPE and Lifetime Learning tax credits are used with federal education assistance, such as student loans, all of which help individuals fund higher education.
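The "federal grant to the donor" effect of the charitable contribution deduction described above comes down to simple arithmetic; the rate and gift size here are hypothetical:

```python
def after_tax_cost_of_gift(gift, marginal_rate):
    # The deduction returns gift * marginal_rate in reduced tax,
    # so the donor's out-of-pocket cost is the remainder.
    return gift * (1 - marginal_rate)

# For an itemizing taxpayer in a 25% bracket, a $100 charitable gift
# costs $75 out of pocket; the other $25 is forgone federal revenue,
# in effect a federal subsidy delivered through the tax code.
print(after_tax_cost_of_gift(100, 0.25))  # 75.0
```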
Many tax expenditures are comparable to entitlement programs for which spending is determined by rules for eligibility, benefit formulas, and other parameters rather than by Congress appropriating specific dollar amounts each year. With some exceptions, tax expenditures typically make funds (through reduced taxes) available to all qualified claimants, regardless of how many taxpayers claim the tax expenditures, how much they claim collectively, or how much federal revenue is reduced by these claims. Some tax expenditures resemble other policy tools, such as grants or direct loans. A few tax expenditures are administered like grant programs, allowing for some administrative discretion over who receives funds. For the New Markets Tax Credit (NMTC), those seeking the credit must apply to the Community Development Financial Institutions (CDFI) Fund within Treasury and be chosen by a group of evaluators to receive the tax credit. Like a grant program, the NMTC has a maximum amount that can be allocated by CDFI. Tax expenditures in the form of deferrals resemble loans, because they allow taxpayers to postpone the time when income is recognized for tax purposes or to accelerate the deduction of expenses, both of which effectively lower the amount of income currently subject to tax. Deferrals can result in higher taxes in later years, when taxpayers recognize deferred income or have fewer deductions to claim than they otherwise would have had; the amount of the deferral is, in effect, analogous to a government loan. Tax expenditures, by definition, reduce federal revenue and thus have implications for income tax rates, federal spending, and the federal budget. To obtain a given amount of revenue, tax expenditures require overall statutory tax rates to be higher. Otherwise, revenues forgone through tax expenditures reduce the revenue base available for funding federal spending programs.
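The loan-like character of a deferral can be quantified with a present-value calculation. The discount rate, horizon, and tax amount below are hypothetical, and the sketch assumes the taxpayer faces the same tax rate in the later year:

```python
def deferral_benefit(tax_deferred, years, discount_rate):
    """Present-value gain from paying a fixed tax bill later.

    The tax is still owed in full, but a payment deferred `years`
    into the future is worth less in today's dollars, making the
    deferral akin to an interest-free loan of the tax amount.
    """
    pv_of_later_payment = tax_deferred / (1 + discount_rate) ** years
    return tax_deferred - pv_of_later_payment

# Deferring a $10,000 tax bill for 5 years at a 5% discount rate is
# worth about $2,165 in present-value terms to the taxpayer.
print(round(deferral_benefit(10000, 5, 0.05), 2))  # 2164.74
```

If the taxpayer also expects a lower bracket in the later year, the gain is larger still, which the sketch deliberately ignores.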
From a budgetary perspective, most tax expenditures are comparable to mandatory spending for entitlement programs, in that no further action is required to provide resources for tax expenditures. Tax expenditures do not compete overtly in the annual budget process and, in effect, receive a higher funding priority than discretionary spending subject to the annual appropriations process. Revenues forgone through tax expenditures—unless offset by increased taxes or lower spending—increase the unified budget deficit and federal borrowing from the public (or reduce the unified budget surplus available to reduce debt held by the public). As noted previously, both the executive and legislative branches (through Treasury and JCT, respectively) publish annual lists of tax expenditures and the associated revenue loss, but budgetary decisions generally are not based on these lists. Like any spending program, newly proposed tax expenditures and those subject to expiration are, to some extent, subject to scrutiny, but most tax expenditures are not subject to reauthorization. Tax expenditures may be indirectly controlled to the extent that the Congress aims to achieve any revenue target. The tax committees consider tradeoffs between tax expenditures, tax rates, and other parts of the tax code. In concept, eliminating or limiting an existing tax expenditure—like an existing spending program—would free up resources to reduce tax rates, increase federal spending or other tax expenditures, reduce the deficit, or produce some combination thereof. Conversely, adding a new tax expenditure, expanding an existing tax expenditure, or extending an expiring tax expenditure reduces the resources available to reduce tax rates, fund federal spending and tax expenditures, or reduce the deficit. The overall effect on the unified budget position would depend on the extent to which any change in tax expenditures is offset by adjustments to the tax code or other spending programs.
Whether gauged in absolute numbers, by revenues forgone, or in comparison to federal spending or the size of the economy, tax expenditures have been substantial over the last three decades. Between fiscal years 1974 and 2004, tax expenditures doubled in number, and the sum of estimated revenue losses associated with tax expenditures tripled, most of which was accounted for by tax expenditures that were used by individual taxpayers. Since 1981 when outlay-equivalent estimates were first available, the sum of the outlay-equivalent estimates for tax expenditures has been similar in magnitude to discretionary spending, and this sum exceeded total discretionary spending for most years during the last decade. As a share of the U.S. economy, the sum of tax expenditure outlay-equivalent estimates remained relatively stable at about 7.5 percent of GDP since the last major tax reform legislation. Summing the individual tax expenditure estimates is useful for gauging the general magnitude of the federal revenue involved, but it does not take into account possible interactions between the individual tax code provisions. Because of this limitation, sums of tax expenditure estimates must be interpreted carefully. The JCT and Treasury estimate the revenue loss from each tax expenditure separately, assuming that the rest of the tax code remains unchanged. Neither JCT nor Treasury adds tax expenditure estimates, because summing them does not take into account possible interaction effects among the provisions. If two or more tax expenditures were estimated simultaneously, the total change in federal revenue could be smaller or larger than the sum of the amounts shown for each item separately as a result of interactions among the tax expenditure provisions. For example, the repeal of an itemized deduction tax expenditure might cause more taxpayers to take the standard deduction instead of itemizing. 
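A toy example can show how such interactions make separately estimated revenue losses non-additive. All figures are hypothetical and far simpler than the estimates JCT and Treasury produce:

```python
def claimed_deduction(itemized, standard):
    # A taxpayer itemizes only when the itemized total beats the
    # standard deduction.
    return max(sum(itemized), standard)

def revenue_gain_from_repeal(itemized, standard, rate, repealed):
    # Revenue gained by repealing the deductions at the given
    # indices, holding the rest of the (toy) tax code fixed.
    before_saving = claimed_deduction(itemized, standard) * rate
    kept = [d for i, d in enumerate(itemized) if i not in repealed]
    after_saving = claimed_deduction(kept, standard) * rate
    return before_saving - after_saving

# Hypothetical taxpayer: two $6,000 itemized deductions, a $10,000
# standard deduction, and a 25% marginal rate.
itemized, standard, rate = [6000, 6000], 10000, 0.25

# Estimated separately, repealing either deduction alone gains $500,
# so the two separate estimates sum to $1,000.
separate = sum(revenue_gain_from_repeal(itemized, standard, rate, {i})
               for i in range(2))

# Repealed simultaneously, the gain is only $500 in total: once the
# taxpayer falls back to the standard deduction, repealing the
# second deduction raises nothing further.
simultaneous = revenue_gain_from_repeal(itemized, standard, rate, {0, 1})
print(separate, simultaneous)  # 1000.0 500.0
```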
However, the revenue loss estimate for any single tax expenditure among the itemized deductions does not reflect this potentially sizeable interaction with the standard deduction. Eliminating several itemized deductions at the same time could cause significant numbers of taxpayers to take the standard deduction, and thus, the decrease in revenue could be less than the sum of the estimated revenue loss estimates for each itemized deduction. To demonstrate the magnitude of possible interactions and the potential implications for summing tax expenditures, Treasury’s Office of Tax Analysis illustrated for us the repeal of five itemized deductions. Based on tax year 2002 data, the sum of the five separate tax expenditure estimates, each calculated assuming the rest of the tax code was unchanged, was over $175 billion. Assuming the simultaneous repeal of all five provisions, Treasury estimated the revenue loss after interaction totaled $131 billion—about 25 percent less than the sum of the separate estimates. According to Treasury, this example cannot be generalized given that some groups of tax provisions have substantial interactions and others do not. For all tax expenditures, the magnitude of the difference between the sum of the estimates and an estimate for all tax expenditures simultaneously is not known. Additionally, tax expenditure estimates developed by Treasury and JCT do not take into account possible behavioral responses by taxpayers if a tax expenditure were repealed. For example, if the HOPE scholarship tax credit—a tax credit for the first 2 years of post-secondary education—were eliminated, taxpayers who would have used that tax credit may instead opt for the Lifetime Learning tax credit or other tax subsidies aimed at higher education. 
In contrast, certain kinds of behavioral responses, such as changes in the timing of transactions, income recognition, or shifts between sectors of the economy, are taken into account when JCT and Treasury prepare revenue estimates for proposed legislation. Potential macroeconomic effects, such as changes to GDP, are not reflected in tax expenditure revenue loss estimates or in revenue estimates for proposed legislation. To some extent, the same kinds of challenges in interpreting tax expenditure estimates also exist in projecting the costs of spending programs. Budget line items generally do not reflect the actual budget savings to be gained by abolishing specific programs or groups of programs. For instance, eliminating all veterans’ benefits would reduce the federal budget by less than the amount currently spent on those programs because spending likely would increase in food stamps, Medicaid, and other entitlement programs. Although interaction effects also occur for spending programs, Treasury officials responsible for developing tax expenditure estimates told us that the bias in summing tax expenditure revenue loss estimates likely is greater than the bias for outlay projections. Whereas historical data are reported for federal budget receipts and outlays, the last available values for tax expenditures remain estimates. Treasury’s last reported re-estimates for past fiscal years reflect legislation enacted, prevailing economic conditions, and the latest taxpayer data available at the time of estimation. Projections of the future costs of tax expenditures are more uncertain than projections for future tax receipts or outlays because it is not known with certainty, even after the fact, how much was spent for any given tax expenditure. Despite the limitations in summing separate tax expenditure revenue loss and outlay-equivalent estimates, these are the best available data to measure the value of tax expenditures and make comparisons to other spending programs. 
Summing the estimates provides perspective on the use of tax expenditures as a policy tool and represents a useful gauge of the general magnitude of government subsidies carried out through the tax code. The estimates also can be used to compare tax expenditures to federal spending overall and by budget function. Other researchers also have summed tax expenditure estimates to help gain perspective on the use of this policy tool and examine trends in the aggregate growth of tax expenditure estimates over time. Between 1974 and 2004, tax expenditures reported by Treasury more than doubled in overall number from 67 to 146, and while some were dropped, considerably more were added. For 1974, Treasury listed 67 separate exclusions, exemptions, deductions, credits, preferential tax rates, and deferral of tax liability as tax expenditures. In 1986, Treasury reported 115 tax expenditures, and by 2004 Treasury’s list grew to 146 tax expenditures. Figure 2 shows the rise of the overall number of tax expenditures over the last three decades. (App. IV contains a compilation of all tax expenditures reported by Treasury between 1974 and 2004.) Of the 146 tax expenditures listed by Treasury in the President’s fiscal year 2006 budget, 32 percent were on the first list in 1974, 23 percent were added between 1975 and 1986, and 45 percent were added since 1986. Figure 3 shows the duration of tax expenditures listed by Treasury. Of the 67 tax expenditures listed in 1974, 21 had been dropped over the period, leaving 46 remaining on the list in 2004. Since 1974, 143 tax expenditures were added to Treasury’s list, although 43 of them have since dropped from the list over the period. Of the 100 added since 1974 and still reported in fiscal year 2004, 66 were first reported for 1986 or later. The number of tax expenditures reported by Treasury has changed over time for several reasons. Some provisions expired or were repealed; others were merged with another tax expenditure. 
For example, until expiration on December 31, 1984, state and local governments were allowed to issue tax-exempt obligations to finance the purchase of mass-commuting vehicles for lease to government transit agencies; the Tax Reform Act of 1986 repealed the investment tax credit; and the tax expenditure that provided 5-year amortization for pollution control was merged into the investment tax credit by the Tax Reform Act of 1976. Legislation also added new tax expenditures over time, such as the child tax credit created by the Taxpayer Relief Act of 1997. Some tax expenditures split into additional listings to reflect legislation expanding existing tax expenditures. For example, Treasury began listing the net exclusion of pension contributions and earnings with separate estimates for employer-sponsored defined-benefit and 401(k) pension plans following 2001 legislation increasing the contribution limits for 401(k) accounts. Finally, changes in the baseline used by Treasury to identify tax expenditures may have caused some tax expenditures to drop off its list, while adding new tax expenditure listings. For example, Treasury briefly dropped the exclusion of scholarship and fellowship income from its fiscal year 1982 list because it was not considered a tax expenditure under the baseline that Treasury used that year. As the overall number reported by Treasury doubled, the sum of the estimated revenue loss due to tax expenditures, adjusted for inflation, tripled from approximately $243 billion for 1974 to $728 billion for 2004. Figure 4 shows the sum of Treasury's revenue loss estimates over the past three decades. From 1974 to 1986, revenue losses grew nearly two-and-one-half-fold, from approximately $243 billion for 1974 to $598 billion for 1986 (in 2004 dollars). Over the next 2 years, the sum of the revenue losses decreased by about 28 percent to approximately $433 billion for 1988.
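The growth and decline percentages above follow from simple ratio arithmetic on the inflation-adjusted amounts stated in the text; this is a sketch, not source data:

```python
# Checking the growth and decline of summed revenue loss estimates,
# using the 2004-dollar amounts cited in the text.
loss_1974, loss_1986, loss_1988 = 243, 598, 433  # $ billions, 2004 dollars

growth_ratio = loss_1986 / loss_1974                  # nearly 2.5 times
decline_share = (loss_1986 - loss_1988) / loss_1986   # drop from the 1986 peak

print(round(growth_ratio, 2))  # 2.46
print(f"{decline_share:.0%}")  # 28%
```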
From 1989 through 1997, however, revenue losses increased by about 16 percent to approximately $547 billion. From 1998 to 2002, the sum of the estimated revenue loss increased by an average of about $41 billion per year, peaking at about $783 billion for 2002. The sum of the revenue loss estimates declined to approximately $728 billion in 2004. The revenue loss estimates do not reflect the outlays for the refundable portion of certain tax credits. Summing these outlays along with the sum of the revenue loss estimates provides a more complete picture of the aggregate cost of tax expenditures throughout the period, as shown in figure 5. The sum of the estimated revenue losses and outlays associated with tax expenditures totaled about $770 billion for fiscal year 2004. Trends in the sum of tax expenditures are due, at least in part, to legislation affecting the number or scope of tax expenditures or modifying tax rates or other basic structural features of the tax code. During this period, tax legislation directly influenced the sum of tax expenditure estimates by repealing or limiting some tax expenditures, enacting new ones, and extending the life of expiring tax expenditures. Even without changes to tax expenditures, legislation affecting tax rates or the tax structure affects the sum of the tax expenditure estimates. When a taxpayer uses a tax expenditure, his or her effective tax rate is reduced, because some part of his or her income remains untaxed or is taxed at a lower rate. When statutory rates increase, a taxpayer's ability to avoid tax on a portion of income is worth more; consequently, tax expenditures are worth more. Likewise, when rates decrease, tax expenditures are worth relatively less. Figure 6 highlights tax legislation enacted since 1974 that likely influenced the aggregate revenue losses due to tax expenditures.
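The rate sensitivity described above can be illustrated with a short numeric sketch; the excluded amount and the marginal rates below are hypothetical inputs chosen for illustration, not figures from the report:

```python
# Illustration: the value of a tax expenditure that excludes income from
# tax scales with the taxpayer's marginal rate. All inputs are hypothetical.
excluded_income = 10_000  # dollars shielded from tax by an exclusion

for marginal_rate in (0.15, 0.28, 0.396):
    tax_avoided = excluded_income * marginal_rate
    print(f"marginal rate {marginal_rate:.1%}: exclusion worth ${tax_avoided:,.0f}")
```

The same mechanics explain why across-the-board rate cuts shrink the sum of revenue loss estimates even when no tax expenditure is repealed, and why rate increases have the opposite effect.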
The sum of estimated revenue losses declined following the Tax Reform Act of 1986, primarily because of individual and corporate marginal tax rate reductions, which indirectly scaled back the value of all but a few tax credits. The 1986 act, the most recent comprehensive tax reform, also eliminated or limited the scope of various tax expenditures directly, for example, by repealing the investment tax credit, phasing out the interest deduction for consumer credit over 5 years, and limiting the expensing of the intangible drilling costs for oil and gas to successful, domestic wells. While materially reducing the number and scope of tax expenditures broadened the tax base, the act resulted in no net change in federal revenue because of the lower tax rates. In contrast, the sum of estimated revenue losses increased following the Omnibus Budget Reconciliation Act of 1993, which directly increased several tax expenditures—for example, extending the EITC to single workers with no children earning $9,000 or less—and indirectly increased the value of other tax expenditures by increasing the top individual income tax rates and adding a third rate. The growth in the sum of estimated revenue losses accelerated following the Taxpayer Relief Act of 1997, which expanded several tax expenditures—for example, increasing eligibility for traditional individual retirement accounts—and created an assortment of new tax expenditures, including the child tax credit and postsecondary education tax incentives. The Economic Growth and Tax Relief Reconciliation Act of 2001 reduced tax rates again and also increased the individual AMT exemption. The influence on the aggregate trend is less apparent for legislation expanding or adding tax expenditures while also reducing tax rates. Changes in economic conditions and in the baseline tax system can also affect revenue loss estimates for tax expenditures, making them differ from year to year.
For example, rising housing prices may cause the estimated cost of the mortgage interest deduction to increase as homeowners finance larger mortgages or take out equity with home equity loans. In addition, changes in tax expenditure baselines could cause estimates to differ from year to year. For example, for fiscal years 2003 and 2004, Treasury redefined accelerated depreciation tax expenditures so that they are calculated relative to a replacement cost basis baseline rather than the historic cost basis previously used. This redefinition had the effect of reducing the estimated size of the accelerated depreciation tax expenditures. The sum of estimated revenue losses due to tax expenditures for individual income taxpayers accounted for substantially more of the revenue loss between 1974 and 2004 than corporate tax expenditures, as shown in figure 7. The sum of revenue loss estimates for tax expenditures that arise under the individual income tax increased from approximately $187 billion for 1974 to $487 billion for 1987 (in 2004 dollars). After decreasing to approximately $363 billion for 1988, the sum gradually increased to a high of approximately $688 billion for 2002 and then declined in 2003 and 2004. On average over the entire period, revenue loss estimates for individual income taxpayers accounted for about 83 percent of the sum of revenue loss estimates per year. While estimated revenue losses for all tax expenditures tripled, the sum of revenue loss estimates for corporate tax expenditures increased from approximately $57 billion for fiscal year 1974 to a high of about $116 billion in 1984 (in 2004 dollars). After 1984, the sum dropped back to approximately $57 billion in 1992 and increased slightly over the rest of the period, with some fluctuation between years. In 2004, revenue loss estimates for tax expenditures that arise under the corporate income tax accounted for 11 percent of the sum of revenue losses due to all tax expenditures.
At about 10 percent of total federal receipts, corporate income taxes also accounted for a smaller share than individual income taxes. The sum of revenue loss estimates due to individual income tax expenditures is primarily attributable to a small number of large tax expenditures. The 14 tax expenditures listed in table 2—each with an annual revenue loss estimated at $20 billion or more—accounted for about 75 percent of the sum of revenue losses for fiscal year 2004. Ten of the 14 largest tax expenditures focused entirely on individual taxpayers, and the other 4 were available to both individuals and corporations. Most of the largest tax expenditures are long-standing ones, and only 2 of the 14 have been added to the tax code since 1986. The child tax credit, enacted in 1997, is among the largest tax expenditures based on its estimated revenue losses alone, not counting associated outlays of $8.9 billion in fiscal year 2004. With revenue losses estimated at $4.9 billion, the EITC does not appear on this list; if $33.1 billion in associated outlays were included, this refundable credit would rank among the largest tax expenditures. Tax expenditure revenue loss estimates reflect federal income tax revenue forgone and do not account for provisions that exclude certain earnings from payroll taxes. For example, the income tax exclusion for health care not only permits the value of health insurance premiums to be excluded from the calculation of employees' taxable earnings for income taxes but also excludes the value of the premiums from the calculation of Social Security and Medicare payroll taxes for both employees and employers. Some researchers have estimated that these payroll tax revenue losses amount to more than half of the income tax revenue losses.
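As a rough numeric sketch, the cited 50 percent rule of thumb can be applied to Treasury's fiscal year 2004 income tax estimate for the health exclusion; the 50 percent ratio is the researchers' approximation, not a Treasury figure:

```python
# Combining income and payroll tax revenue losses for the exclusion of
# employer-provided health insurance, assuming payroll tax losses equal
# 50 percent of the income tax loss (the researchers' rough estimate).
income_tax_loss = 102.3  # Treasury's FY2004 estimate, $ billions
payroll_share = 0.50     # assumed ratio from the research cited

payroll_tax_loss = income_tax_loss * payroll_share
combined_loss = income_tax_loss + payroll_tax_loss

print(f"{combined_loss:.2f}")  # 153.45, i.e., roughly $153.5 billion combined
```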
If payroll tax revenue losses were 50 percent of the $102.3 billion in income tax revenue loss estimated by Treasury, the combined revenue loss associated with the exclusion of employer contributions for health insurance premiums would be $153.5 billion in 2004. The sum of tax expenditure outlay-equivalent estimates exceeded the amount of discretionary spending for most years during the last decade, as shown in figure 8. Outlay-equivalent estimates, introduced by Treasury in 1981, allow the value of a tax expenditure to be compared with a direct federal outlay. The sum of the outlay-equivalent estimates reported by Treasury was approximately $853 billion in 2004. Until 1987, the sum of outlay-equivalent estimates for tax expenditures was roughly the same magnitude as discretionary spending. From 1988 through 1995, the sum of tax expenditure outlay-equivalent estimates averaged about $104 billion (in 2004 dollars) less than annual discretionary spending. Beginning in 1996, the sum of tax expenditure outlay-equivalent estimates surpassed discretionary spending and averaged about $114 billion (in 2004 dollars) more than annual discretionary spending through 2003. However, in 2003, the sum of Treasury’s tax expenditure estimates declined markedly, and the sum of tax expenditure outlays fell below discretionary spending in fiscal year 2004. This decline may be due, at least in part, to changes in the way Treasury defined and measured several tax expenditures in these years. Just as the sum of tax expenditure outlay-equivalent estimates increased since the late 1990s, discretionary spending also increased over this period. Between 1996 and 2002, the sum of tax expenditure estimates increased by an average of approximately $46 billion annually, while discretionary spending increased by an average of $21 billion annually (in 2004 dollars). 
Mandatory spending—larger than the sum of tax expenditure estimates or discretionary spending—rose consistently over the period shown, by an average of $43 billion annually (in 2004 dollars). Figure 9 compares tax expenditures and federal outlays as a share of GDP as a way to measure the amount of federal spending through the tax code and other programs relative to the economy. As a share of the U.S. economy, the sum of tax expenditure outlay-equivalent estimates peaked at 10.9 percent of GDP in 1986. Since 1988, the sum of tax expenditure outlays has remained relatively stable at about 7.5 percent of GDP. Over the period shown, mandatory spending also was fairly constant as a share of the economy, at an average of 12.7 percent of GDP. As a share of the economy, discretionary spending declined from 10.1 percent of GDP in 1981 to 6.3 percent in 1999 and 2000, with some fluctuation between the years. In recent years, discretionary spending has grown faster than the economy, increasing to 7.8 percent of GDP in fiscal year 2004. Averaging about 18.0 percent of GDP from the 1980s through the early 1990s, federal receipts steadily rose to 20.9 percent of GDP in 2000 and have since declined to 16.3 percent of GDP in fiscal year 2004. With total federal outlays—including mandatory and discretionary spending plus net interest—reaching 19.9 percent of GDP, the federal unified budget deficit amounted to 3.6 percent of GDP ($412 billion) in fiscal year 2004. The on-budget deficit in fiscal year 2004 amounted to 4.9 percent of GDP ($567 billion). Tax expenditures span almost all federal mission areas, but their relative size differs across budget functions. To gauge the relative role of tax expenditures, the sum of tax expenditure outlay-equivalent estimates and federal outlays can be compared to total spending by budget function. For 2004, Treasury reported tax expenditures for 16 of 20 budget functions.
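The fiscal year 2004 deficit shares can be reproduced with a back-of-the-envelope calculation; the GDP figure below is an assumed round number consistent with the stated percentages, not a number taken from the report:

```python
# Deriving FY2004 deficit-to-GDP shares from the dollar amounts in the
# text, assuming GDP of roughly $11.5 trillion (an assumption).
gdp = 11_500             # $ billions, assumed fiscal year 2004 GDP
unified_deficit = 412    # $ billions
on_budget_deficit = 567  # $ billions

print(f"unified deficit:   {unified_deficit / gdp:.1%}")    # 3.6%
print(f"on-budget deficit: {on_budget_deficit / gdp:.1%}")  # 4.9%
```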
Five of the functions accounted for 91 percent of the sum of the tax expenditure outlay-equivalent estimated dollar amounts in 2004—commerce and housing credit; education, training, employment, and social services; income security; health; and general government, as shown in figure 10. (See app. III for a list of tax expenditures reported for 2004 by budget function.) For the most part, these same five budget functions accounted for the largest percentage of total outlay-equivalent estimates over time, although the relative size of the estimated outlay-equivalent dollar amounts for the five budget functions varied somewhat over the period shown. For example, the health and the education, training, employment, and social services budget functions more than doubled between 1986 and 2002 (in 2004 dollars). The sum of the tax expenditure outlay-equivalent estimates was greater than what the federal government spends in discretionary and mandatory spending for some budget functions. As shown in figure 11, the sum of the tax expenditure outlay-equivalent estimates exceeded federal outlays for three budget functions: energy, commerce and housing credit, and general government. Outlay-equivalent estimates for tax expenditures in the commerce and housing credit budget function totaled $300 billion for 2004, while budget outlays for that function totaled $5 billion. Seven of the 14 largest tax expenditures, listed in table 2 with revenue losses exceeding $20 billion in 2004, were reported under the commerce and housing credit budget function. The mortgage interest deduction—the second largest single tax expenditure in fiscal year 2004—had an outlay-equivalent estimate of $61.5 billion, compared to $45 billion in outlays for the Department of Housing and Urban Development, which is responsible for, among other things, mortgage credit and housing assistance programs.
Various tax expenditures for accelerated depreciation and capital gains listed under the commerce and housing credit budget function also provide incentives for a wide range of different investments that can affect other federal mission areas. The general government budget function included two of the largest tax expenditures—the deduction of state and local income and sales taxes, and the exclusion of interest on public purpose state and local bonds—which together accounted for about $71.5 billion in tax expenditure outlays. As figure 11 shows, the sum of outlay-equivalent estimates for tax expenditures was nearly the same magnitude as outlays in two budget functions: international affairs and education, training, employment, and social services. Within the education, training, employment, and social services budget function, the sum of outlay-equivalent estimates of the tax expenditures represented 49 percent of the total federal support. This budget function includes two of the largest tax expenditures—the child tax credit and charitable contributions other than for health. The sum of the outlay-equivalent estimates for tax expenditures was substantially less than total outlays in the health and income security budget functions. The income tax exclusion for employer-provided health care—the largest single tax expenditure—accounted for 12 percent of the sum of tax expenditure outlay-equivalent estimates and represented about 27 percent of total federal support in the health function, which includes Medicaid. Outlays in the income security function include mandatory outlays refunded under the EITC and child tax credit. No tax expenditures are reported by Treasury for two budget functions: administration of justice and Medicare. Although tax expenditures represent a substantial federal commitment of resources, little progress has been made in the Executive Branch to increase the transparency of and accountability for tax expenditures.
The entire set of tools the federal government can use to address national objectives—including discretionary and mandatory spending, tax provisions, and loans and loan guarantees—should be subject to periodic reviews and reexamination to ensure that they are achieving their intended purposes and designed in the most efficient and effective manner. The nation's current and projected fiscal imbalance provides an additional impetus for engaging in such a review and reassessment. Tax expenditures may not always be efficient, effective, or equitable, and consequently, information on these attributes can help policymakers make more informed decisions as they adapt current policies in light of our fiscal challenges and other overarching trends. In addition, some tax expenditures, at least as currently designed, may serve to exacerbate other key private sector and public policy challenges (e.g., controlling health care costs). To review tax expenditures, information is needed to assess economic efficiency, effectiveness, distributional equity, and administration and compliance costs, although data and methodological challenges may impede studies of some tax expenditures. Over the past decade, the Executive Branch has made little progress in integrating tax expenditures in the budget presentation and review processes that apply to spending programs, as we recommended in 1994. Simply put, our nation's fiscal policy is on an unsustainable course. Long-term simulations by GAO, the Congressional Budget Office (CBO), and others show that over the long term we face large, escalating, and persistent deficits due primarily to known demographic trends and rising health care costs. This unsustainable fiscal path will gradually erode the nation's economy and increasingly constrain the federal government's capacity to address emerging challenges and opportunities.
The long-term fiscal challenge is too big to be solved by economic growth alone or by making modest changes to existing spending and tax policies, including tax expenditures. In addition, the long-term fiscal challenge makes it all the more important to ensure all major federal spending and tax programs and policies—including tax expenditures—are efficient, effective, and relevant. The revenues forgone through tax expenditures either reduce resources available to fund other federal activities or require higher tax rates to raise a given amount of revenue. Our long-term simulations illustrate the magnitude of fiscal challenges we will face in the future. Figures 12 and 13 present these simulations under two different sets of assumptions. In figure 12, we begin with CBO’s August 2005 baseline—constructed according to the statutory requirements for that baseline. Consistent with these requirements, this simulation assumes that discretionary spending grows with inflation for the first 10 years, and that tax cuts which are currently scheduled to expire will expire. After 2015, discretionary spending is assumed to grow with the economy, and revenue is held constant as a share of GDP at the 2015 level. In figure 13, only two assumptions are changed: (1) discretionary spending is assumed to grow with the economy rather than merely with inflation for the entire period (not just after 2015), and (2) all tax cuts which are currently scheduled to expire are made permanent. For both simulations, Social Security and Medicare spending is based on the 2005 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Medicaid spending is based on CBO’s December 2003 long-term projections under mid-range assumptions. 
Both of these simulations illustrate that, absent policy changes on the spending or revenue side of the budget, the growth in federal retirement and health entitlements will encumber an escalating share of the government's resources. Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenue may be adequate to pay little more than interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both combined—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, making no changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of federal taxes in the future, and that seems both unrealistic and inappropriate. Accordingly, substantive reform of Social Security and the major health programs remains critical to recapturing our fiscal flexibility. While Social Security and Medicare dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of programs, responsibilities, and activities that may explicitly or implicitly expose it to future spending. These "fiscal exposures" range from explicit liabilities, such as environmental cleanup and disposal, to the implicit promises embedded in current policy or public expectations, such as assistance following a major disaster. Policymakers may benefit from a better understanding of the long-term costs of decisions when they are made. For large and significant spending programs and tax provisions, consideration of estimates of present values for the long-term commitments implied could facilitate analysis and decisionmaking.
While the fiscal exposure concept focuses only on items that may expose the government to future spending, some new or existing tax expenditures may have uncertain or accelerating future growth paths with long-term implications. These would need to be considered concurrently with long-term spending exposures in addressing long-term fiscal sustainability. Confronting the nation’s fiscal challenge will require a fundamental reexamination and reprioritization of the entire set of tools the federal government can use to address national objectives, including major spending and tax policies and programs. To effectively respond to social, economic, and security changes and challenges emerging in the 21st century, the federal government cannot accept what it does, how it does it, who does it, and how it is financed as “givens.” To assist Congress in reexamining the base of government, we issued a report that provides examples of the kinds of difficult choices the nation faces with regard to discretionary spending; mandatory spending, including entitlements; as well as tax policies and compliance activities. The tax policies and programs financing the federal budget can be reviewed with an eye toward the overall level of revenue needed to fund federal operations and commitments, the mix of taxes that should be used, and the extent to which the tax code is being used to promote certain societal objectives. Some tax expenditures may not always be efficient, effective, or equitable, and consequently, information on these attributes can help policymakers make more informed decisions as they adapt current policies in light of our fiscal challenges and other overarching trends. Periodic reviews of tax expenditures could help to establish whether these programs are relevant to today’s needs; if so, how well tax expenditures have worked to achieve their objectives; and whether the benefits from particular tax expenditures are greater than their costs. 
To measure benefits and costs, information is needed concerning their effects on economic efficiency, effectiveness, distributional equity, and administration and compliance costs. To the extent that periodic reviews show that specific tax expenditures are not effective, efficient, or equitable, those tax expenditures might be eliminated or redesigned, perhaps at a lower cost in revenue forgone. Coordinated reviews of tax expenditures with related federal spending programs could assess the relationships and interactions of programs within similar mission areas and identify which strategies are effective. Policymakers could use such evaluations to reduce overlap and inconsistencies and direct scarce resources to the most effective or least costly methods to deliver federal support. Tax expenditures, if well designed and effectively implemented, can be an effective tool and appropriate to further some federal goals and objectives. For those activities that merit a subsidy (where too little of the activity would otherwise be undertaken), subsidies through the tax code are one option. For example, a tax expenditure for medical insurance would improve economic efficiency if, absent a subsidy, too few workers would purchase insurance and the tax expenditure encouraged workers to insure in a cost-effective manner. Because the benefits from research may not fully accrue to the firms that bear the costs of research, a tax expenditure aimed at spurring private-sector investment in research and development may be an appropriate response assuming it stimulates additional research whose benefits exceed the social costs associated with the forgone revenues. However, studies we and others have done raise concerns about the efficiency, effectiveness, or equity of some tax expenditures and about how tax expenditures relate to other federal activities aimed at the same mission area. 
While tax expenditures may be intended to improve economic efficiency, poor targeting or design may introduce additional economic inefficiencies. For example, the income tax exclusion of employer-paid health insurance premiums, by shifting a portion of the costs to all taxpayers, reduces the after-tax cost of insurance for the beneficiary. The income tax exclusion is credited with increasing health care coverage for employees, and the risk pooling under group health insurance generally allows employees to obtain insurance at lower costs than in the individual insurance market. However, this tax benefit also leads people to obtain more coverage than they would otherwise and increases the demand for health care by enabling those insured to obtain services at discounted prices. Some researchers believe that the unlimited availability of the exclusion for employer-provided health insurance has led to excessive use of health care services, which, in turn, has helped to drive up health care prices faster than the overall price level. Capping the exclusion at the average premium cost has been suggested as one option to improve the economic efficiency of this tax expenditure and reduce the associated revenue loss; another option suggested is replacing the tax exclusion with a tax credit to improve equity since the tax savings per dollar of premium would be the same for all taxpayers. In another example, the mortgage interest deduction encourages home ownership by lowering the costs of borrowing for taxpayers who itemize their deductions. However, by doing so, the deduction encourages households to invest more in housing and less in other assets that might contribute more to the nation’s productivity and economic capacity. According to CBO’s Budget Options, limiting the deductibility of interest to $500,000 of mortgage debt might still provide taxpayers with a sizable incentive to become homeowners and could boost investment in businesses and education. 
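The borrowing-cost mechanics described above can be sketched with hypothetical figures; the mortgage size, interest rate, and marginal tax rate are illustrative assumptions, and the $500,000 cap mirrors the CBO option cited:

```python
# Sketch: the mortgage interest deduction lowers after-tax borrowing costs
# for an itemizing taxpayer; capping deductible debt limits the subsidy
# on the portion above the cap. All inputs are hypothetical.
def after_tax_interest(mortgage, rate, marginal_rate, deductible_cap=None):
    interest = mortgage * rate
    deductible_debt = mortgage if deductible_cap is None else min(mortgage, deductible_cap)
    tax_savings = deductible_debt * rate * marginal_rate
    return interest - tax_savings

# $600,000 mortgage at 6 percent interest, 28 percent marginal rate
print(f"uncapped: ${after_tax_interest(600_000, 0.06, 0.28):,.0f}")                        # $25,920
print(f"capped:   ${after_tax_interest(600_000, 0.06, 0.28, deductible_cap=500_000):,.0f}")  # $27,600
```

With the cap, interest on mortgage debt above $500,000 no longer generates a tax saving, so the after-tax cost of the larger mortgage rises while the homeownership incentive on the first $500,000 of debt remains.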
Tax expenditures may not be an effective way to achieve federal goals if targeting them to entities or activities meant to receive the benefits is difficult, if they subsidize activities that would have been undertaken without their stimulus, or if they serve to exacerbate other key private sector and public policy challenges. For example, the income tax exclusion of employer-paid health insurance premiums reduces the after-tax cost of insurance for the beneficiary. However, the exclusion offers no benefit to workers whose employers do not offer health benefits or who purchase their own insurance. Further, this tax benefit also leads people to obtain more comprehensive coverage than they would otherwise and could increase the demand for health care to the extent that it shields those insured from the full costs of health care, complicating efforts to moderate health care spending. The exclusion also tends to favor higher-income workers, who are more likely to have employer-sponsored coverage. In another example, individual retirement accounts (IRAs) also receive preferential tax treatment, with $7.5 billion in estimated revenue losses in fiscal year 2004. Contributions may be tax-deductible depending on the IRA type, and earnings generally are not taxable until distribution and not taxable at all in some cases. Although the tax benefits indeed seem to encourage individuals to contribute to these kinds of accounts, the amounts contributed may not be totally new saving. Some contributions may represent amounts that would have occurred without the tax incentives or amounts shifted from taxable assets or financed by borrowing. In a 1996 symposium examining universal deductible IRAs available in the early 1980s, researchers reached three widely divergent conclusions: (1) yes, most contributions represented new saving; (2) no, most IRA contributions were not new saving; and (3) maybe, about 26 cents of each dollar contributed may have represented new saving.
More recent research examining the universal IRA experience estimated that at most 9 cents of each dollar contributed represented new saving. Since 1986, Congress has restricted IRA eligibility for higher-income taxpayers and increased the contribution limits, and the overall effect of IRAs on personal saving remains subject to considerable debate. Although tax expenditures, by design, result in individuals with similar incomes and expenses paying differing amounts of tax depending on whether they engage in tax-subsidized activities, tax expenditures still may raise equity concerns. Some tax expenditures benefit mainly upper-income taxpayers because they are most likely to itemize and because the value of tax expenditures is generally greatest for those in higher tax brackets. Tax expenditures also can contribute to mission fragmentation and program overlap, and this, in turn, creates the potential for duplication and service gaps. Though sometimes necessary to meet federal priorities, mission fragmentation and program overlap can create an environment in which programs do not serve participants as efficiently and effectively as possible. Like spending programs, tax expenditures may reduce government effectiveness to the extent that they duplicate or interfere with other federal programs. For example, in the higher education mission area, the federal government helps students and families save and pay for the costs of postsecondary education through tax expenditures and longer-standing federal financial aid programs, consisting of grants, loans, and work-study income. Since the 1990s, the federal government has offered multiple tax incentives to help families pay for postsecondary education, including the nonrefundable Lifetime Learning and HOPE tax credits, deductions for qualifying postsecondary expenses and interest on student loans, and two tax-preferred ways to save for future education expenses.
The tax-preferred saving vehicles interact with the traditional federal aid system and can affect the net federal assistance received. Further, some tax filers do not appear to make the most effective use of certain education-related tax incentives, and we have found that some people who appear eligible for the tuition deduction and/or the tax credits did not claim them. One reason may be that the differing income phaseouts and interactions among the tax credits and deductions are difficult for taxpayers to understand; CBO, JCT, IRS’s National Taxpayer Advocate, Treasury, and others have suggested ways to consolidate the education tax credits and deductions. Others have also questioned the efficiency, effectiveness, and equity of other tax expenditures and suggested ways to design and better target specific provisions. In December 2004, the IRS National Taxpayer Advocate designated the complexity of the Internal Revenue Code, including the complexity of reporting requirements related to tax expenditures, as the most serious problem facing taxpayers and the IRS. The IRS National Taxpayer Advocate also recommended consolidating the various types of retirement saving vehicles and creating uniform rules regarding early withdrawals, plan loans, and portability. In its January 2005 report to the Senate Finance Committee, JCT staff presented various options to improve tax compliance and reform tax expenditures. Options include repealing some tax expenditures and restructuring others to simplify the law or achieve the intended purpose in a more fair or efficient way. In its February 2005 budget options compendium prepared for the House and Senate Budget Committees, CBO listed several options to eliminate or restructure tax expenditures.
Options include further limiting the tax benefit of itemized deductions to the 15 percent rate for higher-bracket taxpayers and capping itemized deductions for state and local taxes and charitable contributions to the amount exceeding 2 percent of adjusted gross income. Finally, in December 2004 for the Senate Budget Committee, CRS updated its biennial compendium on tax expenditures. For each tax expenditure, this volume includes JCT’s revenue loss estimate, the legal authorization, a description of the tax provision, its impact (including the distribution of benefits, when available), the rationale at the time of adoption, an assessment summarizing the arguments for and against the provision, and citations to relevant research. According to CRS, congressional budget decisions will take into account the full spectrum of federal programs only when tax expenditures are considered in conjunction with direct spending programs.

Inadequate or missing data and difficulties in quantifying the benefits of some tax expenditures can impede studies of their efficiency, effectiveness, and equity. A key challenge is that data necessary to assess how often a tax expenditure is used and by whom generally would not be collected on tax returns unless IRS needs the information to know the correct amount of taxes owed or is legislatively mandated to collect or report the information. For example, tax exclusions—including those for employer-provided health insurance and pensions, which are among the largest tax expenditures—generally are not reported on individual taxpayers’ returns. In some cases, IRS may combine reporting requirements to minimize its workload and taxpayer burden, and as a result, the information collected may not identify specific beneficiaries or activities targeted by a tax expenditure.
For example: In our 2002 report on three tax expenditures meant to encourage employment of the disabled, among other economically disadvantaged workers, we could not determine the amounts used to hire, retain, and accommodate workers with disabilities. We found that information on the work opportunity and disabled access credits was not available from tax data because tax returns provided only the total amount of credits reported, and employers could claim the work opportunity credit for employing other types of workers and claim the disabled access credit for expenditures made to accommodate customers with disabilities. Also, information regarding use of the barrier removal deduction for providing transportation or architectural accommodations was not available in IRS databases. As we reported in 2003, for one of the seven Liberty Zone tax benefits, the business employee credit, IRS was in the process of collecting but was not planning to report information about the number of taxpayers claiming the credit and the amount of credit claimed. IRS was also not planning to collect or report information about the use of the other six benefits, and taxpayers do not report these benefits as separate items on existing returns. For example, taxpayers added the amount of depreciation they were allowed under the Liberty Zone special depreciation allowance benefit to other depreciation expenses and reported their total depreciation expenses on their returns. IRS officials said that they do not need information on each specific benefit claimed to properly target their enforcement efforts. Further, IRS’s financial management system does not currently have cost accounting capabilities. As a result, comparisons of the costs of administering existing or proposed tax expenditures with similar administrative costs for spending programs may be impossible.
Regarding taxpayer compliance costs, although IRS is working to develop improved estimates of taxpayer compliance burden, it is not yet clear whether this modeling effort will provide estimates of additional compliance costs that may result from particular tax expenditures. According to IRS officials, IRS seeks to collect information necessary to determine whether taxpayers have accurately reported their income and calculated the correct amount of tax liability. By focusing on information essential to administering the tax code, IRS aims to ensure that taxpayers are not burdened unnecessarily by record keeping and reporting, and IRS can minimize its own administrative costs for data collection and processing. For tax expenditures recorded on particular lines on tax forms, such as deductions and credits for individual taxpayers, data on the use of these tax expenditures are available. IRS Statistics of Income Division publications detail the number of individual tax returns on which taxpayers claimed each deduction or credit, the total amounts claimed, and the distribution of claims among taxpayers by income level. If policymakers conclude that additional data would facilitate reexamining a particular tax expenditure, decisions would be required on what data are needed, who should provide them, who should collect them and how, what collection would cost, and whether the benefits of the additional data warrant that cost. Another factor to consider is how to facilitate data sharing and collaborative evaluation efforts. For example: Limited data are available on the prevalence and use of business-owned life insurance, and GAO has reported that more comprehensive data could be useful in assessing the tax-favored treatment of this investment.
Data on the amount of tax-free income that businesses received from death benefits could help explain the potential effect of changes to the tax treatment of policies on tax revenues. Businesses holding the policies or insurance companies that sold them could provide this and other data. Several agencies, including Treasury and the Securities and Exchange Commission, already collect some financial information from businesses and insurers and could be tasked to collect additional data for tax policy purposes. In the higher education area, the Department of Education (Education) is unable to analyze the use of higher education tax credits or their effects because it lacks access to individual taxpayer data needed to identify users of the credits. Treasury has access to taxpayer data but has not used these data for evaluating the education tax credits since their implementation in 1998. In 2002, GAO recommended that Education and Treasury collaborate in studying the impact of tax credits and student aid programs on postsecondary attendance, choice, completion, and costs. A key first step would be identifying opportunities for, and limits to, data sharing and developing a plan to address data needs, but little action has been taken. In the case of the empowerment zone, enterprise community, and renewal community programs, the lack of tax benefit data limits the ability of the Department of Housing and Urban Development (HUD) and the Department of Agriculture (USDA) to administer and evaluate the overall programs. We recommended that HUD, USDA, and IRS collaborate to (1) identify the data needed to assess the use of the tax benefits and the various means of collecting such data; (2) determine the cost-effectiveness of collecting these data, including the potential impact on taxpayers and other program participants; (3) document the findings of their analysis; and, if necessary, (4) seek the authority to collect the data if a cost-effective means is available.
When data on the cost and use of tax expenditures are available or can be reasonably estimated and other relevant data are available, economic analysis can be useful in evaluating whether a tax expenditure is efficient, effective, or equitable. Econometric modeling analysis can estimate how a tax expenditure affects the prices and quantities of targeted goods and services and determine how taxpayers’ incomes are affected. Although isolating and quantifying the outcomes associated with tax expenditures is challenging—just as it is for spending programs—research results are useful in demonstrating how particular tax expenditures work or providing insight on ways to refine their design. For example, research has generally shown that the EITC effectively increases recipients’ participation in the labor force, particularly for single parents, and lifts millions of recipients out of poverty. Some tax expenditures are enacted on a temporary basis, specifically to provide an opportunity for evaluating their effects before they are extended. For example, the research tax credit, enacted on a temporary basis in 1981 and extended 11 times as of 2004, was substantially modified in 1989 after researchers showed the original credit formula undercut the incentive it was intended to provide to undertake additional research spending. In some cases, economic research has not yielded definitive results or was limited by data and methodological issues. For example, although the various tax expenditures aimed at encouraging saving for, among other things, retirement, education, and health care have resulted in substantial sums being placed in these tax-preferred accounts, economists disagree about whether tax incentives, such as for IRAs, are effective in increasing the overall level of personal saving.
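The design flaw in the original research credit noted above can be illustrated with a stylized calculation. Under the pre-1989 design, the credit applied to qualified spending above a base tied to a moving average of the prior three years' research spending, so an extra dollar of research earned the credit immediately but raised the firm's base in later years. The statutory rate, discount rate, and function below are illustrative assumptions, not figures from this report:

```python
def marginal_effective_credit_rate(statutory_rate=0.25, discount=0.10, window=3):
    """Stylized pre-1989 incremental research credit: an extra dollar of
    research earns the credit now but raises the moving-average base by
    1/window in each of the next `window` years, clawing back most of
    the incentive in present-value terms."""
    clawback = sum(
        (statutory_rate / window) / (1 + discount) ** t
        for t in range(1, window + 1)
    )
    return statutory_rate - clawback

# With a 10% discount rate, the effective marginal incentive is only
# about 4 cents per dollar, far below the 25-cent statutory rate.
print(round(marginal_effective_credit_rate(), 4))  # 0.0428
```

With no discounting the clawback exactly offsets the credit, which is the sense in which the original formula undercut the incentive it was meant to provide.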
In the case of the research credit, GAO reported in 1996 that studies done at that time provided mixed evidence on the amount of spending stimulated and used publicly available data that were not a suitable proxy for tax return data. To fully assess the value to society of the research tax credit, researchers need to look at more than just the amount of spending stimulated per dollar of revenue cost. Comparisons should include (1) the total benefits gained by society from research stimulated by the credit and (2) the estimated costs to society resulting from the collection of taxes required to fund the credit. The social benefits of the research conducted by individual companies include any new products, productivity increases, or cost reductions that benefit other companies and consumers throughout the economy. Although most economists agree that research spending can generate social benefits, the effects of the research on other companies and consumers are difficult to measure. Ultimately, evaluation results could show how well tax expenditures are working, both to improve the management of individual tax expenditures and to help ensure prudent stewardship of taxpayers’ resources. Whether in time of deficit or surplus, reexamining both the spending and tax sides of the budget is essential to ensure the reasonableness, relevancy, and sustainability of existing programs and position the nation for the future. In the case of the EITC, Treasury and IRS are using evaluation results to identify ways of reducing erroneous claims while maintaining participation among eligible claimants and minimizing taxpayer burden and IRS’s administrative costs. Additional evaluations of other tax expenditures may identify opportunities to retarget or eliminate ineffective or outdated tax expenditures.
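The two-part comparison described above, total social benefits from credit-induced research versus the cost to society of raising the revenue, can be sketched as a simple calculation. All parameter values below (research stimulated per revenue dollar, social return per research dollar, and the excess burden of taxation) are hypothetical:

```python
def net_social_value(revenue_cost, research_per_dollar, social_return,
                     excess_burden=0.25):
    """Stylized cost-benefit test: compare (1) social benefits of
    credit-induced research with (2) the societal cost of the forgone
    revenue, grossed up by a deadweight-loss markup (`excess_burden`)."""
    benefits = revenue_cost * research_per_dollar * social_return
    costs = revenue_cost * (1 + excess_burden)
    return benefits - costs

# $1 billion of credit, $1.10 of research stimulated per dollar of
# revenue cost, $1.30 of social benefits per research dollar:
print(round(net_social_value(1.0, 1.10, 1.30), 2))  # 0.18
```

The point of the sketch is that a credit can stimulate more than a dollar of research per revenue dollar and still fail this test if the spillover benefits or the cost of raising revenue are unfavorable.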
Tax expenditures, unless well designed to correct market failures, can distort economic decisions in ways that reduce economic performance from what it otherwise could be and thereby lower our future economic well-being. If a tax expenditure or group of tax expenditures is reduced or eliminated, any resulting increase in tax revenues could be offset if policymakers deem that to be appropriate fiscal policy. In any event, in order to raise a given amount of federal revenue, tax rates must be raised higher than they otherwise need to be due to revenue losses from tax expenditures. Thus, the net change after tax rate adjustments could, depending on overall congressional priorities and preferences, result in tax reductions for many taxpayers in place of the preferential treatment for some taxpayers. According to a recent estimate, a broad-based income tax system—eliminating basically all credits, deductions, special rates, and exclusions for employer-provided fringe benefits and employee contributions to retirement accounts, as well as eliminating the AMT—could raise about the same amount of revenue as the current income tax system while lowering tax rates by about one-third.

Although OMB and Treasury in 1994 supported expanding federal reviews of tax expenditures, the Executive Branch made little progress over the past decade to integrate tax expenditures in the budget presentation and to incorporate tax expenditures under review processes that apply to spending programs, as we recommended in 1994. Even though the sum of tax expenditure outlay-equivalent estimates is about the same magnitude as discretionary spending overall and greater than outlays in some budget functions, this is not readily visible to policymakers and the public because tax expenditures are not integrated in the budget presentation.
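The arithmetic behind the base-broadening estimate quoted above can be sketched with stylized numbers; the base and rate figures below are illustrative assumptions, not the inputs to the actual estimate:

```python
def revenue(base, rate):
    """Revenue raised on a taxable base at a flat, stylized rate."""
    return base * rate

narrow_base, current_rate = 6.0, 0.30  # stylized taxable base ($ trillions) and rate
broad_base = 9.0                       # base grows ~50% once preferences are removed

# Rate that raises the same revenue on the broadened base:
new_rate = revenue(narrow_base, current_rate) / broad_base
print(round(new_rate, 3))  # 0.2 -- the rate falls by one-third (0.30 to 0.20)
```

The general relationship is simply that, for fixed revenue, rates fall in proportion to the growth of the base: broadening the base by half allows rates roughly one-third lower.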
Since their initial efforts to outline a framework for evaluating tax expenditures and preliminary performance measures, OMB and Treasury have largely ceased to make progress and have retreated from setting a schedule for evaluating tax expenditures. One of the key impediments to moving forward in conducting reviews of tax expenditures’ performance is the continuing lack of clarity about the roles of OMB, Treasury, IRS, and departments or agencies with outlay program responsibilities. So far, GPRA plans and reports are underutilized as a way to provide more information about the performance of tax expenditures and their contributions relative to spending programs. Tax expenditures are not subject to annual budget reviews, and OMB has not generally subjected them to scrutiny under PART in tandem with spending programs sharing common, crosscutting goals. Integrating tax expenditure costs in the annual budget presentation is crucial to providing a comprehensive picture of federal resources to facilitate reexamining the base. As a start in acting on our 1994 recommendation, OMB began presenting revenue loss sums for tax expenditures alongside outlays and credit activity for each budget function in the fiscal year 1998 budget. These summary tables were a useful starting point in highlighting the relative magnitude of tax expenditures across mission areas. However, OMB discontinued the reporting practice after the fiscal year 2002 budget, and instead, the Analytical Perspectives contains Treasury’s list of tax expenditures with associated revenue loss estimates for each one. Isolating tax expenditure cost information in a supplemental volume, however, provides a less comprehensive picture for policymakers and the public to compare all of the policy tools used within a mission area, such as health care or energy, because all the tools are not displayed together in the budget. 
OMB has demonstrated that it is feasible to display tax expenditure totals alongside spending programs in each budget function. Such a display is a first step in providing the public and policymakers with a more useful and accurate picture of the extent of federal support and activities. GAO also recommended in 1994 that the budget presentation include, to the extent possible, information to highlight for policymakers and the public the effectiveness, distributional equity, and economic efficiency of all federal resources allocated in a mission area. In the tax expenditure chapter in Analytical Perspectives, OMB added a section outlining possible performance measures developed by Treasury, which could be used to present information about the performance of tax expenditures. Although this overview was initially introduced in the 1997 budget and expanded in the 1999 budget, no performance information is actually displayed. OMB states that the measure examples provided are “illustrative” in nature, acknowledges that the performance measure discussion “although broad, is nonetheless incomplete,” and notes that many tax expenditures are not explicitly cited.

The Chief Financial Officers Act, as expanded by the Government Management Reform Act of 1994, required federal agencies to prepare annual audited financial statements beginning in fiscal year 1996. OMB Circular A–136, Financial Reporting Requirements, requires agencies to combine the annual GPRA program performance report with the financial statements and other information in a combined performance and accountability report. In accordance with generally accepted accounting principles, the basis on which federal agencies are required to prepare their financial statements, tax expenditures may be presented as other accompanying information.
The Federal Accounting Standards Advisory Board (FASAB), which promulgates federal accounting standards, recognized that tax expenditures, which can be large in relation to spending programs that are measured under federal accounting standards, may not be fully considered in entity reporting. FASAB based its views, in part, on the fact that, in some cases, the association of tax expenditures with particular programs is not clear and the information is available elsewhere. The Board agreed to permit reporting entities to present, as other accompanying information, information on tax expenditures that the reporting entity considers relevant to its programs, if suitable explanations and qualifications are provided. As a result, tax expenditure amounts, which in some cases are larger than similar spending programs, are not required to be disclosed to the public as part of federal agencies’ financial statements nor are they disclosed in the consolidated financial statements of the federal government. Similarly, OMB’s guidance for the performance and accountability reports does not require reporting of tax expenditure information in agencies’ reports. Reporting such information would ensure greater transparency of and accountability for tax expenditures. OMB has not designed and implemented a structure for conducting reviews of tax expenditures’ performance, as we recommended in 1994. Our recommendation was consistent with language in the Senate Committee on Government Affairs’ Report on GPRA, which specified that the Director of OMB was to establish an appropriate framework for periodic analyses of the effects of tax expenditures in achieving performance goals. To significantly increase the oversight and analysis of tax expenditures, the committee report also called for a schedule for periodic tax expenditure evaluations. 
The ultimate goal of designing a structure for conducting performance reviews of tax expenditures was to begin developing and presenting performance information in the federal budget that would help demonstrate the relative effectiveness, efficiency, and equity of federal outlays and tax expenditure efforts within a mission area. In our 1994 report, we emphasized that in designing the structure for tax expenditure performance reviews, OMB should consider (1) the roles of OMB, Treasury, and departments or agencies with outlay program responsibilities in assessing the performance of tax expenditures and their relationship and interaction with related spending programs, and (2) which tax expenditures and outlay programs are related or interact and should be jointly considered. GAO recommended that OMB and Treasury conduct case studies of the proposed review structure to identify (1) successful methods agencies devise for reviewing tax expenditures’ performance, (2) how best to report the results of these reviews, and (3) how to ensure that adequate resources are available for such reviews. Although OMB, working with Treasury, took a number of steps consistent with our recommendation, it has not resolved the roles of OMB, Treasury, and departments or agencies with outlay program responsibilities; established a schedule for reviewing tax expenditures; or addressed lessons learned from tax expenditure case study reviews that Treasury performed. If the Executive Branch cannot define roles and set firm plans, it will continue to face additional challenges in developing objective, measurable, and quantifiable performance measures for tax expenditures that support federal missions and goals.

Defining roles of agencies. One of the key impediments to moving forward in conducting reviews of tax expenditures’ performance is the continuing lack of clarity about the roles of OMB, Treasury, IRS, and departments or agencies with outlay program responsibilities.
According to officials at OMB, it is difficult to determine which agencies in addition to Treasury and IRS have jurisdiction over particular tax expenditures. For example, one OMB official noted that tax expenditures meant to encourage savings were not the purview of any single agency. OMB officials also stated that OMB does not have the expertise or resources to conduct its own comprehensive analyses of tax expenditures, so individual agencies should take responsibility for identifying tax expenditures that affect their missions, with Treasury’s Office of Tax Analysis leading efforts to evaluate tax expenditures. Without clarification of the roles of federal agencies, inaction, overlap, or inconsistency in evaluating tax expenditures can occur. For example, in 2002 we reported that gaps existed in monitoring the relative effectiveness of Title IV grants and loans and the HOPE and Lifetime Learning tax credits in promoting postsecondary education. The lack of collaboration between the Department of Education and Treasury left little information available to help Congress weigh the relative effectiveness of grants, loans, and tax credits. Although data and methodological challenges make it difficult to isolate the impact of these tools, some academic researchers have used statistical techniques and research designs to mitigate these challenges. We recommended in 2002 that the departments develop a plan to share data and collaborate to provide Congress with evidence about the impact of higher education tax credits and student aid, but little action has been taken to implement the recommendation. To define the roles of federal agencies in reviewing tax expenditures, OMB, working with Treasury and other federal agencies, will need to exercise judgment in resolving how to address tax expenditures spanning mission areas.
In some cases, Treasury could take the lead, such as in evaluating tax expenditures that broadly support investment and saving, or other agencies could work with Treasury to evaluate tax expenditures that directly affect their mission areas. For example, an evaluation of the various energy supply tax expenditures might involve both Treasury and the Department of Energy in assessing their effects on increasing production as well as on energy security and the environment. Establishing a schedule for evaluations. Periodic reviews of tax expenditures are also impeded because OMB has not developed a schedule for such reviews. In its 1997 GPRA report and again in the fiscal year 1999 budget, OMB set the expectation that the Executive Branch would lay out a schedule for tax expenditure evaluations. Beyond three initial pilot studies in 1997, however, no schedule has been set for further evaluations or case studies to explore methods and resource needs for measuring and reporting tax expenditure performance. As the roles of federal agencies are clearly defined, OMB and Treasury, working with other agencies, would be positioned to establish a schedule for tax expenditure evaluations. Opportunities exist to develop a strategic approach to the selection and prioritization of areas in allocating scarce evaluation resources. In our January 2004 report on OMB’s PART, we recommended that OMB target PART assessments based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities and that OMB select similar programs for review in the same year to facilitate comparisons and tradeoffs. Similar considerations would be useful in setting a schedule for tax expenditure evaluations. Testing the evaluation framework. 
Although OMB outlined an initial framework for tax expenditure analysis in its May 1997 GPRA report to the President and Congress, OMB has not taken steps to address lessons learned from tax expenditure case study reviews that Treasury performed. OMB’s framework focused on the methodology that could be used to assess the performance of tax expenditures. OMB emphasized that developing a framework that is comprehensive, accurate, and flexible enough to reflect the objectives and effects of the wide range of tax expenditures would be a significant challenge. The initial framework for evaluating tax expenditures was expected to follow the basic structure for performance measurement—inputs, outputs, and outcomes. For tax expenditures, the primary input is the revenue loss. The outputs are the quantitative or qualitative measures of goods and services, or changes in investment and income, produced by the tax expenditures. Outcomes, in turn, were defined as the changes in the economy, society, or environment that the tax expenditures aim to accomplish. In 1997, Treasury did three pilot evaluations of selected tax expenditures to test the evaluation methods that OMB had described in its framework for tax expenditure analysis. In addition to seeking to learn lessons about applying the framework, the pilots were also intended to help identify resource needs for evaluating tax expenditures. Treasury selected one pilot each to be done by the individual, corporate, and international units within its Office of Tax Analysis. Results from the three tax expenditure pilots—the exclusion for workers’ compensation benefits, the tax credit for non-conventional fuels, and the tax exclusion for certain amounts of income earned by Americans living abroad—were summarized alongside each tax expenditure’s description in the tax expenditure chapter of the Analytical Perspectives volume of the fiscal year 1999 budget.
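The inputs-outputs-outcomes structure that OMB outlined for tax expenditure reviews can be sketched as a simple record. The field names and example values below are illustrative, not taken from OMB's framework or Treasury's pilots:

```python
from dataclasses import dataclass, field

@dataclass
class TaxExpenditureReview:
    name: str
    revenue_loss: float                          # input: forgone revenue
    outputs: list = field(default_factory=list)  # goods/services or investment changes produced
    outcomes: list = field(default_factory=list) # changes in economy/society/environment sought

# Hypothetical entry for one of the pilot topics:
review = TaxExpenditureReview(
    name="credit for non-conventional fuels",
    revenue_loss=1.0,  # stylized figure, in $ billions
    outputs=["additional non-conventional fuel production"],
    outcomes=["reduced reliance on conventional energy sources"],
)
print(review.name)
```

Structuring each review this way makes the measurement chain explicit: the revenue loss is observable from budget estimates, while outputs and outcomes are the harder-to-measure quantities the pilots were meant to explore.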
Although OMB originally expected to complete additional evaluations to refine the tax expenditure framework and improve performance measures, no further pilot evaluations have been completed. In reporting the results of these pilots, Treasury said that much of the data needed for thorough analysis were not available and that in at least one case, it was difficult to identify a clear purpose for the tax expenditure. Treasury did not discuss the resources that would be needed to continue doing such evaluations. However, OMB officials we interviewed reiterated that the data availability issues raised in the 1997 pilots remain a major challenge, and data constraints limit the assessment of the effectiveness of many tax expenditures. To improve the data available to assess the effects of some major tax expenditures, principally those aimed at personal savings, Treasury and IRS are developing a data set that is to follow a sample of individual income taxpayers over at least 10 years, beginning with tax year 1999. The new data set aims to capture the changing demographic and economic circumstances of individual taxpayers for use in analyzing the effects of changes in tax law over time. In addition to the panel sample, OMB reported in the fiscal year 2006 budget that it is working with Treasury’s Office of Tax Analysis and other agencies to improve data available for assessment of saving-related tax expenditures. No time frame was given in the 2006 budget for when any results would be reported.

The challenges in producing credible performance information and the ability of federal agencies to produce evaluations of their programs’ effectiveness are not unique to tax expenditures. As our work on GPRA and PART implementation shows, the credibility of performance data has been a long-standing weakness. Developing and reporting credible information on outcomes achieved through federal programs remains a work in progress.
In past reports, we have identified several promising ways agencies can maximize their evaluation capacity. For example, careful targeting of federal evaluation resources on key policy or performance questions and leveraging federal and nonfederal resources show promise for addressing key questions about program results. Other ways agencies might leverage their current evaluation resources include adapting existing information systems to yield data on program results, drawing on the findings of a wide array of evaluations and audits, making multiple use of an evaluation’s findings, mining existing databases, and collaborating with state and local program partners to develop mutually useful performance data. Congressional expectations for reviews of tax expenditures in connection with agencies’ reviews of related outlay and other programs generally have not been met. Enacted in 1993, GPRA is designed to inform congressional and executive decisionmaking by providing objective information on the effectiveness and efficiency of federal programs and spending. GPRA requires agencies to measure performance toward the achievement of annual goals and report on their progress in annual program performance reports. Through the strategic planning requirement, GPRA requires federal agencies to consult with the Congress and key stakeholders to regularly reassess their missions and strategic goals as well as the strategies and resources they will need to achieve their goals. Although GPRA offers a promising opportunity for the Executive Branch to develop useful information about the results of tax expenditures, agencies are not using their GPRA strategic plans and annual performance plans and reports to assess tax expenditures and their performance relative to spending programs contributing to the same strategic goals and objectives. 
Without integrating tax expenditures that have a direct bearing on federal missions and goals, policymakers may not have complete information to fully evaluate whether the government is achieving results or how tax expenditures interact with or compare to related spending programs. The Senate Governmental Affairs Committee Report on GPRA stated that tax expenditures should be taken into consideration in a comprehensive examination of government performance. The report stated that a schedule for periodically assessing the effects of specific tax expenditures in achieving performance goals should be included in the annual performance plans and that annual performance reports would subsequently be used to report on these tax expenditure assessments. In addition, the report noted that these assessments should consider the relationship and interactions between spending programs and tax expenditures and the effects of tax expenditures in achieving federal performance goals.

Although GPRA expanded the supply of performance information generated by federal agencies, evaluating crosscutting federal efforts continues to be a challenge. GPRA requires the President to include in his annual budget submission a federal government performance plan. Congress intended that this plan provide a single cohesive picture of the annual performance goals for the fiscal year. The governmentwide performance plan could help Congress and the Executive Branch address critical federal performance and management issues, including redundancy and other inefficiencies in how we do business. However, this provision has not been fully implemented, and the current agency-by-agency focus of the budget does not provide a broad, integrated perspective on planned performance toward governmentwide outcomes. As envisioned by Congress, the governmentwide plan could relate and address the contributions of alternative federal strategies, including tax expenditures, to governmentwide goals.
Agencies’ annual performance plans and reports could highlight crosscutting program efforts and provide evidence of the coordination of those efforts. We have previously recommended that OMB fully implement GPRA’s requirement to develop a governmentwide plan to provide a more cohesive picture of the federal government’s goals and strategies. Prior to a 2003 revision, OMB’s Circular A-11 guidance on GPRA reporting stated that descriptions should be provided for the use of tax expenditures in annual performance plans when achievement of program or policy goals is dependent upon these governmental actions, and that annual performance reports must include the results of any assessment of how specific tax expenditures affect the achievement of the agency’s performance goals. However, the circular also stated that few agencies were responsible for such analyses. In addition, as part of a broader A-11 revision in 2003, OMB streamlined its GPRA guidance and no longer describes tax expenditures in its guidance on performance plans and performance reports in the circular. According to OMB, it is up to individual agencies to decide whether to address tax expenditures in their GPRA reports, and many agencies focus on outlay programs over which they have more direct control. OMB officials told us that some agencies see tax expenditures as closely related to what they do and some do not, or agencies might not have enough knowledge about tax expenditures to consider them carefully. Our review of selected GPRA Performance and Accountability reports indicated that acknowledgment of tax expenditures’ role in achieving federal performance goals varied by agency. For example: The Department of Energy (DOE) and HUD both acknowledged tax expenditures or tax policy as factors that affect agency goals. However, DOE’s fiscal year 2004 report provided no further discussion of how the tax expenditures contributed to achieving the agency’s performance goals.
HUD’s fiscal year 2004 report acknowledged the tax incentives for renewal communities, empowerment zones, and enterprise communities as helping to achieve its objective of providing capital resources to improve economic conditions in distressed communities. As discussed previously, the outlay-equivalent value for tax expenditures amounts to more than other spending in the energy as well as the commerce and housing credit mission areas. The fiscal year 2004 reports released by the Department of Commerce (Commerce), the Department of Veterans Affairs, and the Department of Health and Human Services (HHS) did not mention tax expenditures at all, even though tax expenditures exist under the different mission areas related to these departments. For instance, several large tax expenditures, such as capital gains and accelerated depreciation, are listed by Treasury as related to the Commerce mission area, but it is unclear how, if at all, these tax expenditures relate to Commerce’s performance goals. Also, the income tax exclusion for employer-provided health care, the largest single tax expenditure, clearly intersects with HHS’s mission to assure access to health care. Treasury’s fiscal year 2004 report explicitly identified a few tax expenditures—the New Markets Tax Credit and a new health coverage tax credit—as related to achieving its strategic objective to stimulate U.S. economic growth. In the context of its strategic objective to improve and simplify the tax code, Treasury reported on its efforts to, among other things, simplify the EITC and consolidate the higher education tax benefits. Treasury also reported on its efforts to improve determination of EITC eligibility and educate taxpayers about this provision. Treasury did not include information about tax expenditures as other accompanying information to the financial statements in its 2004 report. Tax expenditures have not been incorporated into Executive Branch budget reviews, as we recommended in 1994.
We recommended that OMB use information on outlay programs and tax expenditures to make recommendations to the President and Congress about the most effective methods for accomplishing federal objectives. We concluded that better targeting by Congress and the Executive Branch of all federal spending and subsidy programs could save resources and increase economic efficiency through (1) better coordination of spending programs with tax expenditures; (2) reduction of overlap and inconsistencies among all federal subsidy programs; and (3) encouragement of trade-offs among tax expenditures, outlays, and loans. The congressional budget process is the annual vehicle through which Congress articulates both an overall fiscal stance—overall targets for spending and revenue—and its priorities across various broad categories. The process provides the overall constraints for spending and revenue actions by Congress for each year and the rules of procedure that can be used to constrain new entitlement and tax legislation not assumed in the annual budget resolution. The conflicts and uncertainties entailed in budgeting and policymaking are often mitigated by focusing decisions on incremental changes in resources each year. As a result, this incremental process focuses disproportionate attention on proposed changes to existing programs and proposals for new programs, with the base of programs often being taken as “given.” Moreover, the process routinely examines only the one-third of federal spending subject to the annual appropriations process. Unlike discretionary spending programs, which are subject to periodic reauthorization and annual appropriation, tax expenditures—like entitlement programs—are permanent law and are generally not subject to a legislative process that would ensure systematic annual or periodic review. 
In addition, the budget rules that were grounded in statute—including discretionary spending caps, pay-as-you-go (PAYGO) limits on mandatory spending and tax cuts—and enforced by executive actions if violated, expired at the end of fiscal year 2002. Before their expiration, PAYGO procedures restricted Congress’ ability to add new tax expenditures or expand existing ones unless offsetting funds could be raised. Because tax provisions are not as visible in the budget as spending programs, there is an incentive for policymakers to use tax provisions rather than spending programs to accomplish programmatic ends. However, both have a negative effect on the government’s “bottom-line.” Reinstituting budget enforcement mechanisms, such as discretionary spending caps, PAYGO discipline on both the spending and tax side, and fiscal benchmarks, could help the President and Congress sort out the many claims on the federal budget, including tax expenditures. Within the Executive Branch, OMB has not used its PART process, which is central to the Executive Branch’s budget and performance integration initiative, to systematically review tax expenditures and promote joint and integrated reviews of tax and spending programs sharing common, crosscutting goals. OMB describes PART as a diagnostic tool meant to provide a consistent approach to assessing federal programs as part of the executive budget formulation process. It applies 25 questions to all “programs” under four broad topics: (1) program purpose and design, (2) strategic planning, (3) program management, and (4) program results (i.e., whether a program is meeting its long-term and annual goals) as well as additional questions that are specific to one of seven mechanisms or approaches used to deliver the program. 
PART is designed to be evidence-based, drawing on a wide array of information, including authorizing legislation, GPRA strategic plans and performance plans and reports, financial statements, inspectors general and GAO reports, and independent program evaluations. Drawing on available performance and evaluation information, the PART questionnaire attempts to determine the strengths and weaknesses of federal programs with a particular focus on individual program results and improving outcome measures. Since the fiscal year 2004 budget cycle, OMB has applied PART to 607 programs (about 60 percent of the federal budget), and given each program one of four overall ratings: (1) “effective,” (2) “moderately effective,” (3) “adequate,” or (4) “ineffective” based on program design, strategic planning, management, and results. A fifth rating, “results not demonstrated,” was given—independent of a program’s numerical score—if OMB decided that a program’s performance information, performance measures, or both were insufficient or inadequate. Over the next 2 years, OMB plans to assess nearly all remaining Executive Branch spending programs. Whereas OMB, through its development and use of PART, has provided agencies with a powerful incentive for improving data quality and availability on the spending side, relatively little progress has been made in evaluating the effectiveness of tax expenditures. So far, OMB has used PART to address tax expenditures in only two cases—the EITC compliance initiative and the New Markets Tax Credit (NMTC). For the EITC, which has outlays for the refundable portion, the direct federal spending PART instrument was used to evaluate IRS’ initiative to improve the payment accuracy rate for the EITC—and not the refundable EITC itself. OMB rated the compliance initiative as “ineffective” in the fiscal year 2004 budget because data showed no progress in reducing the high rates of erroneous payments.
The review did not evaluate the effects of the EITC on workforce participation or examine its contribution relative to other federal programs aimed at reducing poverty. The NMTC, which is administered like a grant by CDFI, was evaluated as part of OMB’s crosscutting review of community and economic development programs. OMB rated the NMTC as “adequate” and reported in 2005 that CDFI had established meaningful long-term and annual performance measures but that data were not yet available to evaluate the effectiveness of the NMTC or establish baselines for the performance measures. We have urged a more comprehensive, consistent, and integrated approach to evaluating all programs relevant to common goals—encompassing spending, tax expenditures, and regulatory programs—using a common framework. Such an analysis is necessary to capture whether a program complements and supports other related programs, whether it is duplicative and redundant, or whether it actually works at cross-purposes to other initiatives. OMB officials we interviewed said that OMB would need Treasury’s assistance to determine what information or criteria to include in a PART instrument tailored to examine tax expenditures. As of July 2005, OMB said that it was planning to review the health insurance tax credit program next year but that it has not decided whether the PART review will be limited to administration or will also cover the program’s tax policy purpose. As we move forward in shaping government for this century, the federal government cannot accept all of its existing programs, policies, functions, and activities as “givens.” Outmoded commitments and operations constitute an encumbrance on the future that can erode the capacity of the nation to better align its government with the needs and demands of a changing world and society. 
Reexamining the base of all major existing federal spending and tax programs, policies, functions, and activities by reviewing their results and testing their continued relevance and relative priority for our changing society is an important step in recapturing our fiscal flexibility and bringing the panoply of federal activities into line with 21st century trends and challenges. The decisions we face involve difficult choices about the appropriate size and role of the federal government and how to finance it. The revenues forgone through tax expenditures reduce resources available to fund other federal activities or they require higher tax rates to raise a given amount of revenue. Reviewing the results of tax expenditures and testing their continued relevance and relative priority is thus an important step toward fiscal responsibility and national renewal. Such a fundamental review of major programs, policies, and activities, including tax expenditures, can serve the vital function of updating the federal government’s approach to meet current and future challenges. Regular and systematic evaluation will be necessary to inform policy decisions about the efficiency, effectiveness, and equity of tax expenditures and whether they are the best tool for accomplishing federal objectives within a functional area. Beginning the governmentwide reexamination process now would enable decisionmakers to be more strategic and selective in choosing areas for review over a period of years. Reexamining selected parts of the budget base over time may make the reviews more feasible and less burdensome, and it would allow decisionmakers to focus on all federal efforts—discretionary spending, mandatory spending, and tax expenditures—sharing common goals.
Unfortunately, over a decade has passed since Congress encouraged systematic reviews of tax expenditures and since we made recommendations to facilitate such reviews and to display information on tax expenditures in the federal budget in a manner that enables policymakers to look at resource commitments across related outlays and tax expenditures. Although specific tax expenditures, such as the EITC and Liberty Zone tax benefits, have received varying degrees of scrutiny, efforts to date have not provided the Congress and others with an integrated perspective on the extent to which programs and tools—including tax expenditures—contribute to national goals and position the government to successfully meet 21st century demands. In addition, the lack of a requirement to disclose tax expenditures in agencies’ annual performance and accountability reports may result in important performance- and cost-related data not being fully considered with other federal resources allocated to achieve similar objectives. Although challenges must be overcome to provide systematic reviews of tax expenditures, these challenges cannot be addressed absent effective leadership within the Executive Branch. Accordingly, we are making several recommendations to OMB. To ensure that policymakers and the public have the necessary information to make informed decisions and to improve the progress toward exercising greater scrutiny of tax expenditures, we recommend that the Director of OMB, in consultation with the Secretary of the Treasury, take the following four actions: resume presenting tax expenditures in the budget together with related outlay programs to show a truer picture of the federal support within a mission area; develop and implement a framework for conducting performance reviews of tax expenditures.
In developing the framework, (1) determine which agencies will have leadership responsibilities to review tax expenditures, how reviews will be coordinated among agencies with related responsibilities, and how to address the lack of credible performance information on tax expenditures; (2) set a schedule for conducting tax expenditure evaluations; (3) re-establish appropriate methods to test the overall evaluation framework and make improvements as experience is gained; and (4) identify any additional resources that may be needed for tax expenditure reviews; develop clear and consistent guidance to Executive Branch agencies on how to incorporate tax expenditures in strategic plans, annual performance plans, and performance and accountability reports, to provide a broader perspective and more cohesive picture of the federal government’s goals and strategies to address issues that cut across Executive Branch agencies; and require that tax expenditures be included in the PART process and any future such budget and performance review processes so that tax expenditures are considered along with related outlay programs in determining the adequacy of federal efforts to achieve national objectives. We provided a draft of this report to OMB, Treasury, and IRS for their review and comments. We received written comments from OMB’s Associate Director for Economic Policy in a letter dated September 2, 2005. These comments are reprinted in app. II along with our analysis of certain issues raised by OMB. OMB disagreed with our recommendations and several of our findings, and also raised concerns about our use of Treasury’s tax expenditure estimates. Where appropriate, we made changes in our report in response to these comments. The Secretary of the Treasury did not submit comments, instead deferring to OMB. IRS staff provided a technical correction that we incorporated.
In commenting on our report, OMB raised concerns about our use of tax expenditure estimates developed by Treasury and reported in the annual federal budget. For example, OMB commented that we accepted uncritically the concept of tax expenditures first advanced in the 1960s and said that we ignored limitations about the “volume” of total tax expenditures. To the contrary, the background section of our draft report, as well as several pages in app. III, clearly identified issues related to the tax expenditure concept, including that characterizing individual provisions as tax expenditures is a matter of judgment, and that disagreements exist about classifying what should be included in the income tax base. Pursuant to the Congressional Budget Act of 1974, the term tax expenditure, as our draft stated, has been used in the federal budget for three decades, and the tax expenditure concept—while not precisely defined—is nevertheless a valid representation of one tool that the federal government uses to allocate resources. Regarding the “volume” of tax expenditures, we acknowledged throughout our draft report limitations in the methodology of summing the individual tax expenditures. To provide an example of the extent that interaction effects among tax expenditure estimates can affect summing them, at our request, Treasury calculated total tax expenditures for five itemized deductions that took these effects into account; we included this information in our draft report. As our report stated, tax expenditure estimates—both those published in the budget as well as those produced by JCT—are the best and only data available to measure the value of tax expenditures and make comparisons to other spending programs. In our opinion, summing the estimates provides perspective on the use of tax expenditures as a policy tool and represents a useful gauge of the general magnitude of government subsidies carried out through the tax code. 
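The interaction effects noted above can be illustrated with a stylized sketch; the rate schedule, income, and deduction amounts below are invented for illustration and are not actual tax expenditure data. Under a progressive rate schedule, the revenue forgone by two deductions taken together can differ from the sum of the losses estimated for each deduction separately, which is why summed estimates serve only as a gauge of general magnitude.

```python
# Stylized illustration (invented figures) of why individual tax
# expenditure estimates do not sum exactly: with progressive rates,
# the revenue cost of two deductions together differs from the sum
# of their separately measured costs.

def tax(income):
    """Toy progressive schedule: 10% up to 50,000, 30% above."""
    if income <= 50_000:
        return 0.10 * income
    return 0.10 * 50_000 + 0.30 * (income - 50_000)

gross_income = 60_000
deduction_a = 8_000   # hypothetical deduction A
deduction_b = 8_000   # hypothetical deduction B

# Tax under current law, with both deductions in place.
baseline = tax(gross_income - deduction_a - deduction_b)

# Revenue loss of each deduction measured one at a time,
# holding the other deduction in place.
loss_a = tax(gross_income - deduction_b) - baseline
loss_b = tax(gross_income - deduction_a) - baseline

# Revenue loss of both deductions measured jointly.
joint_loss = tax(gross_income) - baseline

print(round(loss_a), round(loss_b), round(loss_a + loss_b), round(joint_loss))
```

In this toy case the separate estimates sum to less than the joint loss, because together the deductions shelter income that would otherwise fall in the higher bracket; depending on the provisions, the bias can also run the other way.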
OMB also stated that we reported that more attention should be given to tax expenditures due to the severity of the nation’s long-term fiscal imbalance and stated that the Administration rejects any attempt to address the long-term fiscal imbalance with tax increases. To the contrary, we believe that tax expenditures, like other federal programs and activities, should be reevaluated as to their effectiveness and continued relevance as part of a periodic reexamination of what the federal government does and how it does business. Although the long-term fiscal gap heightens the need to ensure resources are not wasted, this reexamination would be appropriate regardless of the fiscal position. Further, OMB’s implication that focusing more attention on tax expenditures would automatically increase taxes is unfounded. As our report clearly stated, the revenues forgone through tax expenditures require higher tax rates to raise any given amount of revenue. Thus, if the evaluations of tax expenditures we call for lead to reducing or eliminating some tax expenditures, the net change after rate adjustments could, depending on overall congressional priorities and preferences, result in tax reductions for many taxpayers. We adjusted sections of our report to reinforce the point that reviewing tax expenditures is consistent with good stewardship of taxpayers’ resources and, depending on other related changes, does not automatically result in tax increases. At the same time, our current and projected fiscal imbalance serves to reinforce the need for reassessing all activities. We also added a recent estimate calculated by the Department of the Treasury for the President’s Advisory Panel on Federal Tax Reform showing that a tax system in which essentially all tax expenditures were eliminated could raise the same amount of revenue as the current tax system while lowering tax rates by about a third.
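The arithmetic behind a revenue-neutral base-broadening estimate of that kind can be sketched with invented figures; the base, rate, and broadening factor below are placeholders, not Treasury’s actual calculations. Enlarging the taxable base lets the same revenue be raised at a proportionately lower rate.

```python
# Stylized arithmetic (invented figures) of revenue-neutral base
# broadening: eliminating tax expenditures enlarges the taxable base,
# so the same revenue can be raised at a lower rate.

current_base = 6.0e12        # hypothetical taxable income under current law
current_rate = 0.20          # hypothetical average effective rate
revenue_target = current_base * current_rate

# Suppose repealing tax expenditures adds 50 percent to the taxable base.
broadened_base = current_base * 1.5

# Rate needed to raise the same revenue on the broader base.
revenue_neutral_rate = revenue_target / broadened_base

# Proportional rate reduction relative to the current rate.
rate_cut = 1 - revenue_neutral_rate / current_rate

print(f"revenue-neutral rate: {revenue_neutral_rate:.3f}, cut: {rate_cut:.0%}")
```

With these placeholder numbers the revenue-neutral rate falls by one third, mirroring the general shape (though not the actual inputs) of the estimate cited above.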
OMB also stated that information on tax expenditures is not useful for budgeting and that tax expenditures have never been included in the congressional budget process. To the contrary, the tax expenditure list is legally required under the 1974 Congressional Budget Act and, before the expiration of the Budget Enforcement Act in 2002, PAYGO procedures restricted Congress’ ability to add new tax expenditures or expand existing ones unless offsetting funds could be raised. Whereas OMB favors reporting tax expenditures separately from the rest of the budget, we believe an integrated presentation is also useful to show the relative magnitude of tax expenditures compared to spending and credit programs across mission areas. This is not a recommendation to equate tax expenditures with outlays. We are recommending that OMB focus on integrating tax expenditures in the President’s budget presentation to show a truer picture of federal support in a mission area and on including tax expenditures under budget and performance review processes that apply to related spending programs. As our report stated, OMB began presenting tax expenditure sums alongside outlays and credit activity for each budget function in the federal budget from fiscal year 1998 through fiscal year 2002, but has discontinued the practice. Finally, OMB commented that it would be unwise to follow our recommendations for the conceptual and methodological reasons mentioned above, as well as for other practical reasons. We address OMB’s comments on our recommendation to resume including tax expenditures in the budget together with related outlay programs in the paragraph above. Regarding our recommendation to develop a framework for conducting performance reviews of tax expenditures, OMB stated that it has some potential promise but it is clearly a job for Treasury because no other agency has access to the data that would be needed to conduct such an analysis. 
However, we are not recommending that OMB be responsible for conducting the actual reviews, just for developing and overseeing the implementation of a framework for conducting the performance reviews. OMB would not need to have access to taxpayer data to manage the process. In addition, we recognize the challenges in using taxpayer data, which is the reason we recommend that OMB work in consultation with Treasury to develop and implement the framework. Also, our report recognizes the scarcity of evaluation resources, and we suggest factors that would be useful in taking a strategic approach to selecting and prioritizing tax expenditure evaluations. To make this point more apparent in our report, we added a fourth requirement to our recommendation to identify any additional resources that may be needed for tax expenditure reviews. OMB said that our recommendation to develop clear and consistent guidance to Executive Branch agencies on how to incorporate tax expenditures in GPRA reports would be counterproductive because agencies do not administer the tax code, and they should not be saddled with responsibility for something they do not control. OMB misstated our recommendation; this report does not recommend that agencies be responsible for administering parts of the tax code. As the tax expenditure chapter in OMB’s Analytical Perspectives volume of the fiscal year 2006 budget states, tax expenditures may also contribute to achieving goals identified in Federal agencies’ annual and strategic plans for their programs and activities. The aim of our recommendation was to provide a more cohesive perspective of the government’s programs and strategies—including tax expenditures—to address common federal goals.
As our report states, in passing the Government Performance and Results Act, the Senate Governmental Affairs Committee called for inclusion of tax expenditures in the GPRA process so that more and better information would be available on the performance of tax expenditures themselves and so that the effects of tax expenditures would be considered in achieving federal performance goals. Our recommendation is consistent with this intent. Regarding our recommendation to require tax expenditures to be included in the PART process and any future such budget and performance review processes, OMB stated that it has no current plans to implement any of the recommendations in this report, but said that other tax expenditures may be evaluated with the PART in the future. OMB also stated that the Department of the Treasury manages the tax code, so any new PARTs for tax expenditures would generally mean more PARTs for Treasury. Within the Executive Branch, major responsibility for management of the tax code was indeed given to the Department of the Treasury. Given that the Administration is aiming to assess nearly 100 percent of federal outlay programs under PART, Treasury would face less scrutiny than other agencies to the extent that tax expenditures are not similarly evaluated under PART. Our recommendation merely calls for bringing tax expenditures in line with the performance management attention PART gives to outlay programs. Further, if our second recommendation to develop an evaluation framework for tax expenditures is implemented, OMB would be better positioned to target crosscutting reviews of related clusters of programs and activities. We are sending copies of this report to the relevant congressional committees and other interested parties. Copies of this report will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Mike Brostek at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. A decade ago, we examined the growth in tax expenditures and identified opportunities to focus policymakers’ attention on tax expenditures. To assist the Congress in reexamining the base of federal programs and policies, a process critical to achieving fiscal discipline in the budget as a whole, this report updates our 1994 work. Specifically, this report describes (1) how tax expenditures have changed over the past three decades in reported number and aggregate size and in comparison to federal spending, revenue, and the economy; and (2) the progress that has been made since 1994 in how the Executive Branch scrutinizes tax expenditures. To meet each of our objectives, we relied on past GAO work, agency and congressional reports, and relevant tax expenditure literature. In addition, we interviewed agency officials from the Department of the Treasury’s Office of Tax Analysis; the Internal Revenue Service’s Office of Research, Analysis, and Statistics; the Office of Management and Budget (OMB); congressional staff from the Joint Committee on Taxation (JCT); and experts on tax policy to obtain a greater understanding of information gained through our literature review and to corroborate findings. To identify how tax expenditures have changed over the past three decades in number and size in terms of aggregate revenue loss and outlay-equivalents, we analyzed tax expenditure estimates developed by Treasury and reported by OMB in the Federal Budget’s Special Analyses, Appendixes, and Analytical Perspectives for fiscal years 1974 to 2004. Tax expenditure estimates are reported for individual and corporate taxpayer groups and categorized by budget function.
We chose the tax expenditure estimates reported in the budget for our analysis because Treasury develops (1) revised estimates based on changes in tax policy and economic activity for the year prior to the reported fiscal budget year (i.e., retrospective estimates), and (2) outlay-equivalent estimates that facilitate comparison to federal spending. Even though Treasury’s estimates are retrospective, the final reported numbers are still estimates and may not reflect additional policy changes. Although the tax expenditure concept can also be applied to other kinds of taxes, such as excise taxes, this report only covers tax expenditures for the federal income tax system. We determined the number of tax expenditures for each fiscal year by adding the number of items in the list of tax expenditures reported by Treasury for each fiscal year. In certain fiscal years, Treasury reported estimates for select tax expenditures as two line items on their list, such as the expensing of exploration and development costs, which was split out as two tax expenditures, one pertaining to oil and gas and one for other fuels between fiscal years 1980 and 1995. To be consistent with Treasury’s reporting of these tax expenditures in years when they were listed as only one item, we summed the revenue loss estimates in the years they were listed as two tax expenditures and counted them as one. To determine the number of distinct tax expenditures across fiscal years, we reviewed the names and descriptions for each tax expenditure reported by OMB in the Budget’s Special Analyses, Appendixes, and Analytical Perspectives for fiscal years 1974 to 2004. We conducted two independent reviews to verify that our list contained only distinct tax expenditures across fiscal years. To assist in our review, we also relied on the descriptions reported in the Congressional Research Service’s compendiums on tax expenditures and legislative histories of certain tax expenditures, as needed. App. 
IV contains our compilation of all tax expenditures reported by Treasury between 1974 and 2004. We aggregated tax expenditure revenue loss estimates to measure growth over time. We also summed the revenue loss estimates by their reported corporate and individual basis to see how the amounts differed between the two taxpayer groups. We converted all sums for each fiscal year into constant dollars to adjust for inflation using the chain price indexes reported in the fiscal year 2006 federal budget. While summing tax expenditure estimates provides a useful perspective, aggregate numbers should be interpreted carefully due to interactive effects between tax expenditures and potential behavioral changes. To identify how tax expenditures have changed over the past three decades in comparison to federal spending, revenue, and the economy, we summed the outlay-equivalent estimates for each fiscal year and compared them to the federal budget position in aggregate. We used historical data on spending drawn from OMB historical tables and compared them to the sums for tax expenditure outlay-equivalent estimates in dollar value and as a percentage of GDP. We also used outlay-equivalent estimates to compare tax expenditure trends over time by budget function. Finally, we used historical data on spending by budget function from OMB historical tables and compared them to the sum of tax expenditures by budget functions for fiscal year 2004. We worked with Treasury officials to verify any discrepancies we found in using the tax expenditure estimates and modified our data accordingly. To determine the amount of progress since 1994 in how the federal government scrutinizes tax expenditures, we examined actions taken to implement our earlier recommendations to OMB intended to encourage more informed policy debate about tax expenditures and to stimulate joint review of related tax and spending programs. 
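The aggregation and constant-dollar conversion steps described earlier in this appendix, namely summing the revenue loss estimates across taxpayer groups and deflating the sums with chain price indexes, can be sketched as follows; the estimates and index values are invented placeholders, not the actual budget data.

```python
# Sketch (with placeholder numbers) of the aggregation steps described
# in this appendix: sum revenue loss estimates across taxpayer groups,
# then restate the sums in constant dollars with a chain price index.
# As the report cautions, summing individual estimates ignores
# interaction effects among tax expenditures.

# Hypothetical revenue loss estimates (billions of nominal dollars),
# keyed by fiscal year and taxpayer group.
estimates = {
    1995: {"individual": 350.0, "corporate": 60.0},
    2004: {"individual": 600.0, "corporate": 100.0},
}

# Hypothetical chain price index levels (base year 2004 = 1.00).
chain_index = {1995: 0.80, 2004: 1.00}

def real_total(year, base_year=2004):
    """Sum both taxpayer groups and restate in base-year dollars."""
    nominal = sum(estimates[year].values())
    return nominal * chain_index[base_year] / chain_index[year]

for year in sorted(estimates):
    print(year, round(real_total(year), 1))
```

Here the 1995 nominal sum of 410 is restated as 512.5 in 2004 dollars, so growth between the two years can be compared net of inflation.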
We recommended (1) developing a structure for conducting reviews of tax expenditures’ performance, (2) conducting case studies to assess the performance review structure, (3) presenting tax expenditures in the annual budget, and (4) incorporating tax expenditures into the annual budgetary review process. We reviewed relevant literature, interviewed agency officials and tax policy experts, and relied on previous GAO work to determine the progress that has been made in implementing our recommendations. We reviewed efforts to include tax expenditures under the Government Performance and Results Act’s statutory framework for strategic planning, performance measurement, and program evaluation. We also considered activities to include tax expenditures under OMB’s Program Assessment Rating Tool (PART) process. To describe how tax expenditures are measured and reported, we reviewed, but did not verify, the procedures used by the Joint Committee on Taxation (JCT) and Treasury to estimate the magnitude of revenues forgone through tax expenditures or, in Treasury’s case, their outlay-equivalent values as well. As described in app. III, JCT and Treasury use different conceptual approaches to identify the provisions of the tax code they label as tax expenditures. In addition, their estimating models, macroeconomic assumptions, and choice of data cause their revenue loss estimates to differ somewhat. We conducted our work between August 2003 and July 2005 in accordance with generally accepted government auditing standards. The agency comments and evaluation section of this report discusses our overall comments on the Office of Management and Budget’s letter dated September 2, 2005. The following are our additional comments on issues raised by OMB. 1. See the agency comments and evaluation section of this report. 2.
While the nation’s current and projected fiscal imbalance provides an additional impetus for engaging in such a review and reassessment, we believe tax expenditures should be reviewed and evaluated for efficiency and effectiveness even if there were no fiscal imbalance. We did not suggest that extra attention to tax expenditures would eliminate the long-term fiscal imbalance. As our report stated, substantive reform of Social Security and the major health programs remains critical to recapturing our fiscal flexibility. 3. Our report cites several examples of changes in the presentation of tax expenditures over time. For example, starting with the fiscal year 1999 budget, OMB began including a section outlining possible performance measures and issues in evaluating tax expenditures. This section was a first step in responding to congressional expectations for the executive branch to provide information about how tax expenditures meet their objectives and affect the performance of other federal programs. 4. We do not take for granted that tax expenditures are similar to spending programs. We devote a section of our background to describing how tax expenditures differ from, may substitute for, and work in conjunction with other spending programs to achieve policy objectives. Also, see the agency comments and evaluation section of this report. 5. In our report, we recommend adding useful comparisons to spending programs to the budget document, while not detracting from or changing in any way how the tax expenditure lists can be used to think about tax policies. 6. To the contrary, throughout our draft report we note and even emphasize the limitations in the methodology of summing the individual tax expenditures.
In fact, to ensure that the limitations of summing tax expenditures were clearly acknowledged, we discussed the limitations in (1) the introduction of our methodology, (2) a footnote in the Results in Brief section, (3) the section devoted to explaining the limitations, which precedes our presentation of the trends in tax expenditures over time, and (4) a footnote for all 10 figures where we summed the tax expenditure estimates. In addition, we report a quantitatively significant example of interaction effects among tax expenditure estimates, which was developed by Treasury at our request. The example shows that the revenue loss calculated assuming the simultaneous elimination of several itemized deductions would be less than the sum of the revenue loss estimates for each itemized deduction, each calculated assuming the rest of the tax code was unchanged. As our report stated, tax expenditure estimates produced by Treasury and JCT are the best and only available data to measure the value of tax expenditures and make comparisons to other spending programs. In our view, summing the estimates provides perspective on the use of tax expenditures as a policy tool and represents a useful gauge of the general magnitude of government subsidies carried out through the tax code. Our report also cites several other researchers who have summed tax expenditure estimates to help gain perspective on the use of this policy tool and examine trends in the aggregate growth of tax expenditure estimates over time. 7. In this report, we provide a number of examples of studies we and others have done of tax expenditures; our reviews often are at the request of Congress, and OMB examined two tax expenditures under the Administration’s PART initiative. We also provide illustrations of the major legislation that has affected tax expenditures since the late 1970s.
However, we stand by our statement that tax expenditures are not subject, or not effectively subject, to several major processes that apply to outlay programs, processes that increase the likelihood of reviews and, perhaps more importantly, increase the quantity and quality of information available to policymakers in determining whether and how to modify tax expenditures. Developing such enhanced information for policymakers and displaying it in a manner that facilitates their understanding of the total federal effort to address functionally related issues (e.g., ensuring adequate housing or stimulating economic development) is the thrust and intent of our report and recommendations. 8. We disagree with OMB’s characterization that the current tax expenditure presentation is “more than adequate” for the public and policymakers. We realize that the current budget volume is not organized by separate budget functions; however, OMB had previously presented revenue loss sums for tax expenditures alongside outlays and credit activity for each budget function in the federal budget from fiscal year 1998 through fiscal year 2002. As we state in our report, these summary tables were a useful starting point in highlighting the relative magnitude of tax expenditures and related outlay programs across mission areas. In addition, our current recommendation gives OMB latitude on how to present tax expenditures alongside outlay programs with similar purposes. Further, Congress has shown significant interest in reviewing all tools within a mission area. For example, recent congressionally requested studies we conducted have reviewed all tools—including tax expenditures—used in the postsecondary education and energy areas. Also, see comment 2 as well as the agency comments and evaluation section of this report. 9. We disagree with the opinion, implicitly expressed by OMB, that it would not have a leadership role regarding our second recommendation.
First, we are not recommending that OMB be responsible for conducting the actual reviews, but rather that it develop and oversee the implementation of a framework for conducting the performance reviews. OMB would not need access to taxpayer data to manage the process. Second, we recognize the challenges in using taxpayer data, which is why we recommend that OMB work in consultation with Treasury to develop and implement the framework. Third, taxpayer data may not be the only source of performance information on tax expenditures, which is why we recommend that the framework address the lack of credible performance information on tax expenditures. Finally, our report recognizes the scarcity of evaluation resources, and we suggest taking a strategic approach that selects and prioritizes tax expenditure evaluations based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities, and that OMB select similar programs for review in the same year to facilitate comparisons and tradeoffs. To make this point more apparent, we added a fourth element to our recommendation: in developing a framework for evaluating tax expenditures, OMB and Treasury are to identify any additional resources that may be needed for tax expenditure reviews. 10. OMB misstated our recommendation. This report does not recommend that agencies be responsible for administering parts of the tax code. As we state in our report, in passing the Government Performance and Results Act, the Senate Governmental Affairs Committee called for inclusion of tax expenditures in the GPRA process so that more and better information would be available on the performance of tax expenditures themselves and so that the effects of tax expenditures would be considered in achieving federal performance goals. Our recommendation is consistent with this intent. Also, see the agency comments and evaluation section of this report. 11.
Our recommendation aims to bring tax expenditures in line with the performance management attention PART gives to outlay programs. Our report discussed the two cases where OMB has applied PART to tax expenditures—the EITC compliance initiative and the New Markets Tax Credit. Within the executive branch, the Department of the Treasury has major responsibility for managing programs implemented through the tax system. Given that over the next 2 years the Administration plans to assess nearly all remaining executive branch outlay programs, Treasury would face relatively less scrutiny than other agencies to the extent that the tax expenditure tool is not similarly evaluated under PART. Although OMB disagreed with our recommendations as a whole, we are encouraged that OMB is still considering how other tax expenditures could be evaluated with PART in the future. In moving forward, PART reviews of tax expenditures in isolation might be revealing, but we would urge a more comprehensive and crosscutting approach to assessing all tools—including tax expenditures—related to common goals. To understand the trends in the size of tax expenditures, it is helpful to understand how tax expenditures are measured and reported annually. This appendix explains the baselines used to distinguish tax expenditures from other provisions in the tax code and the different methods that are used to measure tax expenditures. The Congressional Budget Act of 1974 defines tax expenditures as “those revenue losses attributable to provisions of the federal tax laws which allow a special exclusion, exemption, or deduction from gross income or which provide a special credit, a preferential rate of tax, or a deferral of tax liability.” Both the congressional Joint Committee on Taxation (JCT) and the Department of the Treasury’s Office of Tax Analysis annually compile a list of tax expenditures and estimates of their cost.
The Department of the Treasury’s (Treasury) tax expenditure estimates are included in the annual federal budget by the Office of Management and Budget (OMB). While, in general, the tax expenditure lists published annually by JCT and Treasury are similar, they differ somewhat in the number of tax expenditures reported and the estimated revenue loss for particular expenditures. Part of this difference arises because the organizations use different income tax baselines to determine tax expenditures. To determine the tax code provisions that satisfy the definition of a tax expenditure, the existing tax law must be compared or measured against an alternative set of tax rules that represent a baseline. The Congressional Budget Act did not define a specific baseline tax structure. As a result, the Treasury and the staff of the JCT have used judgment to define the different baselines that they use to develop lists of tax expenditures. Before the fiscal year 1983 budget, there were few differences between the Treasury and JCT tax expenditure lists because both organizations used a baseline patterned on a comprehensive income tax, which was deemed the “normal” baseline. JCT has used this baseline consistently over time in producing its tax expenditure list, while Treasury has modified its normal baseline over time and provided alternative baselines. In general, the normal income tax law baseline developed by both Treasury and JCT represents a broad-based income tax on individuals and a separate income tax on corporations. The normal baseline includes income from all sources, including wages and salaries, fringe benefits and other forms of employee compensation, interest income, dividends, realized capital gains, and net income from non-corporate businesses such as sole proprietorships and partnerships. The normal baseline generally allows for personal exemptions, deductions for costs incurred to earn income, and a standard deduction.
Currently, the normal baselines used by Treasury and JCT differ somewhat. Treasury’s normal baseline excludes several provisions that are included in the normal baseline used by JCT, which leads to several tax expenditures being reported by JCT only. For instance, the exclusion of Medicare hospital insurance benefits is included in the JCT list, but this provision is not included in the federal budget tax expenditure list because Treasury views the exclusion of government benefits received in kind as part of its normal baseline. Additional examples of specific tax expenditures reported by only JCT or Treasury can be found at the end of this appendix in table 2. In the fiscal year 1983 budget, Treasury introduced the concept of a reference baseline. The reference baseline used by Treasury is also patterned on a broad-based income tax, but it is closer to existing law because tax expenditures by definition are limited to special exceptions that serve programmatic functions, such as national defense, income security, and education. Under Treasury’s reference baseline, two conditions are necessary for a provision to qualify as a tax expenditure: (1) the provision must be “special” in that it applies to a narrow class of transactions or taxpayers, and (2) there must be a general provision to which the special provision is a clear exception. The set of general tax rules in the existing tax code is used as the standard by which various provisions are determined to be special. Whereas accelerated depreciation was considered a special rule exception under the normal baseline, it was not considered a tax expenditure under the reference baseline, because accelerated depreciation was considered to be the general treatment for the depreciation of business assets. The preferential tax rate for capital gains was included in Treasury’s tax expenditure list based on the general tax code rule that income from any source is considered taxable.
For fiscal year 1983, Treasury began to report estimates using the reference baseline for some tax expenditures and then reinstituted reporting estimates for the normal baseline in fiscal year 1985. This reporting practice has continued to the present. In recent years, Treasury modified treatment of certain provisions under its normal and reference baselines and introduced two supplemental baselines. In the 2005 and 2006 budgets, Treasury excluded the reduced tax rate on dividends and capital gains that have already been taxed under the corporate income tax from the reference law baseline because it believes that since current law taxes these forms of corporate income twice, it is an inappropriate baseline to use. Also, in the 2004, 2005, and 2006 budgets, Treasury changed how it computed the accelerated depreciation tax expenditure under the normal baseline by using a measure of economic depreciation rather than straight-line depreciation as the baseline depreciation method, which was used in prior years. The measure of economic depreciation is generally faster than the straight-line method, so the tax expenditure estimates for accelerated depreciation for fiscal years 2002, 2003, and 2004 (from the 2004, 2005, and 2006 budgets) are smaller than what they would have been if the straight-line depreciation method were used. In addition, in the 2004 budget, Treasury began reporting two supplemental baselines, as discussed in figure 14. Both Treasury and JCT provide estimates of revenue loss, which is the amount of revenue that the government forgoes as the result of each special provision in the tax code. Revenue loss is estimated for each tax expenditure separately by comparing the revenue raised under current law with the revenue that would have been raised if the single provision did not exist, assuming that taxpayer behavior and all other tax and spending provisions remain constant. 
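A toy calculation, using a flat 25 percent rate and hypothetical income and deduction amounts, illustrates this one-provision-at-a-time convention. It also shows the interaction effect Treasury demonstrated for itemized deductions: removing several deductions at once can push a taxpayer onto the standard deduction, so the combined revenue gain is smaller than the sum of the individual estimates.

```python
# Toy illustration (hypothetical, flat 25% rate) of one-at-a-time
# revenue loss estimation and of why the individual estimates can sum
# to more than the revenue gained from eliminating them simultaneously.

RATE = 0.25
STANDARD_DEDUCTION = 10_000
INCOME = 100_000
ITEMIZED = {"mortgage interest": 8_000, "state taxes": 5_000, "charity": 4_000}

def tax(removed=()):
    # Taxpayer claims the larger of remaining itemized deductions or
    # the standard deduction.
    itemized = sum(v for k, v in ITEMIZED.items() if k not in removed)
    deduction = max(itemized, STANDARD_DEDUCTION)
    return (INCOME - deduction) * RATE

baseline = tax()
# Revenue loss of each deduction, estimated one at a time with the
# rest of the (toy) tax code unchanged.
individual_losses = {k: tax(removed=(k,)) - baseline for k in ITEMIZED}
sum_of_losses = sum(individual_losses.values())
# Revenue gained if all three deductions were eliminated at once: the
# taxpayer falls back to the standard deduction, capping the gain.
combined_gain = tax(removed=tuple(ITEMIZED)) - baseline

print(sum_of_losses)  # 4000.0
print(combined_gain)  # 1750.0
```

The gap between the two figures is the interaction effect; actual estimates involve progressive rates and behavioral assumptions, but the mechanism is the same.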
A revenue loss estimate does not represent the amount of revenue that would be gained if a particular tax expenditure were repealed, since repeal of the expenditure would probably change taxpayer behavior in some way that would affect revenue. Treasury and JCT tax expenditure lists also differ because each organization uses a different de minimis amount, that is, a minimum revenue loss threshold below which a tax expenditure is not reported. JCT excludes tax expenditures that result in revenue losses of less than $50 million over its 5-year projection period. For instance, the tax exemption for certain small insurance companies was not included in JCT’s January 2005 list of tax expenditures because the estimated revenue loss was below its de minimis amount. Treasury rounds all yearly estimates to the nearest $10 million and excludes tax expenditures whose estimates round to zero in each of the 7 years for which it reports estimates. JCT and Treasury revenue loss estimates also differ somewhat due to different economic and technical assumptions. For instance, JCT and Treasury use different sources for the macroeconomic assumptions incorporated in their revenue loss estimates. JCT uses CBO macroeconomic assumptions in its tax expenditure projections, and Treasury uses assumptions based on consultations with OMB and the Council of Economic Advisers, the same assumptions used for the President’s budget. In addition to projecting future revenue losses, Treasury also reports re-estimates for the past fiscal year, which incorporate changes in tax policy and reflect more up-to-date economic and taxpayer data. Table 3 compares tax expenditure reporting by JCT and Treasury.
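The two de minimis screens described above can be sketched as simple predicates; the function names and dollar figures below are ours, for illustration only:

```python
# Illustrative sketch of the two de minimis screens described in the
# text: JCT's 5-year $50 million floor and Treasury's yearly rounding
# rule. Amounts are in millions of dollars.

def jct_reports(five_year_losses_millions):
    # JCT omits a tax expenditure whose projected revenue loss is less
    # than $50 million over its 5-year projection period.
    return sum(five_year_losses_millions) >= 50

def treasury_reports(seven_year_losses_millions):
    # Treasury rounds each yearly estimate to the nearest $10 million
    # and drops items that round to zero in all 7 reported years.
    return any(round(x / 10) * 10 != 0 for x in seven_year_losses_millions)

print(jct_reports([9, 9, 9, 9, 9]))             # False: $45M over 5 years
print(treasury_reports([4, 4, 4, 4, 4, 4, 6]))  # True: one year rounds to $10M
```

The same small provision can therefore appear on one list and not the other, which is one mechanical reason the two lists differ in length.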
In addition to revenue loss estimates, Treasury also measures tax expenditures in terms of their outlay-equivalent value, which allows the cost of a tax expenditure to be compared with a direct federal outlay, were each to provide the same benefit to the taxpayer. JCT does not produce outlay-equivalent estimates. The underlying economic assumptions used for the outlay-equivalent and revenue loss estimates are the same. However, to estimate outlay-equivalents, Treasury will increase—“gross up”—the revenue loss estimate by the average marginal tax rate that applies to the relevant taxpayers (the taxpayers that take the particular credit or deduction or earn the income that is excluded from tax). The result is an estimate of the amount of direct spending that would be needed to leave the relevant taxpayer with the same amount of benefit, after he or she paid tax on the amount received through the spending, as the taxpayer would get from the tax provision itself. For example, the outlay-equivalent estimate for the housing and meal allowances for military personnel tax expenditure reflects the additional pre-tax income that military personnel would have to be paid to raise their income after federal taxes by the amount of the benefits, so that it can be compared with other defense outlays on a consistent basis. An exception to this general rule of increasing the revenue loss estimate is made for tax expenditures that are believed to reduce the price of particular goods and services. In this case no gross up is made because a spending program that led to the same price reduction would not increase the tax liability of the taxpayer. For instance, revenue loss estimates for accelerated depreciation on rental housing and state prepaid tuition do not differ from the outlay-equivalent estimates for these tax expenditures. 
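The gross-up just described amounts to a one-line calculation: the outlay-equivalent is the pre-tax payment whose after-tax value equals the taxpayer's benefit from the provision. The rate and amounts in this sketch are hypothetical:

```python
# Hypothetical sketch of Treasury's "gross up": the outlay-equivalent
# is the direct payment that, after tax, leaves the taxpayer with the
# same benefit as the tax provision itself.

def outlay_equivalent(revenue_loss, avg_marginal_rate, gross_up=True):
    # For provisions believed to reduce the price of particular goods
    # and services, no gross up is made and the two measures coincide.
    if not gross_up:
        return revenue_loss
    return revenue_loss / (1.0 - avg_marginal_rate)

# A $1.0 billion revenue loss, taxpayers facing an average 20% marginal rate:
oe = outlay_equivalent(1.0, 0.20)
print(oe)  # 1.25: $1.25 billion of taxable outlays would be needed
```

Multiplying the result by one minus the rate recovers the original benefit, which is the consistency the gross-up is designed to achieve.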
Outlay-equivalents can also differ from revenue loss estimates because they are calculated based on an even flow of virtual payments over the year to make the estimates comparable to actual outlay programs. Even for those tax expenditures that do not require a calculated adjustment, differences between the revenue losses and outlay-equivalents can occur solely because of differences in timing factors. Although revenue loss estimates can be affected by the collection patterns of the corporate and personal income taxes, the cash flow of direct spending programs can differ widely from the annual tax collection cycle. Of the 146 tax expenditures reported in the fiscal year 2006 budget, 91 were “grossed up” for the outlay-equivalent estimate, with the implied rate varying across different provisions. Just as there is debate over which tax provisions should be listed as tax expenditures, tax experts do not always agree on whether specific tax expenditures should be grossed up. It may not be apparent to observers why the outlay-equivalent and revenue loss estimates are the same for some tax expenditures and differ for others. Other estimates of tax expenditures produced by JCT and Treasury also may differ from revenue loss estimates. These supplemental estimates are discussed in figure 15. Although there are differences in how Treasury and JCT develop and measure tax expenditures, the sums of revenue loss estimates from the two lists have followed broadly similar trends in the past. Figure 16 compares the sum of revenue loss estimates for JCT and Treasury since the last comprehensive tax reform, when the Tax Reform Act of 1986 was adopted. Since fiscal year 2002, the trends in the sums of the two sets of revenue loss estimates have diverged.
Since the fiscal year 2004 budget, Treasury’s estimates of dividends and capital gains tax expenditures are lower than JCT’s, at least in part, because Treasury changed its definition of the tax expenditures to reflect the reduced tax rates only on dividends and capital gains from sources other than corporate equity. Treasury also redefined the accelerated depreciation tax expenditures under the normal baseline to reflect depreciation relative to a replacement cost basis, rather than the historic cost basis previously used. Table 4 lists the tax expenditures and their associated revenue loss estimates that were reported by both Treasury and JCT for fiscal year 2004. The table details the number and size of tax expenditure estimates between the two lists. For example, in the National Defense budget function, the revenue loss estimate for the exclusion of benefits and allowances to armed forces personnel was estimated at $2.5 billion by Treasury and $2.7 billion by JCT. JCT also reported two tax expenditures not listed by Treasury. To identify how tax expenditures have changed over the past three decades in number and size, in terms of aggregate revenue loss, we analyzed the list of tax expenditures reported by the Department of the Treasury (Treasury) in the Budget’s Special Analyses, Appendixes, and Analytical Perspectives for fiscal years 1974 to 2004. The tax expenditures reported by Treasury during this period are listed in table 6. Adjusted gross income (AGI): All income subject to taxation under the individual income tax after subtracting above-the-line deductions, such as certain contributions for individual retirement accounts and alimony payments. Personal exemptions and the standard or itemized deductions are subtracted from AGI to determine taxable income. Alternative Minimum Tax (AMT): A separate tax system that applies to both individual and corporate taxpayers.
It parallels the regular individual income tax system but with different rules for determining taxable income, different tax rates for computing tax liability, and different rules for allowing the use of tax credits. Baseline: A benchmark for measuring the budgetary effects of proposed changes in federal revenues or spending, or a benchmark for identifying and measuring exceptions to the basic provisions of the tax structure. CBO baseline: CBO’s estimate of spending, revenue, the deficit or surplus, and debt held by the public during a fiscal year under current laws and current policy. For revenues and mandatory spending, CBO projects the baseline under the assumption that present laws continue without change. For discretionary spending subject to annual appropriations, CBO is required to adjust the current year’s discretionary budget authority to reflect inflation, among other factors. Comprehensive income tax baseline: This baseline, also called Haig-Simons income, is the real, inflation-adjusted accretion to wealth arising between the beginning and ending of the year. It includes all accretions to wealth, whether or not realized, whether or not related to a market transaction, and whether a return to capital or labor. Inflation-adjusted capital gains would be included in comprehensive income as they accrue. Consumption tax baseline: A broad-based consumption tax is a combination of an income tax plus a deduction for net saving. Many current tax expenditures related to preferential taxation of capital income and savings would not be considered tax preferences under a consumption tax (e.g., capital gains), but preferences unrelated to broad-based saving or investment incentives would remain tax preferences under a consumption baseline.
Normal income tax baseline: The Budget Act did not specify the baseline income tax against which tax preference provisions should be measured, and deciding whether provisions are exceptions from the normal baseline is a matter of judgment. The normal income tax baseline is meant to represent a practical and broad-based income tax that reflects the general and widely applicable provisions of the current federal income tax. For the individual income tax, the Joint Committee on Taxation’s (JCT) normal tax baseline includes one personal exemption for each taxpayer, one for each dependent, the standard deduction, the existing tax rate schedule, and deductions for investment and employee business expenses. Itemized deductions that are not necessary for the generation of income but exceed the standard deduction level are classified as tax expenditures. Very similar in scope to JCT’s normal income tax baseline, Treasury's baseline is patterned on, but allows several major departures from, a comprehensive income tax, where income is defined as the sum of consumption and the change in net wealth during a given period. Reference tax law baseline: The reference baseline is closer to existing tax law and is also patterned on, but still allows several major departures from, a comprehensive income tax. Thus fewer tax provisions are considered tax preferences under the reference tax baseline than under the normal tax baseline. These include the lower tax rate for certain corporations, preferential rates on capital gains, accelerated depreciation, deferral of tax on income from controlled foreign corporations, etc. Budget function: One of 20 broad categories into which budgetary resources are grouped so that all budget authority and outlays can be presented according to the national interests being addressed. There are 17 broad budget functions, including national defense, international affairs, energy, agriculture, health, income security, and general government. 
Three other functions—net interest, allowances, and undistributed offsetting receipts—are included to complete the budget. De minimis rule: The level of revenue loss below which a revenue loss estimate is not reported for a tax preference. Direct loans: A disbursement of funds by the government to a nonfederal borrower under a contract that requires the repayment of such funds with or without interest. Discretionary spending: Outlays controlled by appropriation acts, other than those that fund mandatory programs. Entitlement authority: Authority to make payments (including loans and grants) for which budget authority is provided in advance by appropriations acts to any person or government if, under the provisions of the law containing such authority, the U.S. government is obligated to make the payments to persons or governments who meet the requirements established by law. Government Performance and Results Act (GPRA): Enacted in 1993, GPRA, also known as the Results Act, is intended to improve the efficiency and effectiveness of federal programs by requiring federal agencies to develop strategic plans, annual performance plans, and annual program performance reports. Grants: A federal financial assistance award making payment in cash or in kind for a specified purpose. The federal government is not expected to have substantial involvement with the state or local government or other recipient while the contemplated activity is being performed. Gross domestic product (GDP): The value of all final goods and services produced within the borders of a country such as the United States during a given period. The components of GDP are consumption expenditures (both personal and government), gross investment (both private and government), and net exports. Mandatory spending: Also known as direct spending.
Mandatory spending includes outlays for entitlement authority (for example, the food stamp, Medicare, and veterans’ pension programs), payment of interest on the public debt, and nonentitlements such as payments to the states from Forest Service receipts. By defining eligibility and setting the benefit or payment rules, the Congress controls spending for these programs indirectly rather than directly through appropriations acts. Tax expenditure: A revenue loss attributable to a provision of the federal tax laws that grants special tax relief designed to encourage certain kinds of behavior by taxpayers or to aid taxpayers in special circumstances. The Congressional Budget and Impoundment Control Act of 1974 lists six types of tax expenditures: exclusions, exemptions, deductions, credits, preferential tax rates, and deferral of tax liability. Preferential tax rates: A reduction of the tax rate on some forms of income, such as capital gains. Tax credit: An amount that offsets or reduces tax liability. When the allowable tax credit amount exceeds the tax liability, and the difference is paid to the taxpayer, the credit is considered refundable. Otherwise, the difference can be (1) allowed as a carryforward against future tax liability, (2) allowed as a carryback against past taxes paid, or (3) lost as a tax benefit. Tax deduction: An amount that is subtracted from the tax base before tax liability is calculated. Deductions claimed before and after the adjusted gross income line on the Form 1040 are sometimes called “above-the-line” and “below-the-line” deductions, respectively. Tax deferral: A provision allowing taxpayers to reduce current tax liability by delaying recognition of some income or accelerating some deductions otherwise attributable to future years. This can increase the taxpayer’s future tax liability, as the deferred income is eventually recognized, or reduce the deductions available on future income. 
Tax exclusion: An item of income that would otherwise constitute a part of the taxpayer’s gross income, but is excluded under a specific provision of the tax code. Exclusions generally do not appear on the U.S. Individual Income Tax Return (Form 1040), and excluded income is not reflected in total reported income. Tax exemption: A reduction in taxable income offered to taxpayers because of their status or circumstances. Tax expenditure revenue loss estimate: The measure of the revenue cost of each tax expenditure. The revenue cost is the difference between tax liability under current law and the tax liability that would result if taxes were recomputed without that tax expenditure. Revenue cost estimates assume (1) economic behavior does not change, and (2) all other tax expenditures remain in the code unchanged. Tax expenditure outlay-equivalent estimate: The amount of budget outlays that would be required to provide the taxpayer the same after-tax income as would be received through the tax provision. The outlay-equivalent measure allows the cost of a tax preference to be compared with a direct federal outlay, were each to provide the same benefit to the taxpayer. Unified budget: A comprehensive budget in which receipts and outlays from federal funds and trust funds are consolidated; generally a cash or cash equivalent measure in which receipts are recorded when received and expenditures are recorded when paid, regardless of the accounting period in which the receipts are earned or the costs incurred. In addition to the individual named above, MaryLynn Sergent, Assistant Director, as well as Eric Gorman, Edward Nannenhorn, Anne Stevens, and Lynn Wasielewski made key contributions to this report. Other individuals also contributing to this report included Ellen Grady, Susan Irving, Shirley Jones, Donna Miller, Amy Rosewarne, and William Trancucci. Attanasio, Orazio P. and Thomas DeLeire.
“The Effect of Individual Retirement Accounts on Household Consumption and National Saving,” The Economic Journal, Vol. 112 (July 2002): 504-538. Brixi, Hana Polackova, Christian M.A. Valenduc, and Zhicheng Li Swift. Tax Expenditures—Shedding Light on Government Spending through the Tax System: Lessons from Developed and Transition Economies. Washington, D.C.: The World Bank, October 2003. Burman, Leonard E. “Is the Tax Expenditure Concept Still Relevant?” The National Tax Journal, Vol. LVI, No. 3 (September 2003). Burman, Leonard E. and Jonathan Gruber. “Tax Credits for Health Insurance,” Tax Policy Center Discussion Paper No. 19. Washington, D.C.: The Tax Policy Center, June 2005. Carasso, Adam and Eugene Steuerle. “Tax Expenditures: Revenue Loss Versus Outlay Equivalents.” Tax Notes, Vol. 7 (October 13, 2003). Datta, Lois-ellin and Patrick G. Gasso. Evaluating Tax Expenditures: Tools and Techniques for Assessing Outcomes. San Francisco, Calif.: American Evaluation Association, 1998. National Taxpayer Advocate. 2004 Annual Report to Congress. Washington, D.C.: December 31, 2004. President’s Advisory Panel on Federal Tax Reform. “Understanding Tax Bases: Staff Presentation” (presentation before the Panel’s public meeting, Washington, D.C., July 20, 2005), http://taxreformpanel.gov/meetings/docs/understanding_tax_bases.ppt (downloaded September 13, 2005). Rivlin, Alice M. and Isabel Sawhill. Restoring Fiscal Sanity 2005: Meeting the Long-Run Challenge. Washington, D.C.: The Brookings Institution, 2005. Salamon, Lester M. The Tools of Government: A Guide to the New Governance. New York, N.Y.: Oxford University Press, Inc., 2002. Sheils, John and Randall Haught. “The Cost of Tax-Exempt Health Benefits in 2004.” Health Affairs (February 25, 2004). Slemrod, Joel and Jon Bakija. Taxing Ourselves: A Citizen's Guide to the Debate Over Taxes, 3rd Edition. Cambridge, Mass.: The MIT Press, 2004. Toder, Eric J. "Tax Cuts or Spending-Does It Make a Difference?"
National Tax Journal, Vol. LIII, No. 3, Part 1 (September 2000). Toder, Eric J. The Changing Composition of Tax Incentives: 1980-99. Washington, D.C.: The Urban Institute, March 1, 1999. U.S. Congressional Budget Office. Budget Options. Washington, D.C.: February 2005. U.S. Congressional Budget Office. The Budget and Economic Outlook: An Update. Washington, D.C.: August 2005. U.S. Congressional Budget Office. The Long-Term Budget Outlook. Washington, D.C.: December 2003. U.S. Congress. Joint Committee on Taxation. Estimates of Federal Tax Expenditures For Fiscal Years 2005-2009. JCS-1-05. Washington, D.C.: January 12, 2005. U.S. Congress. Joint Committee on Taxation. Options to Improve Tax Compliance and Reform Tax Expenditures. JCS-2-05. Washington, D.C.: January 27, 2005. U.S. Congress. Joint Economic Committee. Tax Expenditures: A Review and Analysis. Washington, D.C.: August 1999. U.S. Congress. Senate Committee on the Budget. Tax Expenditures: Compendium of Background Material on Individual Provisions. S. Prt. 108-54. Washington, D.C.: December 2004. U.S. Congress. Senate Committee on Governmental Affairs. Government Performance and Results Act of 1993 (P.L. 103-58). Washington, D.C.: June 15, 1993. U.S. Office of Management and Budget. “Tax Expenditures,” Analytical Perspectives, Budget of the United States Government, Fiscal Year 2006. Washington, D.C.: 2005. Understanding the Tax Reform Debate: Background, Criteria, and Questions. GAO-05-1009SP. Washington, D.C.: September 13, 2005. Internal Revenue Service: Status of Recommendations from Financial Audits and Related Financial Management Reports. GAO-05-393. Washington, D.C.: April 29, 2005. Financial Audit: IRS's Fiscal Years 2004 and 2003 Financial Statements. GAO-05-103. Washington, D.C.: November 10, 2004. Tax Administration: IRS Should Reassess the Level of Resources for Testing Forms and Instructions. GAO-03-486. Washington, D.C.: April 11, 2003.
Tax Administration: IRS Should Continue to Expand Reporting on Its Enforcement Efforts. GAO-03-378. Washington, D.C.: January 31, 2003. Tax Administration: Impact of Compliance and Collection Program Declines on Taxpayers. GAO-02-674. Washington, D.C.: May 22, 2002. Tax Deductions: Further Estimates of Taxpayers Who May Have Overpaid Federal Taxes by Not Itemizing. GAO-02-509. Washington, D.C.: March 29, 2002. Tax Policy: Tax Expenditures Deserve More Scrutiny. GAO/GGD/AIMD-94-122. Washington, D.C.: June 3, 1994. Climate Change: Federal Reports on Climate Change Funding Should Be Clearer and More Complete. GAO-05-461. Washington, D.C.: August 25, 2005. Student Aid and Postsecondary Tax Preferences: Limited Research Exists on Effectiveness of Tools to Assist Students and Families Through Title IV Student Aid and Tax Preferences. GAO-05-684. Washington, D.C.: July 29, 2005. Earned Income Tax Credit: Implementation of Three New Tests Proceeded Smoothly, But Tests and Evaluation Plans Were Not Fully Documented. GAO-05-92. Washington, D.C.: December 30, 2004. Community Development: Federal Revitalization Programs are Being Implemented, but Data on the Use of Tax Benefits Are Limited. GAO-04-306. Washington, D.C.: March 5, 2004. Tax Administration: IRS Issued Advance Child Tax Credit Payments on Time but Should Study Lessons Learned. GAO-04-372. Washington, D.C.: February 17, 2004. New Markets Tax Credit Program: Progress Made in Implementation, but Further Actions Needed to Monitor Compliance. GAO-04-326. Washington, D.C.: January 30, 2004. Tax Administration: Information Is Not Available to Determine Whether $5 Billion in Liberty Zone Tax Benefits Will Be Realized. GAO-03-1102. Washington, D.C.: September 30, 2003. Business Tax Incentives: Incentives to Employ Workers with Disabilities Receive Limited Use and Have an Uncertain Impact. GAO-03-39. Washington, D.C.: December 11, 2002.
New Markets Tax Credit: Status of Implementation and Issues Related to GAO’s Mandated Reports. GAO-03-223R. Washington, D.C.: December 6, 2002. Public Housing: HOPE VI Leveraging Has Increased, but HUD Has Not Met Annual Reporting Requirement. GAO-03-91. Washington, D.C.: November 15, 2002. Student Aid and Tax Benefits: Better Research and Guidance Will Facilitate Comparison of Effectiveness and Student Use. GAO-02-751. Washington, D.C.: September 13, 2002. Tax Policy and Administration: Review of Studies of the Effectiveness of the Research Tax Credit. GAO/GGD-96-43. Washington, D.C.: May 21, 1996. Our Nation’s Fiscal Outlook: The Federal Government’s Long-Term Budget Imbalance. http://www.gao.gov/special.pubs/longterm/. 21st Century Challenges: Performance Budgeting Could Help Promote Necessary Reexamination. GAO-05-709T. Washington, D.C.: June 14, 2005. Management Reform: Assessing the President’s Management Agenda. GAO-05-574T. Washington, D.C.: April 21, 2005. Long-Term Fiscal Issues: Increasing Transparency and Reexamining the Base of the Federal Budget. GAO-05-317T. Washington, D.C.: February 8, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Performance Budgeting: Efforts to Restructure Budgets to Better Align Resources with Performance. GAO-05-117SP. Washington, D.C.: February 2005. Opportunities for Congressional Oversight and Improved Use of Taxpayer Funds: Budgetary Implications of Selected GAO Work. GAO-04-649. Washington, D.C.: May 7, 2004. Budget Process: Long-term Focus Is Critical. GAO-04-585T. Washington, D.C.: March 23, 2004. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004. Performance Budgeting: Observations on the Use of OMB’s Program Assessment Rating Tool for the Fiscal Year 2004 Budget. GAO-04-174. Washington, D.C.: January 30, 2004.
Fiscal Exposures: Improving the Budgetary Focus on Long-Term Costs and Uncertainties. GAO-03-213. Washington, D.C.: January 24, 2003. Federal Budget: Opportunities for Oversight and Improved Use of Taxpayer Funds. GAO-03-1030T. Washington, D.C.: July 17, 2003. Performance Budgeting: Current Developments and Future Prospects. GAO-03-595T. Washington, D.C.: April 1, 2003. National Saving: Answers to Key Questions. GAO-01-591SP. Washington, D.C.: June 2001. | Numerous federal programs, policies, and activities are supported through the tax code. As described in statute, tax expenditures are reductions in tax liabilities that result from preferential provisions, such as tax exclusions, credits, and deductions. They result in revenue forgone. This report, done under the Comptroller General's authority, is part of an effort to assist Congress in reexamining and transforming the government to meet the many challenges and opportunities that we face in the 21st century. This report describes (1) how tax expenditures have changed over the past three decades in number, size, and in comparison to federal revenue, spending, and the economy, and (2) the amount of progress made since our 1994 recommendations to improve scrutiny of tax expenditures. Whether gauged in numbers, revenues forgone, or compared to federal spending or the size of the economy, tax expenditures have represented a substantial federal commitment over the past three decades. Since 1974, the number of tax expenditures more than doubled and the sum of tax expenditure revenue loss estimates tripled in real terms to nearly $730 billion in 2004. The 14 largest tax expenditures, headed by the individual income tax exclusion for employer-provided health care, accounted for 75 percent of the aggregate revenue loss in fiscal year 2004. On an outlay-equivalent basis, the sum of tax expenditure estimates exceeded discretionary spending for most years in the last decade. 
For some budget functions, the sum of tax expenditure estimates was of the same magnitude as or larger than federal spending. As a share of the economy, the sum of tax expenditure outlay-equivalent estimates has been about 7.5 percent of gross domestic product since the last major tax reform legislation in 1986. All federal spending and tax policy tools, including tax expenditures, should be reexamined to ensure that they are achieving their intended purposes and designed in the most efficient and effective manner. The nation's current and projected fiscal imbalance serves to reinforce the importance of engaging in such a review and reassessment. Although data and methodological challenges exist, periodic reviews of tax expenditures could establish whether they are relevant to today's needs; if so, how well they have worked to achieve their objectives; and whether the benefits from specific tax expenditures are greater than their costs. Over the past decade, however, the Executive Branch made little progress in integrating tax expenditures into the budget presentation, in developing a structure for evaluating tax expenditure outcomes or in incorporating them under review processes that apply to spending programs, as we recommended in 1994. More recently, the Administration has not used its Program Assessment Rating Tool process to systematically review tax expenditures or promote joint reviews of tax and spending programs sharing common goals. |
Trade preference programs were instituted by several advanced economies in the 1970s as temporary measures to help developing countries pursue economic growth and development by increasing exports and diversifying their economies. Trade preferences, which reduce tariffs, or duties, for many products from eligible countries, are “nonreciprocal,” i.e., granted unilaterally, without requiring the countries receiving them to liberalize their own markets for U.S. goods in return. In 1968, the United States supported a United Nations (UN) resolution to establish a mutually acceptable system of preferences. To permit the implementation of the generalized preferences, in June 1971, developed countries, including the United States, were granted a 10-year waiver from their obligations under the global trading system, now embodied in the WTO, to trade on a most favored nation (MFN) basis. Following the granting of this waiver, developed countries created Generalized System of Preferences (GSP) programs, and Congress enacted the U.S. GSP program in January 1975. At the 1979 conclusion of the Tokyo Round of Multilateral Trade Negotiations, an agreement with no expiration date (known as the Enabling Clause) replaced the waiver. As they developed economically, beneficiary countries were expected to eventually move on from unilateral preferences and participate more fully in the global trade system, including by undertaking reciprocal trade commitments to liberalize their own markets. Under U.S. law, the AGOA program is linked to the GSP program in several important ways. For example, some of the authorities and limitations on what products can be included in the program are based on or relate to GSP, and country eligibility for AGOA involves first being eligible for GSP. But at the WTO, because the Enabling Clause applies to preference regimes that are “generalized, non-reciprocal, and non-discriminatory,” a separate U.S. waiver was sought for AGOA, as well as other regional preference programs.
Despite relatively rapid economic growth in developing countries, developed countries have reformed and extended their programs, and the number of preference programs has grown. The WTO’s 2014 World Trade Report shows that, over the 2000-2012 period, developing countries grew faster than developed countries and now account for nearly half of world exports. Since 2010, Japan, the EU, and Canada have reformed and extended most of their preference programs. Congress recently passed legislation to extend AGOA and other U.S. preference programs by 10 years. Developed countries had already added more generous preferences for least developed countries (LDC) to their GSP programs by the mid-2000s, and some key developing countries have since introduced new trade preferences targeted at LDCs, including some in sub-Saharan Africa. For example, both China and India have established LDC programs in the last decade. These programs are part of broader strategies to strengthen commercial ties, according to Chinese government and international organization documents. According to the WTO, there are now a total of 27 preferential trade programs. They can be categorized into three types: GSP programs, which were put in place by developed countries to help all developing countries. Developed countries’ GSP programs also include subprograms, GSP-LDC, that offer least developed countries more generous preferences than the other GSP beneficiaries. LDC-specific programs (LDC programs), which were put in place by developing countries to help least developed countries. Other preferential trade arrangements, which are offered by developed countries and focus on a particular country or subset of developing countries within a particular region. Seventeen countries or groups of countries, such as the EU and the Russian Federation, have GSP or LDC programs not restricted to a region.
Various factors can affect the performance of a trade preference program. These factors can be either (1) internal to the program, being key characteristics of the program itself, or (2) external to the program, being specific to the country or region in which the program is operating. Key program characteristics that can affect trade preference programs’ performance include the following: country eligibility, or the ability of a country to qualify for participation in the program; product coverage, which delineates products that may receive tariff preferences; and rules of origin, which ensure that benefits of a preference program accrue to intended recipients by specifying a proportion of an imported product’s value that must be produced in the preference beneficiary country. The nature of these characteristics directly conditions whether developing country exporters can utilize trade preference program benefits. Trade experts also note that there are a range of factors external to trade preference programs and intrinsic to each recipient country that can also affect trade preference program performance. These include infrastructure—whether the country has adequate transportation, communication, and energy networks available to producers and exporters; governance, or the legal, regulatory, and policy environment conducive to promoting market activity and international trade; regional integration—a country’s ability to engage in commerce and production with neighboring countries to facilitate supply chains and movement of goods to market; and access to finance to support investment and production. Success in improving on these development-related factors within preference beneficiary countries can make them more ready to meet the challenges and access the benefits of global integration. U.S. agencies provided over $5 billion in the 2001-2013 period to bolster AGOA countries’ capacity to engage in and benefit from trade.
Other factors are also important drivers of trade between particular partners. Notably, trade experts have stated that factors such as the relative size of each country’s economy, respective rates of economic growth, geographic proximity, and cultural and historic ties between countries affect trade levels and are—in some cases—at least as important as program characteristics. The United States is one of the major markets for AGOA countries’ exports, but its rank among SSA’s export markets has declined as those of China and India have risen. Figure 1 shows the total amount of exports from AGOA countries to their major trading partners from 2010 to 2014. While the total dollar value of exports to many of the AGOA countries’ major trading partners decreased in 2013, in part because of falling commodity prices, the EU remained by far the largest market for AGOA country exports. AGOA country exports to the United States fell significantly after 2011 while AGOA country exports to China and India grew. As a result, by 2014, both China and India outranked the United States as AGOA country export markets. Other major export markets for AGOA countries include Brazil, Japan, South Korea, Australia, and Canada. AGOA has several notable differences from and similarities to other countries’ nonreciprocal trade preference programs, including programs in Canada, China, the EU, and India. (See app. II for a comparison of key characteristics of selected trade preference programs by country.) Available studies of AGOA and the EU’s preference programs suggest that these differences have affected program performance in terms of creating new trade flows (trade creation), increasing the range of products that are traded (diversification), and program utilization. AGOA’s country eligibility requirements are unique among those of other countries’ preference programs, in terms of geographic focus, income eligibility thresholds, and other policy and procedural requirements. 
Therefore, some countries that are eligible for AGOA are not eligible for other developed countries’ programs, whereas some countries deemed ineligible for AGOA receive preferences in China and India. (App. III shows the eligibility of SSA countries for selected trade preference programs by country.) Unlike most trade preference programs offered by other countries, AGOA restricts eligibility for participation to SSA countries. Of the 27 preferential trade arrangements in existence, 10 are specific to a region or country. The remaining 17 trade preference programs are not restricted to a region. Of the 10 regionally focused programs, only 2, AGOA and Morocco’s duty-free treatment for African LDCs, are specific to sub-Saharan Africa; the other 8 provide preferential benefits to other regions or countries. Table 2 compares the total number of countries and total number of SSA countries eligible for AGOA and selected GSP-LDC and LDC trade preference programs. The United States’ AGOA, followed by the GSP-LDC programs in several developed countries, serves the most SSA countries. The income threshold for a country to be eligible for AGOA is less restrictive than the threshold for other developed countries’ trade preference programs based on income classifications determined by the World Bank and the United Nations. Countries may be eligible for AGOA (or the U.S. GSP) unless they are classified by the World Bank as high-income countries. In addition, to determine whether to designate an eligible country as a beneficiary country, the President also considers the economic development, per capita gross national product, living standards, and other economic factors deemed appropriate. AGOA coverage ranges from the poorest countries to those that are more economically advanced and is not limited to only LDCs. Some developed countries, such as those in the EU and Canada, base eligibility for their GSP programs on more restrictive income thresholds than AGOA.
Previously, both the EU and Canada considered countries classified as “upper-middle-income” by the World Bank as eligible for their programs. However, after revising the programs, both the EU and Canada began to graduate upper-middle-income countries from their programs once those countries had been classified as such for at least 3 years (for the EU) or 2 years (for Canada). For example, Gabon, Mauritius, and South Africa are eligible for benefits under AGOA but are no longer eligible for benefits or have graduated under the EU and Canadian GSP programs. All of the developing countries’ programs also have more restrictive income eligibility thresholds than AGOA, as they focus exclusively on LDCs. Notably, China and India, two of Africa’s most rapidly growing export markets in Asia, have both put in place preference programs for LDCs, with many African nations among the beneficiaries. According to the WTO, India was the first developing country to offer a preference scheme for LDCs. China’s preference scheme entered into force in July 2010 and is open to 40 LDCs, of which 30 are SSA countries. The UN classifies a total of 48 countries as LDCs; 34 of these 48 countries are in Africa. AGOA’s eligibility criteria include, among other requirements, protection of internationally recognized worker rights, such as a prohibition on the use of any form of forced or compulsory labor, a minimum age for the employment of children, and acceptable conditions of work with respect to minimum wages, hours of work, and occupational safety and health, as well as economic policies to reduce poverty, increase the availability of health care and educational opportunities, expand physical infrastructure, promote the development of private enterprise, and encourage the formation of capital markets through micro-credit or other programs. Country eligibility is reviewed annually, and several countries, including the Central African Republic, Eritrea, and South Sudan, have lost eligibility to participate in AGOA as a result of this annual review. For more information, see our recently issued report assessing the AGOA eligibility determination process.
Because the programs of other countries, such as China and India, have procedural and policy requirements that differ from those of AGOA, some countries that are ineligible for AGOA are eligible under India’s and China’s preference programs. For example: Eritrea, the Gambia, Somalia, and Sudan are eligible for India’s preferences, but currently ineligible for AGOA. Eritrea and the Gambia lost AGOA eligibility because of human rights abuses in 2004 and 2015, respectively. Somalia and Sudan have not been eligible for AGOA and, according to U.S. agency officials, have not expressed an interest in the program. A recent analysis of pre-versus-post-program trends in India’s trade indicates that these partners’ exports to India grew faster than global exports in certain preference products (such as leather hides and skins for Eritrea and Somalia). Central African Republic, Democratic Republic of the Congo (DRC), Eritrea, and Sudan are eligible for China’s preference program, but are currently ineligible for AGOA. Central African Republic lost AGOA eligibility in 2004 following a coup, and the DRC lost AGOA eligibility in 2011 because of human rights concerns. UN data indicate that China’s trade with these partners ranged from $29.3 million for Central African Republic to $2.8 billion for the DRC and $5.9 billion for Sudan (North and South) in 2014. The United States, Canada, the EU, and Japan’s preference programs provide comprehensive duty-free coverage for almost all products that enter their country or region from eligible LDC beneficiaries, including most SSA countries. However, for SSA countries that are not considered LDCs, product coverage is less comprehensive. Key developing countries, notably China and India, have taken steps to offer more comprehensive product coverage under their preference schemes than before.
However, the United States, the EU, Japan, and others continue to exclude products considered important to SSA countries, such as certain agricultural goods as well as some textiles and apparel. WTO analysis and our own indicate that AGOA countries have comprehensive preferential access under existing preference schemes, and that AGOA’s coverage compares favorably with that of developed and developing country schemes. Developed country preference programs, including AGOA, the EU’s EBA, Canada’s GSP-LDC, and Japan’s GSP-LDC, provide duty-free coverage of more than 97 percent of tariff lines (which signify specific products) for LDC beneficiaries. A recent comparative analysis prepared by the WTO Secretariat indicates that, at 97.5 percent, AGOA’s product coverage for LDCs compares favorably with the United States’ GSP-LDC program as well as with other developed countries’ LDC programs. For example, the United States’ GSP-LDC program covers 82.6 percent of tariff lines, while Japan’s GSP-LDC program covers 97.9 percent, Canada’s 98.6 percent, and the EU’s EBA program 99.0 percent. Several AGOA countries, such as Ghana and Kenya, are not LDCs, and thus qualify only for regular GSP in other developed countries’ programs, which are less generous in terms of product coverage than GSP-LDC programs. For example, Canada’s regular GSP provides duty-free treatment for 492 tariff lines versus the 2,426 lines included under its GSP-LDC program. The EU’s regular GSP provides duty-free treatment for 2,994 lines versus the 6,932 lines under its LDC-oriented EBA. Generally, the product coverage of China and India’s programs is less than that of developed country programs, but expanding.
For example, in 2010, China informed the WTO that its market access scheme to eliminate tariffs had expanded to cover 60 percent of tariff lines, and in 2013 it released an official statement indicating that all 30 LDCs in Africa with diplomatic ties to China would have zero-tariff treatment covering this 60 percent, or 4,762 items. China also stated plans to further open its market to LDCs by expanding coverage to 97 percent of all tariff lines by the end of 2015. India’s initial scheme, until its revision in April 2014, phased in duty-free access to 85 percent of its tariff lines over a 5-year period beginning in 2008 and ending in October 2012. It also offered reduced duties on 9 percent of its tariff lines. Furthermore, according to a WTO report, many developing countries such as China and India charge higher tariffs on imports than more developed countries. This practice makes the margin of preference available to beneficiaries under their preference programs high (as discussed in app. IV), which can translate into significant impact in terms of economic and trade growth. We conducted an analysis of trade-weighted preference program coverage in six of the leading AGOA markets based on the latest data available, for 2012. The analysis showed that, with the exception of Australia’s program, about 90 percent or more of AGOA countries’ exports in terms of value qualified for duty-free treatment or reduced duties under the selected preference programs (see fig. 2). The analysis assumes that all products that are eligible for duty-free treatment are duty-free, and thus represents trade-weighted product coverage. With respect to India, a sizeable share (89 percent) of the value of AGOA country exports was eligible for preferences, which at the time (2012) took the form of reduced duties. India has been phasing in its scheme by reducing duties; reportedly 92.5 percent of LDC exports were given preferential market access as of October 2012.
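The trade-weighted coverage calculation described above, which weights eligibility by export value rather than counting tariff lines, can be sketched as follows. The tariff-line codes and export values are hypothetical, chosen only to show the arithmetic:

```python
def trade_weighted_coverage(exports_by_line, eligible_lines):
    """Share of export value (not of tariff lines) that is eligible
    for preferential treatment -- a sketch of trade-weighted coverage.

    exports_by_line: dict mapping tariff-line code -> export value
    eligible_lines: set of tariff-line codes covered by the program
    """
    total = sum(exports_by_line.values())
    covered = sum(value for line, value in exports_by_line.items()
                  if line in eligible_lines)
    return covered / total if total else 0.0

# Hypothetical beneficiary exporting on three tariff lines,
# two of which are covered by the preference program:
exports = {"0901.11": 600.0, "5208.12": 300.0, "1701.14": 100.0}
print(trade_weighted_coverage(exports, {"0901.11", "5208.12"}))  # 0.9
```

Weighting by value is what makes a program with modest line coverage still score near 90 percent when the covered lines carry the bulk of a country's exports.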
Our finding corresponds with the key findings of an October 2014 WTO Secretariat report, which provides indicators of the global extent of duty-free product coverage for particular African countries under major available preference schemes. That report found that African countries dominate the list of individual LDCs that by 2012 enjoyed duty-free treatment for 90-100 percent of the value of their non-oil, non-arms exports to developed economies. African nations such as Lesotho, Mozambique, Angola, Guinea Bissau, Malawi, Senegal, and Tanzania were among those LDCs that recorded increases in the share of the value of exports imported duty-free between 1996 and 2012. While many non-African countries also saw increases, the duty-free share for some such countries in 2012 was less than that for African LDCs. For example, the duty-free share for Cambodia, Bangladesh, and Myanmar ranged between 60 and 80 percent. AGOA, as well as the trade preference programs of other countries, including South Korea, Australia, and Japan, excludes some products that have high export potential and are considered important to enhancing growth in SSA beneficiaries’ economies, such as certain agricultural goods. Coverage of such products is important for two reasons. First, the WTO Secretariat has noted that textiles and apparel—key LDC exports—face the highest average tariffs in developed countries. Second, many African countries rely on just a few products for the bulk of their exports, and if preference schemes exclude those products from coverage, the programs effectively provide no benefit to them. WTO and UNCTAD analyses suggest that AGOA’s coverage of agriculture products for beneficiary countries is less extensive than that of some other developed countries’ programs. According to the Foreign Agricultural Service of the U.S. Department of Agriculture, 240 tariff lines are presently excluded from the U.S.
GSP and AGOA, and most are subject to tariff rate quotas and, as a result, are not fully liberalized. The agricultural products excluded from AGOA include certain products within the general categories of beef, dairy, vegetables, peanuts, oilseed products, sugar and sweeteners, cocoa products, tobacco, wool, cotton, flax, and other processed agricultural products. However, the recently passed AGOA reauthorization legislation permits the President to provide duty-free access under the GSP and, by extension, for AGOA on certain previously excluded products, such as cotton for least-developed beneficiary developing countries. Several other countries’ preference programs also exclude some agricultural goods that are considered important to African country economies. A WTO Secretariat report found that Canada excludes dairy, eggs, and poultry; and Japan excludes rice, sugar, fishery products, and articles of leather. Although India excludes certain exports of interest to LDCs, it covers key products that at least one trade source determined to be of immediate interest to Africa. These products include cotton, cocoa, cane sugar, ready-made garments, and fish. AGOA’s product coverage of textiles and apparel for those LDCs that qualify appears to include more items than that of some other U.S. programs. This coverage is considered a key feature that distinguishes AGOA from other programs. Our analysis, along with trade reports on rules of origin, shows that each preference program has its own rules of origin, with varying value-added requirements and calculation methods, making it difficult for beneficiaries to comply with a given program or use multiple programs (see app. V). The WTO reports that restrictive rules of origin can nullify the value of preferences, and as a result, several countries have taken steps to make their programs’ requirements more flexible.
All countries offering preference programs permit products that are wholly grown or produced within a beneficiary country to receive preferences. But for products that include foreign inputs, each program country uses its own methodology to determine whether sufficient local processing was performed on non-originating material for a product to be considered “local,” or originating from a beneficiary. For example, the United States requires that a product be imported directly from an AGOA beneficiary country and that the sum of the cost or value of the materials produced in one or more AGOA beneficiary countries, plus the direct costs of processing operations performed in those countries, be no less than 35 percent of the appraised value at the time the product enters the United States. In contrast, according to one trade policy expert, China’s rules of origin are stricter than those of AGOA because at least 40 percent of value must be added in the exporting country, compared with the 35 percent regional value-added requirement for AGOA. In addition, China requires external inputs to undergo substantial transformation, so that the resulting product no longer enters under the same four-digit code of the Harmonized System. Cumulation—permitting beneficiary countries to combine inputs from multiple sources to meet the local sourcing requirements—can ease the restrictiveness of rules of origin. Some developed countries’ preference programs provide wide scope for cumulation of inputs to reach the required minimum. Others, such as Japan’s, have restrictive cumulation rules. Specifically, five Southeast Asian countries are considered a single territory for rules of origin purposes and may cumulate production under Japan’s GSP, but no cumulation among the other 146 GSP participants is allowed. Some developed countries have made efforts to make cumulation rules less restrictive.
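As an arithmetic illustration of the value-added tests described above, the sketch below compares a shipment against AGOA's 35 percent threshold. All function names and figures are hypothetical; actual customs origin determinations apply many rules (direct importation, cumulation, product-specific provisions) not modeled here.

```python
# Illustrative sketch only: compares a shipment's local content against
# AGOA's 35 percent regional value-added threshold. Names and numbers
# are hypothetical; real origin determinations involve many more rules.

def value_added_share(beneficiary_materials, direct_processing, appraised_value):
    """Share of appraised value attributable to beneficiary-country
    materials plus direct processing costs."""
    return (beneficiary_materials + direct_processing) / appraised_value

def meets_agoa_rule(beneficiary_materials, direct_processing, appraised_value,
                    threshold=0.35):
    """True if the shipment meets the 35 percent value-added requirement."""
    share = value_added_share(beneficiary_materials, direct_processing,
                              appraised_value)
    return share >= threshold

# A $100 shipment with $20 of beneficiary-country materials and $18 of
# in-country processing has 38 percent local content and qualifies;
# with only $10 of processing it has 30 percent and does not.
print(meets_agoa_rule(20.0, 18.0, 100.0))  # True
print(meets_agoa_rule(20.0, 10.0, 100.0))  # False
```

Under the same sketch, China's stricter 40 percent test would simply raise `threshold` to 0.40, although China's additional tariff-shift requirement cannot be captured by a share calculation alone.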
For example, the EU and Canada (1) widened the scope for cumulation of inputs to attain their required value-added threshold (in Canada’s case, across GSP and GSP-LDC preference beneficiaries and, in the EU’s case, with Free Trade Agreement/EPA partners), and (2) relaxed certain product-specific rules for LDC beneficiaries. An LDC group paper submitted to the WTO Rules of Origin Committee in October 2014 indicated that reforms by Canada and the EU have resulted in increased utilization of preferences, manufacturing capacity, and numbers of highly skilled jobs in LDCs. The recently passed AGOA reauthorizing legislation also modifies cumulation rules to help increase utilization. In the EU’s case, the European Commission noted that the changes were the product of a long and extensive review process, which ultimately led it to postpone implementation of one proposed reform: replacing the paper-based system, under which beneficiary customs authorities issue the certificates that qualify goods for entry under the EU preference programs, with an electronic system in which exporters register and bear greater responsibility for compliance. This paper-based system was one of the aspects of the EU preference program that African governmental representatives we met in Brussels told us makes the EU program easier to use than AGOA. Making rules of origin requirements more consistent across preference programs could make it easier for beneficiary countries to use multiple programs, but efforts to standardize them have faced challenges. According to African officials we met with, multiple sets of rules make it more difficult for exporters to make use of available preference programs.
Multiple sets of rules also increase the administrative burden associated with using preferences and may conflict with supply chain realities by requiring countries to adapt their manufacturing practices to meet different program requirements. As a result, the WTO LDC group has been pressing developed countries to ensure that preferential rules of origin applicable to imports from LDCs are transparent and simple and contribute to facilitating market access for non-agricultural products. WTO members agreed to work on this topic at the December 2013 ministerial conference in Bali, and in 2015 LDCs urged further progress. However, there is no WTO requirement to harmonize preferential rules of origin, and preference-granting countries have been reluctant to standardize the process, according to trade experts. Although we identified many studies on trade preference programs, relatively few studies compared the programs’ performance, and the results of these studies varied. The economic literature judges the performance of preference programs in terms of their success in increasing exports (trade creation), increasing the range of products exported (diversification), and the extent to which available preferences are being used by recipients (utilization). With regard to these indicators, a 2014 USITC study of AGOA summarizing available comparative performance literature for the United States and the EU noted that, in general, (1) EU preferences had a greater effect on trade creation than U.S. programs; (2) U.S. preferences were more likely than EU preferences to help African suppliers diversify their exports by increasing the range of products traded; and (3) average utilization rates for the preference programs of Australia, Canada, the EU, and the United States were often very high, even for small preference margins and small trade flows, according to the underlying research study cited in the USITC study.
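Utilization, the third indicator above, is commonly computed as the share of preference-eligible import value that actually entered under the preference. A minimal sketch, using hypothetical figures rather than actual trade data:

```python
# Illustrative sketch of a preference utilization rate: the share of
# preference-eligible import value that actually entered under the
# program. Figures below are hypothetical, not actual trade data.

def utilization_rate(preferential_imports, eligible_imports):
    """Utilization = preferential imports / preference-eligible imports."""
    if eligible_imports == 0:
        return 0.0
    return preferential_imports / eligible_imports

# $90 million of $100 million in eligible imports entered under the
# preference: a 90 percent utilization rate.
print(utilization_rate(90e6, 100e6))  # 0.9
```

A low rate, as the literature cited above suggests, can signal that rules of origin or documentation costs deter exporters from claiming preferences they nominally enjoy.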
We also found a limited number of studies providing insights on the performance of some other countries’ preference programs, including those of India and China. In general, many trade experts agree that AGOA has been beneficial in helping expand trade between SSA countries and the United States. However, our prior work and available evidence suggest that AGOA has been only modestly effective in achieving its stated goals. Studies of the EU’s preference programs suggest that they also have had modest success in expanding trade with program beneficiaries. The USITC identified several studies that found that, when compared with the United States, the EU has had greater overall success in increasing trade with Africa. One study attributed this result to EU imports being more responsive to price changes than U.S. imports. Another study concluded that the EU’s trade policy was more successful in creating SSA country exports in part because of the shorter distance to the EU and SSA countries’ longstanding colonial ties with EU members. The USITC noted that although these studies show the EU as more effective in increasing trade overall, other studies suggest that results differ when examined by sector. For example, one study found that after the United States put in place the “third-country fabric” provision enabling certain AGOA beneficiaries to use imported fabric for apparel production, AGOA created about seven times more apparel exports than the EU’s programs. In the case of agriculture, however, the EU was found to be more effective at raising agricultural exports than the U.S. preference program. USITC identified several studies that generally concluded that U.S. preferences under AGOA were more successful than the EU’s in helping African suppliers diversify exports. One study from 2011 concluded that the EU’s GSP program had an overall small effect on increasing exports but no effect on diversification.
Among the explanations the authors offer are the EU’s then relatively strict rules of origin for GSP (Xavier Cirera, Francesca Foliano, and Michael Gasiorek, “The Impact of GSP Preferences on Developing Countries’ Exports in the European Union: Bilateral Gravity Modelling at the Product Level,” 2011, accessed June 2, 2015, http://www.researchgate.net/publication/241767503_The_impact_of_GSP_Preferences_on_Developing_Countries’_Exports_in_the_European_Union_Bilateral_Gravity_Modelling_at_the_Product_Level). A number of studies identified in the USITC report found that, overall, the United States’ AGOA and GSP programs and the EU’s GSP program both had high utilization rates for SSA LDCs. As mentioned earlier, utilization is the extent to which available preferences are being used by recipients and is considered a key indicator in comparing program performance. Low utilization rates suggest possible disincentives for countries to use available preferences. One study found that more than half of the SSA beneficiary countries were not well placed to utilize India’s programs. The study determined that it was difficult to conclude whether India’s preference scheme had had the desired impact on beneficiary country exports, largely because African LDC exports to India remained low. Sub-Saharan African (SSA) countries’ recent participation in bilateral and multilateral trade negotiations provides insights that can inform future negotiations. The bilateral Economic Partnership Agreement (EPA) negotiations highlight trade-offs that the EU and SSA countries considered to successfully conclude agreement negotiations.
The negotiating choices the EU faced were complicated by the fact that countries with access to preferences that do not require them to liberalize access to their own markets have limited incentive to negotiate reciprocal agreements, according to trade experts with whom we spoke. Examining recent WTO negotiations provides insights about impediments to SSA country participation in multilateral negotiations, efforts to overcome those impediments, and the impact of those efforts. In light of upcoming trade events that will focus on issues of interest to African countries, the United States has a window of opportunity to draw insights from these negotiations that may help preserve U.S. interests. The negotiations between SSA countries and the EU that have resulted in EPAs indicate that achieving the goal of transitioning from unilateral trade preference programs to reciprocal trade agreements with SSA countries may require many years to finalize and implement; the establishment of time frames to end access to trade preference programs; a willingness to consider limiting the initial scope of the agreements; and an acknowledgment that aspects of the agreements may have trade-offs and could constrain SSA countries’ ability to integrate into the global economy. Several African government officials and trade experts we spoke to stated that the EPAs could serve as a stepping-stone for other countries to negotiate with SSA countries reciprocal agreements that include wider liberalization of African markets. One insight from the EPA negotiations that could apply to other countries’ negotiations with SSA countries is that reciprocal trade agreements may take many years to negotiate, finalize, and implement. According to EU officials, EPA negotiations with SSA countries lasted far longer than expected.
In September 2002, the EU and SSA countries (as well as other countries) began negotiating EPAs, and after more than a decade of negotiations, according to the European Commission, the EU has concluded the first EPAs with some African regional groups. As of May 2015, the EU had concluded negotiations for EPAs with three African regions: West Africa, the Southern African Development Community (SADC), and the East African Community (EAC). In addition, the EU has an interim EPA with 4 countries in the Eastern and Southern Africa region. In July 2014, Cameroon became the only country in the Central Africa region to ratify an interim EPA with the EU. Figure 3 shows which SSA countries have negotiated EPAs with the EU. As of May 2015, 32 SSA countries had negotiated regional or interim EPAs with the EU. In addition to entailing lengthy negotiations, the EPAs also contain multi-year phase-in periods before the reciprocal terms enter into effect, according to trade experts from one organization. This phase-in period acknowledges that the transition from unilateral trade preferences to reciprocal agreements will require a significant adjustment, especially for the poorest SSA countries. According to trade experts, the African signatories have committed to open between 75 percent (for West Africa region countries) and 98 percent (for Seychelles) of their markets to the EU, but this access phases in over a period of between 11 and 25 years, depending on the country or region. By contrast, improved EU market access begins immediately upon conclusion of an agreement. Based on the experience of EPA negotiations, getting SSA countries to sign reciprocal agreements with the intent to help them integrate more fully into the global economy may require countries to institute concrete time frames for ending access to their other preference programs (see fig. 4).
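To make the phase-in arithmetic above concrete, the sketch below computes a simple linear liberalization path. Actual EPA schedules are negotiated product by product and need not be linear; the 75 percent and 20-year parameters are purely illustrative.

```python
# Illustrative sketch: a linear market-opening schedule. Real EPA
# phase-ins are product-specific and not necessarily linear; the
# 75 percent / 20-year parameters are only for illustration.

def linear_phase_in(final_open_share, years):
    """Cumulative share of the market opened at the end of each year."""
    return [final_open_share * year / years for year in range(1, years + 1)]

schedule = linear_phase_in(0.75, 20)
print(round(schedule[0], 4))   # after year 1:  0.0375
print(round(schedule[-1], 2))  # after year 20: 0.75
```

The asymmetry noted above is visible here: the beneficiary's opening accumulates gradually over the schedule, while the EU side's improved access applies from year one.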
According to trade experts, original EU plans established 2007 as the target date for signing interim EPAs between the EU and SSA countries, but various obstacles delayed negotiations, including disincentives associated with the unilateral preferences the SSA countries were already receiving from the EU. The experts said the EU set 2007 as its target date in part because it was the year the EU’s trade preferences with SSA countries under the Cotonou Agreement and the associated WTO waiver were scheduled to expire, but it did not formalize a consequence if agreements were not concluded by that year. Trade experts reported that EPA negotiations continued between 2008 and 2014, but only 5 of the 19 SSA countries that initialed interim EPAs signed and ratified them. For example, Zambia initialed an interim EPA but did not subsequently sign or implement it. Cameroon signed an interim EPA in 2009 but did not ratify it until July 2014. According to trade experts we spoke to and literature we reviewed, when a unilateral trade preference is available, developing countries have less incentive to make the domestic policy and regulatory reforms needed to meet the requirements of trade agreements. Trade experts also stated that there were a number of sticking points for which satisfactory compromises were difficult to find. For example, in the negotiations between the EU and SADC, market access issues and safeguards in the agricultural sector were sticking points. In addition, EU officials and other trade experts we met with indicated that SSA countries that already benefited from unilateral preferences had a disincentive to negotiate a reciprocal agreement with the EU that would require them to give up their ability to protect their industries or to lose tariff revenue, which constitutes a higher proportion of total government revenue in SSA countries than in developed countries.
Trade experts stated that to expedite EPAs with SSA countries, the EU mandated a time frame for ending some SSA countries’ access to its unilateral preference programs. According to trade experts and EU officials, the agreements were not ratified by the time the Cotonou preferences expired in 2007, so to prevent trade disruption, the EU passed legislation in 2008 that allowed provisional access to EPA preferences. In May 2013, additional legislation gave SSA countries until October 1, 2014, to ratify EPAs with the EU or automatically fall under the less favorable GSP program that the EU gives unilaterally to all developing countries. In addition, trade experts reported that as a result of the new EU GSP that entered into force on January 1, 2014, any upper-middle-income countries would no longer have access to unilateral trade preferences in the EU market. Some African government officials with whom we met said they felt that they and other African country officials had no choice but to sign EPAs and that their successes in negotiating were limited by the time frame imposed upon the negotiations. U.S. officials also stated that African officials told them they felt obliged to sign the EPAs because they could not afford to lose preferential access to the EU market. The EPA negotiations demonstrated that the EU’s willingness to consider limiting the initial scope of the negotiations helped it avoid an impasse that could have resulted in failed negotiations. By comparison, the United States abandoned its previous attempt at negotiating a more comprehensive, or “high standard,” free trade agreement (FTA) with the Southern African Customs Union (SACU) and did not pursue an FTA with the EAC after it became clear that the EAC was not ready or willing.
EU officials said that to successfully conclude EPA negotiations with SSA countries, the EU agreed to limit the negotiations’ initial scope, but the agreements also included language that allowed the parties to continue negotiations in other areas covered by more comprehensive FTAs. Initially, according to trade experts with whom we met, the negotiating topics the EU was seeking were similar to those found in more comprehensive FTAs, such as intellectual property rights. Recent U.S. FTAs have more than 20 chapters, while EPAs between the EU and SSA countries contain 8. Ultimately, the EU and SSA countries agreed to focus EPA negotiations on reciprocal trade in goods (and development cooperation), according to EU officials and other trade experts. The experts also reported that EPAs with SSA countries did not include language detailing agreed-upon terms for issues such as investment, services, public procurement, and intellectual property rights. According to EU officials with whom we met, many African countries were not prepared to agree to terms in these areas. The officials said they excluded these areas to avoid further delaying the conclusion of EPAs with SSA countries. The EPAs include a “rendezvous clause,” according to trade experts, which states that signatories may continue negotiations after the conclusion of EPAs to amend them. However, the experts stated that the agreements do not include a timeline for concluding terms for the other issue areas, and it is therefore unclear how long it may take to negotiate more comprehensive trade agreements. EPAs between the EU and SSA countries contain some features that create challenges for SSA countries’ integration into the global economy, according to trade experts and officials we met with from SSA countries and the EU.
For example, although full or expanded access to EU markets may help SSA countries integrate more fully into the global economy, EPAs may diminish SSA countries’ leverage in future negotiations with other trading partners, according to trade experts. The EPA negotiations demonstrate that negotiating reciprocal agreements with SSA countries may involve trade-offs that could impose burdens as SSA countries open their markets further to EU imports and affect SSA countries’ relations with other trading partners, including the United States. Trade experts and EU and African officials with whom we met reported that EPAs may have some positive effects on African economies and are therefore consistent with the EU’s goal of helping SSA countries integrate further into the global economy. For example, under EPAs, most SSA countries have full duty-free and quota-free market access in the EU, while others have greater access than they did before they signed an EPA, according to trade experts. The trade experts reported that even South Africa, which has less duty-free access than most SSA countries, has improved access for agricultural products such as wine, sugar, and fruit, as well as industrial products, notably motor vehicles. According to EU and African officials with whom we met, that greater access helps SSA countries export goods that they already produce and that are excluded by other trade preference programs. However, African officials and other trade experts have expressed concern that EPAs may have adverse effects on the economic development of some SSA countries. In many of our meetings, African officials expressed concern over the loss of tariff (and other) revenue, which constitutes a significant share of their governments’ budgets, as a result of the requirement that they provide the EU greater market access.
Studies assessing the impact of EPAs on the African signatories have found that the agreements will impose a fiscal burden on the countries, though the risk level differs from country to country. A recent meta-analysis of such studies found that the fiscal impact could be very high for eight countries (Benin, Cape Verde, Comoros, Djibouti, the Gambia, Ghana, Guinea-Bissau, and Togo) and low for seven other SSA countries (Botswana, Lesotho, Malawi, Namibia, Nigeria, Swaziland, and Zambia). The meta-analysis also found that the impact would depend on what steps the countries took to adjust during the phase-in period. Because the EPAs also contain a “standstill clause” that prohibits countries from enacting new customs duties or raising tariffs beyond those provided in the EPA agreement, the EU may be more insulated than other countries, including the United States, should the loss of tariff revenue become a fiscal problem. The terms of EPAs may also create challenges for SSA countries because, according to trade experts, their relative benefit to the SSA countries may diminish as the EU negotiates agreements with other, non-SSA countries. The EU is negotiating trade agreements with other developed countries that may erode margins of preference for SSA countries. The experts reported that these agreements also focus on development of better rules and regulations not captured in EPAs, which could increase competition with SSA countries in the EU market. In addition, trade experts reported the following about how EPA terms may constrain African countries’ leverage in trade negotiations with other countries, including the United States: EPAs contain a Most Favored Nation (MFN) clause stating that if African EPA signatories negotiate trade agreements in the future with other developed or large developing countries, they would have to extend any more favorable treatment offered to those countries to the EU as well.
Key trading partners that would potentially want to deepen their trade relationship with SSA countries may be less interested in doing so if they know that they will not have any margin of preference over the EU. One think tank said that the MFN clause is against the spirit of the EPA itself, which states that the EPA is a way of fostering the integration of signatories into the global economy. However, at the insistence of African negotiators, most of the EPAs contain language stating that implementation of the MFN clause in EPAs is not automatic, but must be negotiated on a case-by-case basis. Trade experts report that EPAs may also negatively affect other trading partners and multilateral trade negotiations. World Bank simulations of the impact of the West Africa EPA on Nigeria project that most of the 7-20 percent increase in imports from the EU will divert trade from the rest of the world, including the United States. The EPAs that are currently in place also set precedents in terms of rules and exclusions that may act as disincentives to multilateral liberalization, according to trade experts. For example, the experts state that the SADC EPA permits SSA countries to levy export taxes in certain circumstances. In addition, the EPA contains a protocol that secured EU and South African protection of geographical indications for numerous products, including mainly food and alcohol. In general, the United States has opposed export taxes and expanding mandatory protection of geographical indications. An examination of the involvement of SSA countries in recent multilateral negotiations at the WTO yields insights about impediments to SSA country participation in multilateral negotiations, efforts to overcome those impediments, and what impact SSA country participation could have in future WTO negotiations. 
Although capacity constraints have impeded SSA country participation in multilateral trade negotiations, efforts to address those impediments have been increasing. In part as a result, SSA country participation at the WTO has been expanding. SSA countries face a number of impediments to full participation in trade negotiations, especially at the multilateral level. Officials from multilateral organizations, NGOs, and AGOA-eligible countries we met with noted that, although SSA countries vary in their level of participation, many of the countries lack capacity in the following areas, making full participation difficult.

Funding: African officials and other trade experts we met with said that many of the SSA countries, especially those considered LDCs, find it difficult to afford the costs associated with participating in multilateral trade negotiations, such as transportation to and from Geneva and maintaining staff there.

Staffing levels: Some SSA countries do not have a permanent mission with staff to represent them in Geneva. Many others, including LDCs and small countries, have only a few staff to represent them in meetings and negotiations at the WTO, numerous UN agencies, and other multilateral organizations. Officials from several AGOA-eligible countries told us they often miss important meetings—sometimes including trade negotiations—because many of those meetings are scheduled concurrently.

Expertise: WTO negotiations include numerous complex topics, and according to WTO and African government officials with whom we met, some SSA countries’ negotiators lack the specialized training and experience needed to negotiate effectively.

Communication: Some SSA countries lack effective communication and coordination between negotiators and government officials in their domestic capitals, according to African officials and trade experts with whom we met.
As a result, in some cases negotiators in Geneva have worked toward positions that contradicted the priorities of government officials at home. Efforts to overcome impediments to African participation in trade negotiations have been increasing. According to one WTO trade expert, the number of WTO-sponsored capacity-building activities for African countries increased from 324 in 2000 to 1,513 in 2010, a nearly five-fold increase. Trade experts and African officials shared numerous examples of training courses, services, and programs designed to build African capacity to negotiate more effectively, implemented by the WTO, UNCTAD, the World Bank, nongovernmental organizations (NGOs), and bilateral trade partners. For example, the WTO hosts and funds a program called Geneva Week twice a year, paying for African officials from countries without permanent representation in Geneva to fly in and participate in trade negotiations and training courses focused on topics such as the WTO structure and negotiation process. NGOs and multilateral organizations also provide analysis at the request of African delegations that helps them better understand key issues and relevant context surrounding particular trade negotiations. Several African officials with whom we met said many SSA countries also relied on research and analytic support from NGOs such as the South Centre. Another effort to overcome impediments to participation in trade negotiations has been the development of groups that negotiate on behalf of multiple countries. SSA countries belong to groups such as the African group, the LDC group, and the ACP group that establish consensus-based priorities and negotiate on behalf of the groups’ members. According to several African government officials, smaller countries that previously felt unable to effectively participate in multilateral trade negotiations have had their priorities better represented in negotiations through these groups.
According to government officials from the United States and AGOA-eligible countries, as well as officials from trade-related multilateral organizations and think tanks, overall African participation at the WTO is expanding and many African negotiators are negotiating more effectively, in part as a result of efforts to overcome impediments. Officials provided the following examples of greater SSA official presence at the WTO and of priorities that African officials have successfully negotiated in multilateral negotiations. According to trade experts, African countries or groups have been visible in current rounds of global trade negotiations on topics such as cotton, intellectual property rights, public health, and special and differential treatment. These topics are ones where Africans sought and ultimately secured concessions from the United States and other WTO members. A WTO trade expert found that although the share of African chairmanships at the WTO between 1995 and 2010 was low overall, Africans made up at least 25 percent of the chairmanships of several bodies during that time period, and the expert characterized several of those chairs as competent, active, and experienced. With a number of upcoming events relating to WTO and bilateral negotiations that will focus on issues of interest to African countries, there is a window of opportunity for the United States to engage with SSA countries to ensure that U.S. interests are preserved. For example, according to the WTO, Kenya’s foreign minister has played a role in setting the multilateral negotiation agenda by prioritizing the ratification of the Trade Facilitation Agreement. According to WTO officials, while there is no official deadline for achieving the necessary acceptance, the foreign minister has set a goal of having this process completed by the time of the WTO’s 10th ministerial conference, in Nairobi, Kenya, in December 2015.
Two-thirds of the WTO’s 160 members will need to ratify the Trade Facilitation Agreement before the protocol of amendment to the WTO agreement can go into effect, according to the WTO. Although Botswana and Mauritius are among the eight WTO members that had, as of June 18, 2015, secured domestic acceptance, several African delegations, including AGOA recipients Nigeria and South Africa, have highlighted challenges they face in ratifying the Trade Facilitation Agreement. The Senate report accompanying the AGOA reauthorization legislation stated that the United States should seek opportunities to expand its ties with SSA countries through the negotiation of trade agreements that involve SSA countries. Recent LDC priorities are evidence that progress in multilateral negotiations is possible, but they raise some concerns for the United States and, in some cases, AGOA countries. The LDC group has drafted a series of negotiation priorities in conjunction with the Doha round of global trade talks, whose aim was to achieve improvements in agricultural disciplines and market access for both agricultural and non-agricultural goods. The December 2015 WTO ministerial conference is slated to revisit several topics of importance to Africa, including a potential LDC package and a decision on whether and how to proceed with the overall Doha round. Among the LDC priorities is for countries to further facilitate market access for LDC products. Both the United States and AGOA countries may face competing interests in committing to the requests of LDC negotiators. For example, available research suggests that AGOA countries would lose out to other suppliers if the textiles and apparel access presently provided to them were also given to other LDCs such as Bangladesh and Cambodia. According to several African officials we met with from AGOA countries, competing with other LDCs without additional preferences is a major concern.
This additional competition could also reduce the effectiveness of AGOA in increasing and diversifying exports from AGOA countries, which would run counter to U.S. interests. The legislation reauthorizing AGOA states that, among other things, it is in the interest of the United States to boost trade between the United States and SSA countries and that it is a U.S. goal to stimulate economic development in Africa and diversify sources of growth in sub-Saharan Africa. It is unclear whether LDC countries will insist upon some or all of these changes before they support continued Doha round negotiations. Some AGOA countries are showing interest in WTO membership. Liberia and Ethiopia, two AGOA countries that are not yet members of the WTO, are in the process of acceding to the WTO. This process typically involves acceptance of existing WTO rules as well as negotiated market access commitments, demonstrating that these countries are ready to work toward fuller integration into the global economy. The Senate report accompanying the legislation reauthorizing AGOA stated that the United States should seek all opportunities to deepen and expand its ties with SSA countries through accession by SSA countries to the WTO. The U.S. government has also proposed and begun to pursue plans to enhance bilateral trade with SSA countries. The recently passed AGOA extension legislation includes plans to evaluate which AGOA countries are ready for and interested in pursuing reciprocal free trade agreements. In the immediate term, the United States is working with the East African Community, having reached a cooperative agreement in February 2015 on non-tariff barriers in the Sanitary and Phytosanitary Measures and Technical Barriers to Trade areas, with the goal of easing non-tariff barriers to trade.
The United States government has also indicated it would like to build on the recently signed Trade and Investment Framework Agreement with the Economic Community of West African States and that it hopes to advance negotiations with countries such as Nigeria, South Africa, and Angola. We are not making any recommendations in this report. We sent a draft of this report to the Secretaries of Agriculture and State, the Chairman of USITC, and the U.S. Trade Representative for comment. State provided no comments and the others provided technical comments, which we incorporated in this report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Agriculture, Commerce, State, and the Treasury; the Administrator of USAID; the Chairman of USITC; the U.S. Trade Representative; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

In conducting our work, we identified and reviewed documents, data, and literature on the African Growth and Opportunity Act (AGOA) and other countries’ trade preference programs, trade agreements with AGOA countries, and sub-Saharan African (SSA) countries’ participation in multilateral trade negotiations. In addition, we interviewed officials from the Departments of Agriculture, Commerce, State, and the Treasury; the Office of the U.S. Trade Representative (USTR); the U.S. International Trade Commission (USITC); and the U.S. Agency for International Development (USAID). We conducted fieldwork in Geneva, Switzerland, and Brussels, Belgium.
We selected those locations for fieldwork based on the availability of officials for interviews, including multiple U.S. agency officials, African government officials, government officials from countries with trade preference programs, and officials from multilateral organizations with pertinent expertise. We selected countries and regions for comparison that are major export markets for sub-Saharan Africa, including the European Union (EU), China, and India. We identified three key characteristics for comparison because studies of preference programs generally show that these characteristics can affect program performance in increasing and diversifying trade. However, we did not independently assess the other countries’ trade preference programs and agreements with SSA countries. The information on those programs and agreements is based on interviews and both World Trade Organization (WTO) and other official sources. Insights discussed herein reflect our conclusions based on the views of those we interviewed, including representatives from think tanks, as well as selected academic literature.

To examine how selected countries’ trade preference programs compare with AGOA in terms of key characteristics and performance, we focused on those countries that had the highest levels of trade in dollar-denominated nominal terms—not adjusted for inflation—with AGOA countries and for which trade preference program information was available from official sources including the WTO, the United Nations Conference on Trade and Development (UNCTAD), and country provider government documents. Specifically, to identify the major country and regional markets for AGOA-country exports, we used the WTO, UNCTAD, and World Bank integrated trade data, which we accessed through the International Trade Centre’s Trade Map portal. We organized the nominal import data in descending order by major AGOA-country exporters and then graphed the top eleven export markets from 2010 through 2014.
These figures reflect AGOA countries’ total exports, whether or not made under a preference program. These countries include the United States, those in the EU, Canada, South Korea, Japan, Taiwan, Indonesia, India, Australia, and China. We also reviewed official WTO, UNCTAD, and government documents on these countries’ programs. To examine the differences in preference programs of countries that are major importers from AGOA countries, we (1) analyzed recent AGOA-country exports to major importing countries/regions and (2) analyzed cross-country differences in preference and non-preference import programs from AGOA countries. To analyze trade-weighted coverage across preference programs in major AGOA-country export markets, we used import data from the WTO’s Integrated Data Base (IDB) accessed through the World Integrated Trade Solution of the World Bank (http://wits.worldbank.org/). We summed imports by preference and non-preference programs (such as Generalized System of Preferences, Least Developed Country programs, Everything but Arms, Most Favored Nation arrangements, etc.), and by dutiable and non-dutiable categories. For this analysis, we were not able to estimate utilization rates, as there were no publicly available data on the amount of imports actually entering under the preference programs for certain major country importers. However, as a first approximation of program performance, we assumed that if imports were eligible to be imported under a preference program, they actually came in under the program. Another limitation to our analysis was that we were not able to obtain import and preference program data on a tariff-line basis from some other major AGOA-country importers, notably China. Therefore, we were not able to compare U.S. programs’ trade-weighted coverage with that of China. For each of the data sets that we used, we examined the data and found them sufficiently reliable for the purposes of our report.
Specifically, we obtained and assessed official documentation such as users’ guides, frequently asked questions, and disclaimers; met with officials to discuss our planned use of the data and any limitations; and conducted spot checks against other authoritative sources.

Country eligibility. To determine country eligibility, we used the WTO Preferential Trade Arrangements database, which identifies and compiles information on all preferential trade agreements implemented by WTO members. Information includes required submissions by WTO members. We used the most up-to-date information available in the database, which varied by country. We also used UNCTAD data to determine selected trade preference programs’ country eligibility. We cross-checked the information found in the WTO database with the UNCTAD data for eligibility as of January 1, 2015. We also used other countries’ government websites, as available, to determine SSA country participation in their preference programs. Furthermore, we used World Bank and United Nations information to identify income classification and some procedural and policy requirements.

Product coverage and exclusions. To compare product coverage, we used WTO information, including a 2014 WTO Secretariat report, to identify tariff and trade data used to assess the extent of duty-free access available to AGOA countries’ exports under various other countries’ preference programs. We used tariff-line trade data presented in the database, which are based on member notifications. We also used government and non-government sources to gather information on product coverage offered by other countries’ preference programs.

Rules of origin. To report generally on rules of origin and specifically on rules of origin percentage levels, calculation methods, and cumulation, we used information from the WTO along with government and non-government sources.
We also conducted semi-structured interviews with a sample of U.S. and foreign government officials, including African officials, along with trade experts. We discussed their knowledge of other countries’ trade preference programs, including varying rules of origin and the impact of rules of origin on preference program utilization.

Performance. To determine performance among AGOA and other preference programs, we used a 2014 USITC report on AGOA that includes a comprehensive economic literature review on selected countries’ trade preference programs. The study identified 48 economic studies of preference programs, published from 2001 to 2013; of that number, 11 compared the performance of programs, and 7 of the 11 compared the performance of the United States’ and the EU’s programs—in part because the United States and the EU are two of the few preference providers that provide data on actual imports under their preference programs.

To examine AGOA countries’ participation in trade negotiations, we used WTO data to determine the bilateral and multilateral trade agreements to which AGOA countries are party. To identify insights from AGOA country trade negotiations, we chose to focus on AGOA countries’ Economic Partnership Agreements with the EU because (1) congressional requesters specifically asked about the impact of these agreements, (2) negotiations for the regional agreements were all concluded recently—in 2014, (3) the EU is the largest recipient of SSA exports, and (4) the agreements replace a unilateral trade preference arrangement with a reciprocal trade arrangement. We reported insights based on our literature review on the negotiations leading up to (and the terms of) the agreements as well as interviews with trade experts from AGOA countries, U.S. government agencies, other key trade partners with AGOA countries, multilateral trade organizations (including the WTO), and think tanks.
To determine impediments to SSA countries’ participation in trade negotiations, we reviewed the USITC report on AGOA and other relevant literature. We also developed a semi-structured interview tool. We selected a sample of officials and experts who would participate in semi-structured interviews on the basis of (1) GAO selective sampling based on our literature review, (2) recommendations from U.S. government officials knowledgeable about other countries’ trade negotiations with Africa, and, where applicable, (3) availability during our fieldwork in Geneva and Brussels. We implemented the semi-structured instrument in interviews with trade experts from AGOA countries, U.S. government agencies, other key trade partners with AGOA countries, multilateral trade organizations (including the WTO), and think tanks. From the literature and semi-structured interviews, we also examined efforts to address impediments to SSA countries’ participation in trade negotiations and the impact of those efforts. Our observations about upcoming trade negotiations involving AGOA countries are based on WTO documents and official government documents.

Table 3 provides a comparison of key characteristics of selected countries’ trade preference programs with sub-Saharan African (SSA) countries. The countries or groups of countries included in the table are the major destinations for SSA exports. Sub-Saharan African countries are eligible for preferences under multiple countries’ preference programs (see table 4). Most of the programs are generalized (available to all developing countries and known as Generalized System of Preferences (GSP) schemes), but some programs focus exclusively on least-developed countries (LDCs). Some countries that had been eligible for preferences offered by the European Union (EU) have transitioned from the EU’s preference programs to Economic Partnership Agreements.
Although the extent of product coverage and duty-free versus reduced duties are important when evaluating the potential impact of programs, it is also important to consider the margin of preference for program beneficiaries. The margin of preference is defined as the difference between the tariff all countries face—known as the most favored nation (MFN) rate—and the tariff charged to preference program beneficiaries. Essentially, it is the price advantage that the program accords beneficiaries over other competitors in their market. In the case of programs that provide duty-free treatment to covered products, the tariff charged is zero, and thus the margin of preference is the entire value of the MFN rate. Thus, if a country had an average MFN tariff of 10 percent and accorded preference program beneficiaries duty-free treatment, the beneficiary would enjoy a 10 percent margin of preference in that market. In the case of programs that provide reduced duties, the margin of preference would be a fraction of that. For example, if the preference program provider’s average MFN tariff was 10 percent and its preference program involved a 50 percent reduction in duties, beneficiaries of its programs would face a 5 percent tariff and have a 5 percentage point advantage over competitors. As shown in table 5, according to WTO Secretariat calculations, many developing countries have substantially higher tariffs than developed countries on key products from LDCs. For example, developing country tariffs on agriculture are more than 20 times as high as those in developed countries, and tariffs on clothing and textiles are about twice as high. Several of these developing countries, such as China and India, have put preferences in place for LDCs. All other things being equal, a producer from a country that has access to both developed and developing countries’ programs may focus first on using the one with the highest margin of preference.
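The margin-of-preference arithmetic described above can be expressed as a small illustrative calculation. This is only a sketch: the function name is ours, and the 10 percent tariff and 50 percent duty reduction are the hypothetical figures from the example in the text, not actual program data.

```python
def margin_of_preference(mfn_rate: float, duty_reduction: float) -> float:
    """Percentage-point price advantage a preference beneficiary enjoys
    over competitors paying the most favored nation (MFN) rate.

    mfn_rate: the MFN tariff, e.g. 0.10 for 10 percent.
    duty_reduction: share of the duty waived by the program
        (1.0 = duty-free treatment, 0.5 = a 50 percent reduction).
    """
    preferential_rate = mfn_rate * (1.0 - duty_reduction)
    return mfn_rate - preferential_rate

# Duty-free access against a 10 percent MFN tariff: a 10-point margin.
print(margin_of_preference(0.10, 1.0))  # → 0.1
# A 50 percent duty reduction yields only a 5-point margin.
print(margin_of_preference(0.10, 0.5))  # → 0.05
```

As the sketch shows, a duty-free program captures the full MFN rate as its margin, while a partial-reduction program captures only the waived fraction.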
Table 6 shows the different factors considered by selected countries when determining whether a finished product qualifies to enter under their preference programs. In addition to the contact listed above, Kim Frankena (Assistant Director), Jeffrey Baldwin-Bott (analyst in charge), Ashley Alley, Kenneth Bombara, Debbie Chung, Barbara El Osta, Jill Lacey, and Shakira O’Neil made key contributions to this report.

AGOA was signed into law in 2000 to offer SSA countries trade preferences that stimulate export-led economic growth and facilitate their integration into the global economy. This legislation was recently reauthorized, and the accompanying Senate report stated that the United States should seek all opportunities to deepen and expand its ties with SSA countries through accession by SSA countries to the WTO and negotiation of bilateral trade agreements. GAO was asked to identify lessons learned from other countries' trade preference programs and SSA countries' recent trade negotiation experiences. This report (1) compares AGOA with selected countries' trade preference programs in terms of key characteristics and performance, and (2) examines AGOA countries' participation in trade negotiations. GAO reviewed and analyzed documents and data, including information from the WTO, to determine what portion of the imports from AGOA countries to major trade partners could enter under preference programs. GAO also interviewed officials from U.S. government agencies, African and other foreign governments, and international organizations. GAO selected countries and regions for comparison that are major export markets for sub-Saharan Africa, including the EU, China, India, and Japan.

The United States' African Growth and Opportunity Act (AGOA) has differences from and similarities to 26 trade preference programs offered by other developed and developing countries in three key areas that can affect program performance in increasing and diversifying trade. Country eligibility.
AGOA is unique in that it focuses eligibility on sub-Saharan African (SSA) countries. Most other countries' trade preference programs do not restrict eligibility to SSA countries. Product coverage. AGOA and some other countries' trade preference programs provide fairly comprehensive coverage of products, but exclude some agricultural and other products that are important SSA exports. Rules of origin. Like other countries' trade preference programs, AGOA has rules of origin that determine which products qualify for coverage. Some countries, including the United States, have recently made their rules of origin less restrictive, to make it easier for beneficiary countries to take greater advantage of these programs. In 2014, the United States International Trade Commission reviewed studies comparing the performance of trade preference programs and found that the European Union's (EU) preference programs have had overall greater success in increasing trade with Africa and that AGOA had more success in increasing diversification in the range of products exported from Africa. Research on China's and India's trade preference programs suggests that their fairly new programs could have significant impacts on SSA trade and that they are among the biggest and fastest-growing markets for sub-Saharan exports. As the United States continues to pursue expanded trade and a more two-way trade relationship with African partners at the World Trade Organization (WTO) and elsewhere, SSA countries' recent participation in bilateral and multilateral trade negotiations provides insights that can inform future U.S. negotiations. For example, years of bilateral negotiations between SSA countries and the EU have recently resulted in Economic Partnership Agreements with 32 SSA countries.
Trade experts and SSA and EU officials GAO spoke with provided information about these negotiations that indicates that transitioning from non-reciprocal trade preference programs, such as AGOA, to two-way trade agreements like the Economic Partnership Agreements with SSA countries may require: many years to finalize and implement, the establishment of timeframes to end access to trade preference programs, a willingness to consider limiting the initial scope of the agreements, and an acknowledgment that aspects of the agreements may have tradeoffs and could constrain SSA countries' ability to integrate into the global economy. The involvement of SSA countries in recent multilateral negotiations at the WTO also yields important insights for U.S. agencies when negotiating with these countries. SSA and WTO officials told GAO that several impediments, such as inadequate funding and staffing, can hamper SSA countries' ability to participate fully in multilateral negotiations. However, recent bilaterally and multilaterally funded training and other efforts have helped expand SSA country participation at the WTO. GAO is not making any recommendations in this report. Agencies' technical comments on GAO's draft were incorporated into this report.
Fiscal year 2002 was a year of challenges, not just for GAO but also for the Congress and the nation. The nation’s vulnerabilities were exposed in a series of events—America’s vulnerability to sophisticated terrorist networks, bioterrorism waged through mechanisms as mundane as the daily mail, and corporate misconduct capable of wiping out jobs, pensions, and investments virtually overnight. As the Congress’s priorities changed to meet these crises, GAO’s challenge was to respond quickly and effectively to our congressional clients’ changing needs. With work already underway across a spectrum of critical policy and performance issues, we had a head start toward meeting the Congress’s needs in a year of unexpected and often tumultuous events. For example, in fiscal year 2002 GAO’s work informed the debate over national preparedness strategy, helping the Congress determine how best to organize and manage major new departments, assess key vulnerabilities to homeland defense, and respond to the events of September 11 in areas such as terrorism insurance and airline security. GAO’s input also was a major factor in shaping the Sarbanes-Oxley Act, which created the Public Company Accounting Oversight Board, as well as new rules to strengthen corporate governance and ensure auditor independence. Further, GAO’s work helped the Congress develop and enact election reform legislation in the form of the Help America Vote Act of 2002 to help restore voter confidence.
In fiscal year 2002, GAO also served the Congress and the American people by helping to:

- Contribute to a national preparedness strategy at the federal, state, and local levels that will make Americans safer from terrorism
- Protect investors through better oversight of the securities industry and
- Ensure a safer national food supply
- Expose the inadequacy of nursing home care
- Make income tax collection fair, effective, and less painful to taxpayers
- Strengthen public schools’ accountability for educating children
- Keep sensitive American technologies out of the wrong hands
- Protect American armed forces confronting chemical or biological weapons
- Identify the risks to employees in private pension programs
- Identify factors causing the shortage of children’s vaccines
- Assist the postal system in addressing anthrax and various management challenges
- Identify security risks at ports, airports, and transit systems
- Save billions by bringing sound business practices to the Department of
- Foster human capital strategic management to create a capable, effective,
- Ensure that the armed forces are trained and equipped to meet the nation’s defense commitments
- Enhance the safety of Americans and foreign nationals at U.S.
- Assess ways of improving border security through biometric technologies
- Reduce the international debt problems faced by poor countries
- Reform the way federal agencies manage their finances
- Protect government computer systems from security threats
- Enhance the transition of e-government—the new “electronic connection” between government and the public

During fiscal year 2002, GAO’s analyses and recommendations contributed to a wide range of legislation considered by the Congress, as shown in the following table. By year’s end, we had testified 216 times before the Congress, sometimes on as little as 24 hours’ notice, on a range of issues. We had responded to hundreds of urgent requests for information.
We had developed 1,950 recommendations for improving the government’s operations, including, for example, those we made to the Secretary of State calling for the development of a governmentwide plan to help other countries combat nuclear smuggling and those we made to the Chairman of the Federal Energy Regulatory Commission calling for his agency to develop an action plan for overseeing competitive energy markets. We also had continued to track the recommendations we had made in past years, checking to see that they had been implemented and, if not, whether we needed to do follow-up work on problem areas. We found, in fact, that 79 percent of the recommendations we had made in fiscal year 1998 had been implemented, a strong indication that the work we do for the Congress becomes a catalyst for creating tangible benefits for the American people. Table 2 highlights, by GAO’s three external strategic goals, examples of issues on which we testified before the Congress during fiscal year 2002. The Congress and the executive agencies took a wide range of actions in fiscal year 2002 to improve government operations, reduce costs, or better target budget authority based on GAO analyses and recommendations, as highlighted in the following sections. Federal action on GAO’s findings or recommendations produced financial benefits for the American people: a total of $37.7 billion was achieved by making government services more efficient, improving the budgeting and spending of tax dollars, and strengthening the management of federal resources (see fig. 1). For example, increased funding for improved safeguards against fraud and abuse helped the Medicare program to better control improper payments of $8.1 billion over 2 years, and better policies and controls reduced losses from farm loan programs by about $4.8 billion across 5 years. In fiscal year 2002, we also recorded 906 instances in which our work led to improvements in government operations or programs (see fig. 2).
For example, by acting on GAO’s findings or recommendations, the federal government has taken important steps toward enhancing aviation safety, improving pediatric drug labeling based on research, better targeting of funds to high-poverty school districts, greater accountability in the federal acquisition process, and more effective delivery of disaster recovery assistance to other nations, among other achievements. As shown in table 3, we met all of our annual performance targets except our timeliness target. While we provided 96 percent of our products to their congressional requesters by the date promised, we missed this measure’s target of 98 percent on-time delivery. The year’s turbulent events played a part in our missing the target, causing us to delay work in progress when higher-priority requests came in from the Congress. We know we will continue to face factors beyond our control as we strive to improve our performance in this area. We believe the agency protocols we are piloting will help clarify aspects of our interactions with the agencies we evaluate and audit and, thus, expedite our work in ways that could improve the timeliness of our final products. We also believe that our continuing investments in human capital and information technology will improve our timeliness while allowing us to maintain our high level of productivity and performance overall. The results of our work were possible, in part, because of changes we have made to maximize the value of GAO. We had already realigned GAO’s structure and resources to better serve the Congress in its legislative, oversight, appropriations, and investigative roles. Over the past year, we cultivated and fostered congressional and agency relations, better refined our strategic and annual planning and reporting processes, and enhanced our information technology infrastructure. 
We also continued to provide priority attention to our management challenges of human capital, information security, and physical security. Changes we made in each of these areas helped enable us to operate in a constantly changing environment. Over the course of the year, we cultivated and fostered congressional and agency relations in several ways. On October 23, 2001, in response to the anthrax incident on Capitol Hill, we opened our doors to 435 members of the House of Representatives and their staffs. Later in the year, we continued with our traditional Hill outreach meetings and completed a 7-month pilot test of a system for obtaining clients’ views on the quality of our testimonies and reports. We also developed agency protocols to provide clearly defined, consistently applied, well-documented, and transparent policies for conducting our work with federal agencies. We implemented our new reporting product line entitled Highlights—a one-page summary that provides the key findings and recommendations from a GAO engagement. We continued our policy of outreach to our congressional clients, the public, and the press to enhance the accessibility of GAO products. Our external web site now logs about 100,000 visitors each day, and more than 1 million GAO products are downloaded every month by our congressional clients, the public, and the press. In light of certain records access challenges during the past few years and with concerns about national and homeland security unusually high at home and abroad, it may become more difficult for us to obtain information from the Executive Branch and report on certain issues. If this were to occur, it would hamper our ability to complete congressional requests in a timely manner.
We are updating GAO’s engagement acceptance policies and practices to address this issue and may recommend legislative changes that will help to assure that we have reasonable and appropriate information that we need to conduct our work for the Congress and the country. GAO’s strategic planning process serves as a model for the federal government. Our plan aligns GAO’s resources to meet the needs of the Congress, address emerging challenges and achieve positive results. Following the spirit of the Government Performance and Results Act, we established a process that provides for updates with each new Congress, ongoing analysis of emerging conditions and trends, extensive consultations with congressional clients and outside experts, and assessments of our internal capacities and needs. At the beginning of fiscal year 2002, we updated our strategic plan for serving the Congress based on substantial congressional input—extending the plan’s perspective out to fiscal year 2007 and factoring in developments that had occurred since we first issued it in fiscal year 2000. The updated plan carries forward the four strategic goals we had already established as the organizing principles for a body of work that is as wide-ranging as the interests and concerns of the Congress itself. Using the plan as a blueprint, we lay out the areas in which we expect to conduct research, audits, analyses, and evaluations to meet our clients’ needs, and we allocate the resources we receive from the Congress accordingly. Following is our strategic plan framework. Appendix I of this statement delineates in a bit more detail our strategic objectives and our qualitative performance goals for fiscal years 2002 and 2003. We issued our 2001 Performance and Accountability Report that combines information on our past year’s accomplishments and progress in meeting our strategic goals with our plans for achieving our fiscal year 2003 performance goals.
The report earned a Certificate of Excellence in Accountability Reporting from the Association of Government Accountants. We issued our fiscal year 2002 Performance and Accountability Report in January 2003. Our financial statements, which are integral to our performance and accountability, received an unqualified opinion for the sixteenth consecutive year. Furthermore, our external auditors did not identify any material control weaknesses or compliance issues relating to GAO’s operations. During the past year, we acquired new hardware and software and developed user-friendly systems that enhanced our productivity and responsiveness to the Congress and helped meet our initial information technology goals. For example, we replaced aging desktop workstations with notebook computers that provide greater computing power, speed, and mobility. In addition, we upgraded key desktop applications, the Windows desktop operating system, and telecommunications systems to ensure that GAO staff have modern technology tools to assist them in carrying out their work. We also developed new, integrated, user-friendly Web-based systems that eliminate duplicate data entry while ensuring the reusability of existing data. As the Clinger-Cohen Act requires, GAO has an enterprise architecture program in place to guide its information technology planning and decision making. In designing and developing systems, as well as in acquiring technology tools and services, we have applied enterprise architecture principles and concepts to ensure sound information technology investments and the interoperability of systems. Given GAO’s role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and expertise as well as general analytical skills is vital to achieving our mission. 
We spend about 80 percent of our resources on our people, but without excellent human capital management, we could still run the risk of being unable to deliver what the Congress and the nation expect from us. At the beginning of my term in early fiscal year 1999, we completed a self-assessment that profiled our human capital workforce and identified a number of serious challenges facing our workforce, including significant issues involving succession planning and imbalances in the structure, shape, and skills of our workforce. As presented below, through a number of strategically planned human capital initiatives over the past few years, we have made significant progress in addressing these issues. For example, as illustrated in figure 3, by the end of fiscal year 2002, we had almost a 60 percent increase in the percentage of staff at the entry level (Band I) as compared with fiscal year 1998. Also, the proportion of our workforce at the mid-level (Band II) decreased by about 8 percent. Our fiscal year 2002 human capital initiatives included the following: In fiscal year 2002, we hired nearly 430 permanent staff and 140 interns. We also developed and implemented a strategy to place more emphasis on diversity in campus recruiting. In fiscal years 2002 and 2003, to help meet our workforce planning objectives, we offered voluntary early retirement under authority established in our October 2000 human capital legislation. Early retirement was granted to 52 employees in fiscal year 2002 and 24 employees in fiscal year 2003. To retain staff with critical skills and staff with less than 3 years of GAO experience, we implemented legislation authorizing federal agencies to offer student loan repayments in exchange for certain federal service commitments.
In fiscal year 2002, GAO implemented a new, modern, effective, and credible performance appraisal system for analysts and specialists, adapted the system for attorneys, and began modifying the system for administrative professional and support staff. We began developing a new core training curriculum for managers and staff to provide additional training on the key competencies required to perform GAO’s work. We also took steps to achieve a fully democratically elected Employee Advisory Council to work with GAO’s Executive Committee in addressing issues of mutual interest and concern. The above represent just a few of many accomplishments in the human capital area. GAO is the clear leader in the federal government in designing and implementing 21st century human capital policies and practices. We also are taking steps to work with the Congress, the Office of Management and Budget, the Office of Personnel Management, and others to “help others help themselves” in the human capital area. Ensuring information systems security and disaster recovery systems that allow for continuity of operations is a critical requirement for GAO, particularly in light of the events of September 11 and the anthrax incidents. The risk is that our information could be compromised and that we would be unable to respond to the needs of the Congress in an emergency. In light of this risk and in keeping with our goal of being a model federal agency, we are implementing an information security program consistent with the requirements in the Government Information Security Reform provisions (commonly referred to as “GISRA”) enacted in the Floyd D. Spence National Defense Authorization Act for fiscal year 2001. We have made progress through our efforts to, among other things, implement a risk-based, agencywide security program; provide security training and awareness; and develop and implement an enterprise disaster recovery solution.
In the aftermath of the September 11 terrorist attacks and subsequent anthrax incidents, our ability to provide a safe and secure workplace emerged as a challenge for our agency. Protecting our people and our assets is critical to our ability to meet our mission. We devoted additional resources to this area and implemented measures such as reinforcing vehicle and pedestrian entry points, installing an additional x-ray machine, adding more security guards, and reinforcing windows. GAO is requesting budget authority of $473 million for fiscal year 2004 to maintain current operations for serving the Congress as outlined in our strategic plan and to continue initiatives to enhance our human capital, support business processes, and ensure the safety and security of GAO staff, facilities, and information systems. This funding level will allow us to fund up to 3,269 full-time equivalent personnel. Our request includes $466.6 million in direct appropriations and authority to use estimated revenues of $6 million from reimbursable audit work and rental income. Our requested increase of $18.4 million in direct appropriations represents a modest 4.1 percent increase, primarily for mandatory pay and uncontrollable costs. Our budget request also includes savings from nonrecurring fiscal year 2003 investments in fiscal year 2004 that we propose to use to fund further one-time investments in critical areas, such as security and human capital. We have submitted a request for $4.8 million in supplemental fiscal year 2003 funds to allow us to accelerate implementation of important security enhancements. Our fiscal year 2004 budget includes $4.8 million for safety and security needs that are also included in the supplemental. If the requested fiscal year 2003 supplemental funds are provided, our fiscal year 2004 budget could be reduced by $4.8 million. Table 4 presents our fiscal year 2003 and requested fiscal year 2004 resources by funding source. 
During fiscal year 2004, we plan to sustain our investments in maximizing the productivity of our workforce by continuing to address the key management challenges of human capital, and both information and physical security. We will continue to take steps to “lead by example” within the federal government in connection with these and other critical management areas. Over the next several years, we need to continue to address skill gaps, maximize staff productivity and effectiveness, and reengineer our human capital processes to make them more user-friendly. We plan to address skill gaps by further refining our recruitment and hiring strategies to target gaps identified through our workforce planning efforts, while taking into account the significant percentage of our workforce eligible for retirement. We will continue to take steps to reengineer our human capital systems and practices to increase their efficiency and to take full advantage of technology. We will also ensure that our staff have the needed skills and training to function in this reengineered environment. In addition, we are developing competency-based performance appraisal and broad-banding pay systems for our mission support employees. To ensure our ability to attract, retain, and reward high-quality staff, we plan to devote additional resources to our employee training and development program. We will target resources to continue initiatives to address skill gaps, maximize staff productivity, and increase staff effectiveness by updating our training curriculum to address organizational and technical needs and training new staff. Also, to enhance our recruitment and retention of staff, we will continue to offer a student loan repayment program and transit subsidy benefit established in fiscal year 2002. In addition, we will continue to focus our hiring efforts in fiscal year 2004 on recruiting talented entry-level staff. 
To build on the human capital flexibilities provided by the Congress in 2000, we plan to recommend legislation that would, among other things, facilitate GAO’s continuing efforts to recruit and retain top talent, develop a more performance-based compensation system, realign our workforce, and facilitate our succession planning and knowledge transfer efforts. In addition, to help attract new recruits, address certain “expectation gaps” within and outside of the government, and better describe the modern audit and evaluation entity GAO has become, we will work with the Congress to explore the possibility of changing the agency’s name while retaining our well-known acronym and global brand name of “GAO.” On the information security front, we need to complete certain key actions to be better able to detect intruders in our systems, identify our users, and recover in the event of a disaster. Among our current efforts and plans for these areas are completing the installation of software that helps us detect intruders on all our internal servers, completing the implementation of a secure user authentication process, and refining the disaster recovery plan we developed last year. We will need the Congress’ help to address these remaining challenges. We also are continuing to make the investments necessary to enhance the safety and security of our people, facilities, and other assets for the mutual benefit of GAO and the Congress. With our fiscal year 2003 supplemental funding, if provided, or if not, with fiscal year 2004 funds, we plan to complete installation of our building access control and intrusion detection system and supporting infrastructure, and obtain an offsite facility for use by essential personnel in emergency situations. With the help of the Congress, we plan to implement these projects over the next several years.
As a result of the support and resources we have received from this Subcommittee and the Congress over the past several years, we have been able to make a difference in government, not only in terms of financial benefits and improvements in federal programs and operations that have resulted from our work, but also in strengthening and increasing the productivity of GAO, and making a real difference for our country and its citizens. Our budget request for fiscal year 2004 is modest, but necessary to sustain our current operations, continue key human capital and information technology initiatives, and ensure the safety and security of our most valuable asset—our people. We seek your continued support so that we will be able to effectively and efficiently conduct our work on behalf of the Congress and the American people. This appendix lists GAO’s strategic goals and the strategic objectives for each goal. They are part of our updated draft strategic plan (for fiscal years 2002 through 2007). Organized below each strategic objective are its qualitative performance goals. The performance goals lay out the work we plan to do in fiscal years 2002 and 2003 to help achieve our strategic goals and objectives. We will evaluate our performance at the end of fiscal year 2003. 
Provide Timely, Quality Service to the Congress and the Federal Government to Address Current and Emerging Challenges to the Well-Being and Financial Security of the American People

To achieve this goal, we will provide information and recommendations on the following:

the Health Care Needs of an Aging and Diverse Population
- evaluate Medicare reform, financing, and operations;
- assess trends and issues in private health insurance coverage;
- assess actions and options for improving the Department of Veterans Affairs’ and the Department of Defense’s (DOD) health care services;
- evaluate the effectiveness of federal programs to promote and protect the public health;
- evaluate the effectiveness of federal programs to improve the nation’s preparedness for the public health and medical consequences of bioterrorism;
- evaluate federal and state program strategies for financing and overseeing chronic and long-term health care; and
- assess states’ experiences in providing health insurance coverage for low-income populations.

the Education and Protection of the Nation’s Children
- analyze the effectiveness and efficiency of early childhood education and care programs in serving their target populations;
- assess options for federal programs to effectively address the educational and nutritional needs of elementary and secondary students and their schools;
- determine the effectiveness and efficiency of child support enforcement and child welfare programs in serving their target populations; and
- identify opportunities to better manage postsecondary, vocational, and adult education programs and deliver more effective services.
the Promotion of Work Opportunities and the Protection of Workers
- assess the effectiveness of federal efforts to help adults enter the workforce and to assist low-income workers;
- analyze the impact of programs designed to maintain a skilled workforce and ensure employers have the workers they need;
- assess the success of various enforcement strategies to protect workers while minimizing employers’ burden in the changing environment of work; and
- identify ways to improve federal support for people with disabilities.

a Secure Retirement for Older Americans
- assess the implications of various Social Security reform proposals;
- identify opportunities to foster greater pension coverage, increase personal saving, and ensure adequate and secure retirement income; and
- identify opportunities to improve the ability of federal agencies to administer and protect workers’ retirement benefits.

an Effective System of Justice
- identify ways to improve federal agencies’ ability to prevent and respond to major crimes, including terrorism;
- assess the effectiveness of federal programs to control illegal drug use;
- identify ways to administer the nation’s immigration laws to better secure the nation’s borders and promote appropriate treatment of legal residents; and
- assess the administrative efficiency and effectiveness of the federal court and prison systems.

the Promotion of Viable Communities
- assess federal economic development assistance and its impact on communities;
- assess how the federal government can balance the promotion of home ownership with financial risk;
- assess the effectiveness of federal initiatives to assist small and minority-owned businesses;
- assess federal efforts to enhance national preparedness and capacity to respond to and recover from natural and man-made disasters; and
- assess how well federally supported housing programs meet their objectives and affect the well-being of recipient households and communities.
Responsible Stewardship of Natural Resources and the Environment
- assess the nation’s ability to ensure reliable and environmentally sound energy for current and future generations;
- assess federal strategies for managing land and water resources in a sustainable fashion for multiple uses;
- assess federal programs’ ability to ensure a plentiful and safe food supply, provide economic security for farmers, and minimize agricultural environmental damage;
- assess federal pollution prevention and control strategies; and
- assess efforts to reduce the threats posed by hazardous and nuclear wastes.

a Secure and Effective National Physical Infrastructure
- assess strategies for identifying, evaluating, prioritizing, financing, and implementing integrated solutions to the nation’s infrastructure needs;
- assess the impact of transportation and telecommunications policies and practices on competition and consumers;
- assess efforts to improve safety and security in all transportation modes;
- assess the U.S. Postal Service’s transformation efforts to ensure its viability and accomplish its mission; and
- assess federal efforts to plan for, acquire, manage, maintain, secure, and dispose of the government’s real property assets.

Provide Timely, Quality Service to the Congress and the Federal Government to Respond to Changing Security Threats and the Challenges of Global Interdependence

To achieve this goal, we will provide information and recommendations on the following:

Respond to Diffuse Threats to National and Global Security
- analyze the effectiveness of the federal government’s approach to providing for homeland security;
- assess U.S. efforts to protect computer and telecommunications systems supporting critical infrastructures in business and government; and
- assess the effectiveness of U.S. and international efforts to prevent the proliferation of nuclear, biological, chemical, and conventional weapons and sensitive technologies.
Ensure Military Capabilities and Readiness
- assess the ability of DOD to maintain adequate readiness levels while addressing the force structure changes needed in the 21st century;
- assess overall human capital management practices to ensure a high-quality total force;
- identify ways to improve the economy, efficiency, and effectiveness of DOD’s support infrastructure and business systems and processes;
- assess the National Nuclear Security Administration’s efforts to maintain a safe and reliable nuclear weapons stockpile;
- analyze and support DOD’s efforts to improve budget analyses and performance management;
- assess whether DOD and the services have developed integrated procedures and systems to operate effectively together on the battlefield; and
- assess the ability of weapon system acquisition programs and processes to achieve desired outcomes.

Advance and Protect U.S. International Interests
- analyze the plans, strategies, costs, and results of the U.S. role in conflict interventions;
- analyze the effectiveness and management of foreign aid programs and the tools used to carry them out;
- analyze the costs and implications of changing U.S. strategic interests;
- evaluate the efficiency and accountability of multilateral organizations and the extent to which they are serving U.S. interests; and
- assess the strategies and management practices for U.S. foreign affairs functions and activities.

Respond to the Impact of Global Market Forces on U.S. Economic and Security Interests
- analyze how trade agreements and programs serve U.S. interests;
- improve understanding of the effects of defense industry globalization;
- assess how the United States can influence improvements in the world financial system;
- assess the ability of the financial services industry and its regulators to maintain a stable and efficient global financial system;
- evaluate how prepared financial regulators are to respond to change and innovation; and
- assess the effectiveness of regulatory programs and policies in ensuring access to financial services and deterring fraud and abuse in financial markets.

Help Transform the Government’s Role and How It Does Business to Meet 21st Century Challenges

To achieve this goal, we will provide information and recommendations on the following:

Analyze the Implications of the Increased Role of Public and Private Parties in Achieving Federal Objectives
- analyze the modern service-delivery system environment and the complexity and interaction of service-delivery mechanisms;
- assess how involvement of state and local governments and nongovernmental organizations affects federal program implementation and achievement of national goals; and
- assess the effectiveness of regulatory administration and reforms in achieving government objectives.
Assess the Government’s Human Capital and Other Capacity for Serving the Public
- identify and facilitate the implementation of human capital practices that will improve federal economy, efficiency, and effectiveness;
- identify ways to improve the financial management infrastructure capacity to provide useful information to manage for results and costs day to day;
- assess the government’s capacity to manage information technology to improve performance;
- assess efforts to manage the collection, use, and dissemination of government information in an era of rapidly changing technology;
- assess the effectiveness of the Federal Statistical System in providing relevant, reliable, and timely information that meets federal program needs; and
- identify more businesslike approaches that can be used by federal agencies in acquiring goods and services.

Support Congressional Oversight of the Federal Government’s Progress toward Being More Results-Oriented, Accountable, and Relevant to Society’s Needs
- analyze and support efforts to instill results-oriented management across the government;
- highlight the federal programs and operations at highest risk and the major performance and management challenges confronting agencies;
- identify ways to strengthen accountability for the federal government’s assets and operations;
- promote accountability in the federal acquisition process;
- assess the management and results of the federal investment in science and technology and the effectiveness of efforts to protect intellectual property; and
- identify ways to improve the quality of evaluative information.
- develop new resources and approaches that can be used in measuring performance and progress on the nation’s 21st century challenges.

Analyze the Government’s Fiscal Position and Approaches for Financing the Government
- analyze the long-term fiscal position of the federal government;
- analyze the structure and information for budgetary choices and explore alternatives for improvement;
- contribute to congressional deliberations on tax policy;
- support congressional oversight of the Internal Revenue Service’s modernization and reform efforts; and
- assess the reliability of financial information on the government’s fiscal position and financing sources.

Maximize the Value of GAO by Being a Model Federal Agency and a World-Class Professional Services Organization

To achieve this goal, we will do the following:

Sharpen GAO’s Focus on Clients’ and Customers’ Requirements
- continuously update client requirements;
- develop and implement stakeholder protocols and refine client protocols; and
- identify and refine customer requirements and measures.

GAO is a key source of objective information and analyses and, as such, plays a crucial role in supporting congressional decision-making and helping improve government for the benefit of the American people. This testimony focuses on GAO's (1) fiscal year 2002 performance and results, (2) efforts to maximize our effectiveness, responsiveness, and value, and (3) our budget request for fiscal year 2004 to support the Congress and serve the American public. In fiscal year 2002, GAO's work informed the national debate on a broad spectrum of issues, including helping the Congress answer questions about the associated costs and program tradeoffs of the national preparedness strategy and providing perspectives on how best to organize and manage the new Transportation Security Administration and Department of Homeland Security.
GAO's efforts helped the Congress and government leaders achieve $37.7 billion in financial benefits--an $88 return on every dollar invested in GAO. The return on the public's investment in GAO extends beyond dollar savings to improvements in how the government serves its citizens. This includes a range of accomplishments that serve to improve safety, enhance security, protect privacy, and increase the effectiveness of a range of federal programs and activities. The results of our work in fiscal year 2002 were possible, in part, because of changes we have made to transform GAO in order to meet our goal of being a model federal agency and a world-class professional services organization. We had already realigned GAO's structure and resources to better serve the Congress in its legislative, oversight, appropriations, and investigative roles. Over the past year, we cultivated and fostered congressional and agency relations, better refined our strategic and annual planning and reporting processes, and enhanced our information technology infrastructure. We also continued to provide priority attention to our management challenges of human capital, information security, and physical security. We have made progress in addressing each of these challenges, but we still have work to do and plan to ask for legislation to help address some of these issues. GAO is requesting budget authority of $473 million for fiscal year 2004. Our request represents a modest 4.1 percent increase in direct appropriations, primarily for mandatory pay and uncontrollable costs. This budget will allow us to maintain current operations for serving the Congress as outlined in our strategic plan and continue initiatives to enhance our human capital, support business processes, and ensure the safety and security of GAO staff, facilities, and information systems. 
Approximately $4.8 million, or about 1 percent, of our request relates to several safety and security items that are included in our fiscal year 2003 supplemental request. If this supplemental request is granted, our fiscal year 2004 request could be reduced accordingly.
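The $88-per-dollar return cited above can be sanity-checked with simple arithmetic: dividing the $37.7 billion in financial benefits by the per-dollar ratio implies the size of the budget base used in the calculation. This is a back-of-the-envelope sketch, not GAO's published methodology for computing the figure:

```python
# Back-of-the-envelope check of the cited return on investment:
# $37.7 billion in financial benefits at an $88 return per dollar
# implies a budget base of roughly $428 million.

financial_benefits = 37.7e9   # fiscal year 2002 financial benefits, in dollars
return_per_dollar = 88        # cited return on every dollar invested

implied_budget = financial_benefits / return_per_dollar
print(f"implied budget base: ${implied_budget / 1e6:.0f} million")
```

The implied base is consistent in magnitude with GAO's annual appropriation discussed elsewhere in this testimony, which is why the ratio is a useful shorthand for the Congress.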
In our recent report, we summarize many of the agencies’ financial management system implementation failures that have been previously reported by us and inspectors general (IGs). Our work and that of the IGs over the years have shown that agencies have failed to employ accepted best practices in systems development and implementation (commonly referred to as disciplined processes) that can collectively reduce the risk associated with implementing financial management systems. In our report, we identified key causes of failures within several recurring themes, including disciplined processes and human capital management. DHS would be wise to study the lessons learned through other agencies’ costly failures and consider building a strong foundation for successful financial management system implementation, as we will discuss later in our testimony. From our review of over 40 prior reports, we identified weaknesses in the following areas of disciplined processes. Requirements management. Ill-defined or incomplete requirements have been identified by many system developers and program managers as a root cause of system failure. It is critical that requirements—functions the system must be able to perform—be carefully defined and flow from the concept of operations (how the organization’s day-to-day operations are or will be carried out to meet mission needs). In our previous work, we have found agencies lacking a concept of operations, working with vague and ambiguous requirements, and relying on requirements that are not traceable or linked to business processes. Testing. Complete and thorough testing is essential to provide reasonable assurance that new or modified systems will provide the capabilities specified in the requirements. Testing is the process of executing a program with the intent of finding errors. Because requirements provide the foundation for system testing, they must be complete, clear, and well documented to design and implement an effective testing program.
Absent this, an organization is taking a significant risk that substantial defects will not be detected until after the system is implemented. Industry best practices indicate that the sooner a defect is recognized and corrected, the cheaper it is to fix. In our work, we have found flawed test plans, inadequate timing of testing, and ineffective systems testing. Data conversion. In its white paper on financial system data conversion, the Joint Financial Management Improvement Program (JFMIP) identified data conversion as one of the critical tasks necessary to successfully implement a new financial system. JFMIP also noted that if data conversion is done right, the new system has a much greater opportunity for success. On the other hand, converting data incorrectly or entering unreliable data from a legacy system has long-lasting repercussions. The adage “garbage in, garbage out” best describes the adverse impact. Examples of problems we have reported on include agencies that have not properly developed and implemented good data conversion plans, have planned the data conversion too late in the project, and have not reconciled account balances. Risk management. According to leading systems acquisition organizations, risk management is a process for identifying potential problems before they occur and adjusting the acquisition to decrease the chance of their occurrence. Risks should be identified as early as possible, and a risk management process should be developed and put in place. Risks should be identified, analyzed, mitigated, and tracked to closure. Effectively managing risks is one way to minimize the chances of project cost, schedule, and performance problems occurring. We have reported that agencies have not fully implemented effective risk management practices, including shortcomings in identifying and tracking risks. Project management.
Effective project management is the process for planning and managing all project-related activities, such as defining how components are interrelated, defining tasks, estimating and obtaining resources, and scheduling activities. Project management allows the performance, cost, and schedule of the overall program to be continually measured, compared with planned objectives, and controlled. We have reported on a number of project management problems, including inadequate project management structure, schedule-driven projects, and lack of performance metrics and oversight. Quality assurance. Quality assurance provides independent assessments of whether management process requirements are being followed and whether product standards and requirements are being satisfied. This process includes, among other things, the use of independent verification and validation (IV&V). We and others have reported on problems related to agencies’ use of IV&V, including specific functions not being performed by the IV&V contractor, the IV&V contractor not being independent, and IV&V recommendations not being implemented. Inadequate implementation of disciplined processes can manifest itself in many ways when implementing a financial management system. While full deployment has been delayed at some agencies, specific functionality has been delayed or flawed at other agencies. The following examples illustrate some of the recurring problems related to the lack of disciplined processes in implementing financial management systems. In May 2004, we reported significant flaws in requirements management and testing that adversely affected the initial development and implementation of the Army’s Logistics Modernization Program (LMP), in which the Army estimated that it would invest about $1 billion. These flaws also hampered efforts to correct the operational difficulties experienced at the Tobyhanna Army Depot.
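The requirements management and testing disciplines discussed above hinge on traceability: every requirement should be exercised by at least one test case before deployment, and any requirement with no test tracing to it is a gap. The sketch below illustrates that check with invented requirement IDs and test cases; none of the data is drawn from LMP or any agency system:

```python
# Minimal requirements-to-test traceability check (hypothetical data).
# A requirement with no test case exercising it is exactly the kind of
# gap that disciplined processes are meant to catch before deployment.

requirements = {
    "REQ-001": "Post inventory receipts to the general ledger",
    "REQ-002": "Reconcile depot work orders daily",
    "REQ-003": "Reject transactions with invalid account codes",
}

# Map of test case -> requirement IDs it exercises.
test_coverage = {
    "TC-101": ["REQ-001"],
    "TC-102": ["REQ-001", "REQ-003"],
}

def untraced_requirements(reqs, coverage):
    """Return requirement IDs not exercised by any test case."""
    covered = {req_id for req_ids in coverage.values() for req_id in req_ids}
    return sorted(set(reqs) - covered)

gaps = untraced_requirements(requirements, test_coverage)
print(gaps)  # REQ-002 has no test tracing to it
```

In practice agencies maintain such traceability in requirements management tools rather than ad hoc scripts, but the underlying cross-check is the same.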
In June 2005, we reported that the Army had not effectively addressed its requirements management and testing problems, and data conversion weaknesses had hampered the Army’s ability to address the problems that needed to be corrected before the system could be fielded to other locations. The Army lacked reasonable assurance that (1) system problems experienced during the initial deployment and causing the delay of future deployments had been corrected and (2) LMP was capable of providing the promised system functionality. Subsequent deployments of the system have been delayed. We reported in February 2005 that our experience with major systems acquisitions, such as the Office of Personnel Management’s (OPM) Retirement Systems Modernization (RSM) program, has shown that having sound disciplined processes in place increases the likelihood of the acquisitions meeting cost and schedule estimates as well as performance requirements. However, we found that many of the processes in these areas for RSM were not sufficiently developed, were still under development, or were planned for future development. For example, OPM lacked needed processes for developing and managing requirements, planning and managing project activities, managing risks, and providing sound information to investment decision makers. Without these processes in place, RSM was at increased risk of not being developed and delivered on time and within budget and falling short of promised capabilities. In August 2004, the Department of Veterans Affairs (VA) IG reported that the effect of transferring inaccurate data to VA’s new core financial system at a pilot location interrupted patient care and medical center operations. This raised concerns that similar conversion problems would occur at other VA facilities if the conditions identified were not addressed and resolved nationwide prior to roll out. 
Some of the specific conditions the IG noted were that contracting and monitoring of the project were not adequate, and the deployment of the new system encountered multiple problems, including those related to software testing, data conversion and system interfaces, and project management. As a result of these problems, patient care was interrupted by supply outages and other problems. The inability to provide sterile equipment and needed supplies to the operating room resulted in the cancellation of 81 elective surgeries for a week in both November 2003 and February 2004. In addition, the operating room was forced to operate at two-thirds of its prior capacity. Because of the serious nature of the problems raised with the new system, VA management decided to focus on transitioning back to the previous financial management software at the pilot location and assembled a senior leadership team to examine the results of the pilot and make recommendations to the VA Secretary regarding the future of the system. We are concerned that federal agencies’ human capital problems are eroding the ability of many agencies—and threatening the ability of others—to perform their IT missions economically, efficiently, and effectively. For example, we found that in the 1990s, the initial rounds of downsizing were set in motion without considering the longer-term effects on agencies’ IT performance capacity. Additionally, a number of individual agencies drastically reduced or froze their hiring efforts for extended periods. Consequently, following a decade of downsizing and curtailed investments in human capital, federal agencies currently face skills, knowledge, and experience imbalances, especially in their IT workforces. Without corrective action, these imbalances will worsen, especially in light of the numbers of federal civilian workers becoming eligible to retire in the coming years.
In this regard, we are emphasizing the need for additional focus on the following three key elements of human capital management. Strategic workforce planning. Having staff with the appropriate skills is key to achieving financial management improvements, and managing an organization’s employees is essential to achieving results. It is important that agencies incorporate strategic workforce planning by (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce to meet the needs of the future. This incorporates a range of activities from identifying and defining roles and responsibilities, to identifying team members, to developing individual competencies that enhance performance. We have reported on agencies without a sufficient human capital strategy or plan, skills gap analysis, or training plans. Human resources. Having sufficient numbers of people on board with the right mix of knowledge and skills can make the difference between success and failure. This is especially true in the IT area, where widespread shortfalls in human capital have contributed to demonstrable shortfalls in agency and program performance. We have found agency projects with significant human resource challenges, including addressing personnel shortages, filling key positions, and developing and retaining staff with the required competencies. Change management. According to leading IT organizations, organizational change management is the process of preparing users for the business process changes that will accompany implementation of a new system. An effective organizational change management process includes project plans and training that prepare users for impacts the new system might have on their roles and responsibilities and a process to manage those changes. 
We have reported on various problems with agencies’ change management, including transition plans not being developed, business processes not being reengineered, and customization not being limited. The following examples illustrate some of the recurring problems related to human capital management in implementing financial management systems. We first reported in February 2002 that the Internal Revenue Service (IRS) had not defined or implemented an IT human capital strategy for its Business Systems Modernization (BSM) program and recommended that IRS address this weakness. In June 2003, we reported that IRS had made important progress in addressing our recommendation, but had yet to develop a comprehensive multiyear workforce plan. IRS also had not hired, developed, or retained sufficient human capital resources with the required competencies, including technical skills, in specific mission areas. In September 2003, the Treasury Inspector General for Tax Administration reported that IRS’s Modernization and IT Services organization had made significant progress in developing its human capital strategy but had not yet (1) identified and incorporated human capital asset demands for the modernized organization, (2) developed detailed hiring and retention plans, or (3) established a process for reviewing the human capital strategy development and monitoring its implementation. We most recently reported in July 2005 that IRS had taken some steps in the right direction. However, until IRS fully implements its strategy, it will not have all of the necessary IT knowledge and skills to effectively manage the BSM program or to operate modernized systems. Consequently, the risk of BSM program and project cost increases, schedule slippages, and performance problems is increased. 
We reported, in September 2004, that staff shortages and limited strategic workforce planning resulted in the Department of Health and Human Services (HHS) not having the resources needed to effectively design and operate its new financial management system. HHS had taken the first steps in strategic workforce planning. For example, the Centers for Disease Control and Prevention (CDC), where the first deployment was scheduled, was the only operating division that had prepared a competency report, but a skills gap analysis and training plan for CDC had not been completed. In addition, many government and contractor positions on the implementation project were not filled as planned. While HHS and the systems integrator had taken measures to acquire additional human resources for the implementation of the new financial management system, we concluded that scarce resources could significantly jeopardize the project’s success and lead to several key deliverables being significantly behind schedule. In September 2004, HHS decided to delay its first scheduled deployment at CDC by 6 months in order to address these and other issues. DHS faces unique challenges in attempting to develop integrated financial management systems across the breadth of such a large and diverse department. DHS was established by the Homeland Security Act of 2002, as the 15th Cabinet Executive Branch Department of the United States government. DHS inherited a myriad of redundant financial management systems from 22 diverse agencies along with 180,000 employees, about 100 resource management systems, and 30 reportable conditions identified in prior component financial audits. Of the 30 reportable conditions, 18 were so severe they were considered material weaknesses. 
Among these weaknesses were insufficient internal controls or processes to reliably report financial information such as revenue, accounts receivable, and accounts payable; significant system security deficiencies; financial systems that required extensive manual processes to prepare financial statements; and incomplete policies and procedures necessary to complete basic financial management activities. DHS received a disclaimer of opinion on its financial statements for fiscal year 2005, and the independent auditors also reported that DHS’s financial management systems did not substantially comply with the requirements of FFMIA. The disclaimer was primarily due to financial reporting problems at five components. The five components include Immigration and Customs Enforcement (ICE), the United States Coast Guard (Coast Guard), State and Local Government Coordination and Preparedness (SLGCP), the Transportation Security Administration (TSA), and Emergency Preparedness and Response (EPR). Further, ICE is an accounting service provider for other DHS components, and it failed to adequately maintain both its own accounting records and those of other DHS components during fiscal year 2005. The auditors’ fiscal year 2005 report discusses 10 material weaknesses, two other reportable conditions in internal control, and instances of noncompliance with seven laws and regulations. Among the 10 material weaknesses were inadequate financial management and oversight at DHS components, primarily ICE and Coast Guard; decentralized financial reporting at the component level; significant general IT and application control weaknesses over critical financial and operational data; and the lack of accurate and timely reconciliation of fund balance with treasury accounts. The results of the auditors’ tests of fiscal year 2005 compliance with certain provisions of laws, regulations, contracts, and grant agreements disclosed instances of noncompliance. 
The DHS auditors reported instances of noncompliance with 31 U.S.C. § 3512(c),(d), commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA); the Federal Financial Management Improvement Act of 1996 (FFMIA), Pub. L. No. 104-208, div. A, § 101(f), title VIII, 110 Stat. 3009, 3009-389 (Sept. 30, 1996); the Federal Information Security Management Act of 2002 (FISMA), Pub. L. No. 107-347, title III, 116 Stat. 2899, 2946 (Dec. 17, 2002); the Single Audit Act, as amended (codified at 31 U.S.C. §§ 7501-7507), and other laws and regulations related to OMB Circular No. A-50, Audit Follow-up, as revised (Sept. 29, 1982); the Improper Payments Information Act of 2002, Pub. L. No. 107-300, 116 Stat. 2350 (Nov. 26, 2002); the Department of Homeland Security Financial Accountability Act of 2004, Pub. L. No. 108-330, 118 Stat. 1275 (Oct. 16, 2004); and the Government Performance and Results Act of 1993 (GPRA), Pub. L. No. 103-62, 107 Stat. 285 (Aug. 3, 1993). Although DHS inherited many of the reportable conditions and noncompliance issues discussed above, the department’s top management, including the CFO, is ultimately responsible for ensuring that progress is made in the area of financial management. In August 2003, DHS began the “electronically Managing enterprise resources for government effectiveness and efficiency” (eMerge2) program at an estimated cost of $229 million. The acquisition of eMerge2 was in the early stages, and continued focus and follow-through, among other things, would be necessary for it to be successful. According to DHS officials, because the project was not meeting its performance goals and timeline, they began considering whether to continue the project and in spring 2005 started looking at another strategy.
DHS officials told us they decided to change the strategy for the eMerge2 program; acquisition and development activities on eMerge2 had stopped, and the blanket purchase agreement with the systems integrator expired. DHS officials added that the effort would proceed using a shared services approach, which allows its components to choose among three DHS providers of financial management services and the Department of the Treasury’s Bureau of the Public Debt, which was identified by OMB as a governmentwide financial management center of excellence. DHS officials told us that although a departmentwide concept of operations and migration plan were still under development, they expected progress to be made in the next 5 years. As we will discuss later, a departmentwide concept of operations document would help DHS and others understand such items as how DHS will migrate the various entities to these shared service providers and how it will obtain the departmental information necessary to manage the agency from these disparate operations. DHS officials acknowledged that they needed to first address the material weaknesses at the proposed shared service providers before component agencies migrate to them. The key for federal agencies, including DHS, to avoid the long-standing problems that have plagued financial management system improvement efforts is to address the foremost causes of those problems and adopt solutions that reduce the risks associated with these efforts to acceptable levels. Although it appears that DHS will adopt a shared services approach to meet its needs for integrated financial management systems, implementing this approach will be complex and challenging, making the adoption of best practices even more important for this undertaking. Based on industry best practices, we identified four key concepts that will be critical to DHS’s ability to successfully complete its planned migration to shared service providers.
Careful consideration of these four concepts, each one building upon the next, will be integral to the success of DHS’s strategy. The four concepts are (1) developing a concept of operations, (2) defining standard business processes, (3) developing a migration strategy for DHS components, and (4) defining and effectively implementing disciplined processes necessary to properly manage the specific projects. We will now highlight the key issues to be considered for each of the four areas. As we discussed previously, a concept of operations defines how an organization’s day-to-day operations are (or will be) carried out to meet mission needs. The concept of operations includes high-level descriptions of information systems, their interrelationships, and information flows. It also describes the operations that must be performed, who must perform them, and where and how the operations will be carried out. Further, it provides the foundation on which requirements definitions and the rest of the systems planning process are built. Normally, a concept of operations document is one of the first documents to be produced during a disciplined development effort and flows from both the vision statement and the enterprise architecture. According to the Institute of Electrical and Electronics Engineers (IEEE) standards, a concept of operations is a user-oriented document that describes the characteristics of a proposed system from the users’ viewpoint. The key elements that should be included in a concept of operations are major system components, interfaces to external systems, and performance characteristics such as speed and volume. Another key element of a concept of operations is a transition strategy that is useful for developing an understanding of how and when changes will occur. Not only is this needed from an investment management point of view, it is also a key element in addressing the human capital problems discussed previously that revolved around change management strategies.
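As an illustration only, the key concept of operations elements named above can be expressed as a simple completeness check. The element names and the sample draft below are assumptions for illustration, not any DHS or IEEE artifact:

```python
# Illustrative sketch: checking a draft concept of operations (CONOPS)
# for the key elements named above. All names here are hypothetical.

REQUIRED_ELEMENTS = {
    "major_system_components",
    "external_interfaces",
    "performance_characteristics",
    "transition_strategy",
}

def missing_elements(conops: dict) -> set:
    """Return the required CONOPS elements absent from a draft document."""
    return REQUIRED_ELEMENTS - conops.keys()

draft = {
    "major_system_components": ["core ledger", "reporting layer"],
    "external_interfaces": ["Treasury", "component feeder systems"],
    "performance_characteristics": {"peak_transactions_per_day": 100_000},
}

print(sorted(missing_elements(draft)))  # → ['transition_strategy']
```

A check like this surfaces, for instance, a missing transition strategy, which the discussion above identifies as easy to overlook yet central to change management.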
Describing how to implement DHS’s approach for using shared service providers for its financial management systems, as well as the process that will be used to deactivate legacy systems that will be replaced or interfaced with a new financial management system, are key aspects that need to be addressed in a transition strategy. Key Issues for DHS to Consider: What is considered a financial management system? Are all the components using a standard definition? Who will be responsible for developing a DHS-wide concept of operations, and what process will be used to ensure that the resulting document reflects the departmentwide solution rather than individual component agency stove-piped efforts? How will DHS’s concept of operations be linked to its enterprise architecture? How can DHS obtain reliable information on the costs of its financial management systems investments? Business process models provide a way of expressing the procedures, activities, and behaviors needed to accomplish an organization’s mission and are helpful tools to document and understand complex systems. Business processes are the various steps that must be followed to perform a certain activity. For example, the procurement process would start when the agency defines its needs and issues a solicitation for goods or services, would continue through contract award and receipt of goods and services, and would end when the vendor properly receives payment. The identification of preferred business processes would be critical for standardization of applications and training and portability of staff. To maximize the success of a new system acquisition, organizations need to consider the redesign of current business processes. As we noted in our Executive Guide: Creating Value Through World-class Financial Management, leading finance organizations have found that productivity gains typically result from more efficient processes, not from simply automating old processes.
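The procurement example above can be sketched as a minimal business process model. The step names and allowed transitions are assumptions for illustration, not an actual federal workflow definition:

```python
# Illustrative business process model for the procurement example above.
# Step names and transitions are hypothetical, for illustration only.

PROCUREMENT_FLOW = {
    "define_needs": ["issue_solicitation"],
    "issue_solicitation": ["award_contract"],
    "award_contract": ["receive_goods_or_services"],
    "receive_goods_or_services": ["pay_vendor"],
    "pay_vendor": [],  # terminal step: the vendor properly receives payment
}

def is_valid_path(flow: dict, steps: list) -> bool:
    """Check that a sequence of steps follows the modeled process."""
    for current, nxt in zip(steps, steps[1:]):
        if nxt not in flow.get(current, []):
            return False
    return True

print(is_valid_path(PROCUREMENT_FLOW,
                    ["define_needs", "issue_solicitation", "award_contract",
                     "receive_goods_or_services", "pay_vendor"]))  # → True
# A path that skips solicitation and award violates the modeled process.
print(is_valid_path(PROCUREMENT_FLOW,
                    ["define_needs", "receive_goods_or_services"]))  # → False
```

Modeling the preferred process explicitly, rather than merely automating the existing steps, is one way such models generate better system requirements.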
Moreover, the Clinger-Cohen Act of 1996 requires agencies to analyze the missions of the agency and, based on the analysis, revise mission-related and administrative processes, as appropriate, before making significant investments in IT used to support those missions. Another benefit of what is often called business process modeling is that it generates better system requirements, since the business process models drive the creation of information systems that fit in the organization and will be used by end users. Other benefits include providing a foundation for agency efforts to describe the business processes needed for unique missions or developing subprocesses to support those at the departmentwide level. Key Issues for DHS to Consider: Who will be responsible for developing DHS-wide standard business processes that meet the needs of its component agencies? How will the component agencies be encouraged to adopt new processes, rather than selecting other methods that result in simply automating old ways of doing business? How will the standard business processes be implemented by the shared service providers to provide consistency across DHS? What process will be used to determine and validate the processes needed for DHS agencies that have unique needs? Although DHS has a goal of migrating agencies to a limited number of shared service providers, it has not yet articulated a clear and measurable strategy for achieving this goal.
In the context of migrating to shared service providers, critical activities include (1) developing specific criteria for requiring component agencies to migrate to one of the providers rather than attempting to develop and implement their own stove-piped business systems; (2) providing the necessary information for a component agency to make a selection of a shared service provider for financial management; (3) defining and instilling new values, norms, and behaviors within component agencies that support new ways of doing work and overcoming resistance to change; (4) building consensus among customers and stakeholders on specific changes designed to better meet their needs; and (5) planning, testing, and implementing all aspects of the transition from one organizational structure and business process to another. Finally, sustained leadership will be key to a successful strategy for moving DHS components towards consolidated financial management systems. In our Executive Guide: Creating Value Through World-class Financial Management, we found that leading organizations made financial management improvement an entitywide priority by, among other things, providing clear, strong executive leadership. We also reported that making financial management a priority throughout the federal government involves changing the organizational culture of federal agencies. Although the views about how an organization can change its culture can vary considerably, leadership (executive support) is often viewed as the most important factor in successfully making cultural changes. Top management must be totally committed in both words and actions to changing the culture, and this commitment must be sustained and demonstrated to staff. 
As pressure mounts to do more with less, increase accountability, and reduce fraud, waste, abuse, and mismanagement, and as efforts to reduce federal spending intensify, sustained and committed leadership will be a key factor in the successful implementation of DHS’s financial management systems. Key Issues for DHS to Consider: What guidance will be provided to assist DHS component agencies in adopting a change management strategy that reduces the risks of moving to a shared service provider? What processes will be put in place to ensure that individual component agency financial management system investment decisions focus on the benefits of standard processes and shared service providers? What process will be used to facilitate the decision-making process used by component agencies to select a provider? How will component agencies incorporate strategic workforce planning in the implementation of the shared service provider approach? Once the concept of operations and standard business processes have been defined and a migration strategy is in place, the use of disciplined processes, as discussed previously, will be a critical factor in helping to ensure that the implementation is successful. The key to avoiding long-standing implementation problems is to provide specific guidance to component agencies for financial management system implementations, incorporating the best practices identified by the Software Engineering Institute, the IEEE, the Project Management Institute, and other experts that have been proven to reduce risk in implementing systems. Such guidance should include the various disciplined processes such as requirements management, testing, data conversion and system interfaces, risk and project management, and related activities, which have been problematic in the financial systems implementation projects we and others have reviewed.
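One of the disciplined processes named above, data conversion, is commonly controlled with reconciliation checks between the legacy system and the new system before cutover. A minimal sketch follows; the account fields, balances, and pass/fail criteria are assumptions for illustration, not any agency’s actual procedure:

```python
from decimal import Decimal

# Illustrative data conversion reconciliation: compare record counts and
# control totals between a legacy extract and the converted target load.
# All field names and records below are hypothetical.

legacy = [
    {"account": "1010", "balance": Decimal("1500.00")},
    {"account": "2010", "balance": Decimal("-250.75")},
    {"account": "3010", "balance": Decimal("980.25")},
]
converted = [
    {"account": "1010", "balance": Decimal("1500.00")},
    {"account": "2010", "balance": Decimal("-250.75")},
    {"account": "3010", "balance": Decimal("980.25")},
]

def reconcile(src, dst):
    """Return (counts_match, totals_match) for a simple pre-cutover check."""
    counts_match = len(src) == len(dst)
    totals_match = (sum(r["balance"] for r in src)
                    == sum(r["balance"] for r in dst))
    return counts_match, totals_match

print(reconcile(legacy, converted))  # → (True, True)
```

Checks of this kind are the sort of control whose absence allowed inaccurate converted data to disrupt operations in the VA pilot discussed earlier.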
Disciplined processes have been shown to reduce the risks associated with software development and acquisition efforts to acceptable levels and are fundamental to successful system implementations. The principles of disciplined IT systems development and acquisition apply to shared services implementation, such as that contemplated by DHS. A disciplined software implementation process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. For example, disciplined processes should be in place to address the areas of data conversion and interfaces, two of the many critical elements necessary to successfully implement a new system, the lack of which has contributed to the failure of previous agency efforts. Further details on disciplined processes can be found in appendix III of our recently issued report. Key Issues for DHS to Consider: How can existing industry standards and best practices be incorporated into DHS-wide guidance related to financial management system implementation efforts, including migrating to shared service providers? What actions will be taken to reduce the risks and costs associated with data conversion and interface efforts? What oversight process will be used to ensure that modernization efforts effectively implement the prescribed policies and procedures? In closing, the best practices we identified are interrelated and interdependent, collectively providing an agency with a better outcome for its system deployment, including cost savings, improved service and product quality, and ultimately, a better return on investment. The predictable result of DHS and other agencies not effectively addressing these best practices is projects that do not meet cost, schedule, and performance objectives. There will never be a 100 percent guarantee that a new system will be fully successful from the outset.
However, risk can be managed and reduced to acceptable levels through the use of disciplined processes, which in short represent best practices that have proven their value in the past. We view the application of disciplined processes to be essential for DHS’s systems modernization efforts. Based on industry best practices, the following four concepts would help ensure a sound foundation for developing and implementing a DHS-wide solution for the complex financial management problems it currently faces: (1) developing a concept of operations that expresses DHS’s view of financial management and how that vision will be realized, (2) defining standard business processes, (3) developing an implementation strategy, and (4) defining and effectively implementing applicable disciplined processes. If properly implemented, the best practices discussed here today and in our recently issued report will help reduce the risk associated with a project of this magnitude and importance to an acceptable level. With DHS at an important crossroads in the implementation of its financial management systems, it has an excellent opportunity to use these building blocks to form a solid foundation on which to base its efforts and avoid the problems that have plagued so many other federal agencies faced with the same challenge. Mr. Chairmen, this concludes our prepared statement. We would be happy to respond to any questions you or other Members of the Subcommittees may have at this time. For information about this testimony, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-9095 or at [email protected], or Keith A. Rhodes, Chief Technologist, Applied Research and Methods, who may be reached at (202) 512-6412 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Kay Daly, Assistant Director; Chris Martin, Senior-Level Technologist; Francine DelVecchio; Mike LaForge; and Chanetta Reed.
Numerous other individuals made contributions to the GAO reports cited in this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Over the years, GAO has reported on various agencies' financial management system implementation failures. GAO's recent report (GAO-06-184) discusses some of the most significant problems previously identified with agencies' financial management system modernization efforts. For today's hearing, GAO was asked to provide its perspectives on the importance of the Department of Homeland Security (DHS) following best practices in developing and implementing its new financial management systems and avoiding the mistakes of the past. GAO's testimony (1) discusses the recurring problems identified in agencies' financial management systems development and implementation efforts, (2) points out key financial management system modernization challenges at DHS, and (3) highlights the building blocks that form the foundation for successful financial management system implementation efforts. GAO's work and that of agency inspectors general over the years has shown that agencies have failed to employ accepted best practices in systems development and implementation (commonly referred to as disciplined processes) that can collectively reduce the risk associated with implementing financial management systems. GAO's recent report identified key causes of failures within several recurring themes, including (1) disciplined processes, such as requirements management, testing, and project management; and (2) human capital management, such as workforce planning, human resources, and change management.
Prior reports have identified costly systems implementation failures attributable to problems in these areas at agencies across the federal government. DHS faces unique challenges in attempting to develop integrated financial management systems across the breadth of such a large and diverse department. DHS inherited a myriad of redundant financial management systems from 22 diverse agencies and about 100 resource management systems. Among the weaknesses identified in prior component financial audits were insufficient internal controls or processes to reliably report financial information such as revenue, accounts receivable, and accounts payable; significant system security deficiencies; financial systems that required extensive manual processes to prepare financial statements; and incomplete policies and procedures necessary for conducting basic financial management activities. In August 2003, DHS began a program to consolidate and integrate DHS financial accounting and reporting systems. DHS officials said they recently decided to develop a new strategy for the planned financial management systems integration program, referred to as eMerge2, because the prior strategy was not meeting its performance goals and timeline. DHS's revised strategy will allow DHS components to choose from an array of existing financial management shared service providers. Based on industry best practices, GAO identified four key concepts that will be critical to DHS's ability to successfully complete its planned migration to shared service providers. Careful consideration of these four concepts, each one building upon the next, will be integral to the success of DHS's strategy. The four concepts are developing a concept of operations, defining standard business processes, developing a strategy for implementing DHS's shared services approach across the department, and defining and effectively implementing disciplined processes necessary to properly manage the specific projects. 
With DHS at an important crossroads in implementing financial management systems, it has an excellent opportunity to use these building blocks to form a solid foundation on which to base its efforts and avoid the problems that have plagued so many other federal agencies.
Job Corps was established as a national employment and training program in 1964 to provide severely disadvantaged youth with a wide range of services, including basic/remedial education, vocational training, and social skills instruction, usually at residential facilities. It remains one of the few federally run programs, unlike many other employment training programs that are federally funded but are operated by state or local governments. Job Corps centers are operated by public or private organizations under contract with Labor. Recent legislative proposals to consolidate much of the nation’s job training system into block grants to the states have produced debate on the relationship between Job Corps and the states, including whether responsibility for Job Corps should be delegated to the states. A 1995 Senate-passed bill retained Job Corps as a separate federally administered program; a 1995 House-passed bill was silent about the Job Corps’ future as a separate entity. A conference committee is currently attempting to resolve the differences between the two bills. The Senate bill proposes several changes to better integrate Job Corps with state and local workforce development initiatives, including requiring center operators to submit operating plans to Labor, through their state governors; requiring center operators to give nearby communities advance notice of any center changes that could affect them; and permitting the governor to recommend individuals to serve on panels to select center operators. Labor officials stated that the program is already playing a proactive role in ensuring that the National Job Corps program works more closely with state and local employment, education, and training programs. According to Job Corps officials, the program has received funding to open nine additional centers—five in program year 1996 and four in program year 1997—all of which will be located in states with existing centers.
Job Corps’ nine regional directors are responsible for the day-to-day administration of the program at the centers located within their geographic boundaries. Included among their responsibilities are the recruitment of youth for program participation and the assignment of enrollees to one of the program centers. Recruitment is typically carried out by private contractors, the centers, or state employment services under contract with the regional directors. The Job Corps legislation provides some broad guidance with respect to assigning enrollees to centers. It states that participants are to be assigned to the center closest to their residence, except for good cause. Exceptions can include avoiding undue delay in assigning participants to a center, meeting educational or training needs, or ensuring efficiency and economy in the operation of the program. The program currently enrolls participants aged 16 to 24 who are severely disadvantaged, in need of additional education or training, and living in a disruptive environment. Our June 1995 report contained an analysis of characteristics of those terminating from Job Corps in program year 1993 showing that over two-thirds of the program’s participants faced multiple barriers to employment. Enrollments are voluntary, and training programs are open entry, open exit, and self-paced, allowing participants to enroll throughout the year and to progress at their own pace. On average, participants spend about 8 months in the program but can stay up to 2 years. In addition to basic education and vocational training courses, each of the centers provides participants with a range of services including counseling, health care (including dental), room and board, and recreational activities. Skills training is offered in a variety of vocational areas, including business occupations, automotive repair, construction trades, and health occupations. 
These courses are taught by center staff, private contractors, or instructors provided under contracts with national labor and business organizations. In addition, Job Corps offers, at a limited number of centers, advanced training in various occupations including food service, clerical, and construction trades. This training is designed to provide additional instruction to participants from centers across the nation who have demonstrated the ability to perform at a higher skill level. One feature that makes Job Corps different from other youth training programs is its residential component. About 90 percent of the participants enrolled each year live at the centers, allowing services to be provided 24 hours a day, 7 days a week. The premise for boarding participants is that most come from a disruptive environment and, therefore, can benefit from receiving education and training in a new setting where a variety of support services are available around the clock. Participation in Job Corps can lead to placement in a job or enrollment in further training or education. It can also lead to educational achievements such as earning a high school diploma and gaining reading or math skills. However, the primary outcome for Job Corps participants is employment; about 64 percent of those leaving the program get jobs. Job Corps program capacity differs widely among the states because the number of centers in each state differs, and the size of individual centers within the states varies substantially. Job Corps centers are located in 46 states, the District of Columbia, and Puerto Rico (see fig. 1). Among states with centers, the number ranges from one center in each of 19 states; to six centers each in California, Kentucky, and Oregon; to seven in New York State. In-state capacity differs according to the number of centers in each state, the size of individual centers, and the average time participants spend in the program. 
For example, Kentucky’s centers can serve 6,373 participants annually, nearly double the number that can be served by centers in either California (3,477) or New York (3,252); Idaho has only one center and a capacity of about 200. (See app. IV for a listing of the capacity within each state with a Job Corps center.) As shown in figure 2, Job Corps centers in 9 states had the capacity to serve over 2,000 Job Corps participants annually, whereas centers in 10 states could serve fewer than 500 participants annually. Nationwide, 41 percent of the approximately 64,000 program year 1994 Job Corps participants (about 44 percent in program year 1993) who lived in states with Job Corps centers were assigned to centers outside their home state. Openings at centers located in their states of residence were often filled by participants from other states. Those participants assigned out of state travel greater distances than those who are assigned to an in-state center. Yet, even when assigned out of state, participants tend to stay within the Labor region in which they reside. Regardless of where they are assigned, participants tend to be employed in their state of residence. Considerable variation existed among the states in the extent to which Job Corps participants were assigned to out-of-state centers (see fig. 3). In program year 1994, the majority of Job Corps participants from 15 states were assigned to centers outside their home state. For example, more than three-quarters of the Job Corps participants from Colorado, Illinois, South Carolina, and Wisconsin were assigned to centers in states other than the one in which they lived. On the other hand, less than a quarter of the youths in 16 states were assigned to out-of-state Job Corps centers. For example, less than 15 percent of the Job Corps participants from Minnesota, Nevada, New Jersey, and New York were assigned to centers outside their home state. (App. 
V lists the states included in each of the percentage groupings shown in fig. 3.)

[Figure 3: Percentage of Participants Assigned Out of State]

While substantial numbers of participants are assigned to out-of-state centers, the vast majority of all participants are assigned to centers within the Job Corps regions in which they reside. Nearly 95 percent of program year 1994 participants (92 percent in program year 1993) were assigned to a Job Corps center that was located in the same region as their residence. In 7 of Labor’s 10 regions, over 90 percent of Job Corps program participants were residents of the regions in which they were assigned, and in the remaining 3 regions, over 80 percent were regional residents. A portion of the remaining 5 percent who were transferred outside their region were assigned under agreements between regional directors to send participants to centers in other regions. For example, the director in region II said that he has an agreement to send approximately 150 youths to region I and 250 youths to region IV. The director in region IX assigns 400 to 600 youths to the Clearfield, Utah, center in region VIII and another 200 youths to region X. Job Corps participants assigned to centers outside their state of residence were sent to centers that were, on average, over 4 times as distant as the in-state center closest to a participant’s residence. For the approximately 26,000 youths leaving the program in program year 1994 who were assigned to out-of-state Job Corps centers, we compared the distances from their home to (1) the center to which they were assigned and (2) the in-state center nearest their residence. In 92 percent of the cases where participants were assigned out of state, there was an in-state Job Corps center closer to the participant’s home. On average, participants assigned to out-of-state centers traveled about 390 miles, whereas the closest in-state center was about 90 miles from their residence. 
For example, about 2,200 Florida residents were assigned to Job Corps centers in other states, traveling on average about 640 miles to attend those centers. In contrast, these participants would have traveled, on average, only about 70 miles had they been assigned to the nearest Florida center. We noted that while residents in many states were being assigned to out-of-state centers, a substantial number of nonresidents were being brought in and enrolled at in-state centers. For example, in program year 1994, of the approximately 1,000 Arkansas residents in Job Corps, about 600 (or 60 percent) were assigned to out-of-state centers. Yet, about 600 nonresidents were brought in to centers in Arkansas from other states. Similarly, in Georgia, 1,300 residents from that state were assigned to Job Corps centers located elsewhere, whereas about 1,900 individuals residing in other states were brought in to centers located in Georgia. Figure 4 shows states with large numbers (500 or more) of residents sent to out-of-state centers while large numbers of nonresidents were brought in-state. (App. VI provides, for each state, the number of nonresidents brought in from other states, as well as the number of residents sent to out-of-state centers, for program years 1994 and 1993.) Assigning participants to Job Corps centers outside their state of residence resulted in wide variations in the number of nonresidents at individual Job Corps centers nationwide. The majority of participants served at about one-third of the centers were out-of-state residents. Overall, we found that in 38 of the 113 Job Corps centers operating in program year 1994, 50 percent or more of the participants resided outside the state in which the center was located (see fig. 5). 
Fifteen centers had 75 percent or more nonresidents enrolled during program year 1994, and the 9 centers with the most nonresidents (85 percent or more) were located in Kentucky (6 centers), California (1), Utah (1), and West Virginia (1). Because program capacity in Kentucky, Utah, and West Virginia exceeded in-state demand, large numbers of nonresidents attended centers in these states. California, on the other hand, had insufficient capacity. Nonetheless, the number of nonresidents at the California center may have been high because it provided advanced training for participants who previously had completed some basic level of training at centers across the nation. Forty-seven centers had less than 25 percent nonresidents enrolled, including 30 centers with less than 10 percent of their program participants coming from out of state. Regardless of where Job Corps participants were assigned, those who found jobs usually did so in their home state. Of the approximately 42,000 Job Corps participants who obtained jobs after leaving the program in 1994, about 83 percent found jobs in their state of residence (85 percent in program year 1993). Even those participants who were assigned to Job Corps centers outside their state of residence generally returned to their home states for employment. Specifically, of the 18,200 participants obtaining jobs after being trained in centers outside their state of residence, about 13,700 (75 percent) obtained those jobs in their home state (see fig. 6). Regional officials stated that substantial numbers of participants were assigned to centers out of state due, in part, to Labor’s desire to fully utilize centers. The other principal reason given was to satisfy participant preferences either to be assigned to a specific center or to be enrolled in a specific occupational training course. According to Labor officials, full utilization of Job Corps centers was one of the principal reasons for assigning participants out of state. 
The Job Corps program does not routinely collect the reasons for out-of-state assignments and, therefore, we were unable to document the specific factors behind these decisions. However, we contacted Labor officials, including each of its nine regional directors—who are ultimately responsible for center assignments—as well as contractors responsible for 15 outreach/screening contracts, to determine what factors contributed to out-of-state assignments. For the most part, these officials stated that one reason for assigning participants not to the center closest to their residence but instead to out-of-state centers was to ensure that centers were fully utilized. For example, they pointed out that many residents from Florida were assigned to centers in Kentucky; otherwise, centers in Kentucky would remain underutilized. A similar situation was cited with respect to participants from California assigned to a center in Utah that would otherwise be underutilized. In addition, Labor officials noted that participants were assigned to out-of-state centers to fill openings that occurred throughout the year because participants continuously leave the program due to the program’s open-entry, open-exit, self-paced format. Moreover, at any point, there may not be any state residents ready to enroll in the program. Maintaining full capacity in Job Corps centers is one measure Labor uses in evaluating regional director performance; Labor data indicate that, except for a portion of program year 1994, the program has operated near full capacity during the previous 3 program years. Vacancies can frequently occur at Job Corps centers because of the uneven distribution of program capacity in relation to demand for services, the continuous turnover of participants at individual centers, and the irregular flow of participants into the program. Labor officials said that in program year 1994, Job Corps had an average occupancy rate of about 91 percent programwide. 
Average occupancy rates at the regional level, in program year 1994, ranged from about 83 percent to 97 percent. We found less evidence to support the other principal reason cited for assigning participants to distant centers—the need to satisfy participant preferences, either to attend a particular center or to receive training in a particular occupation. While the Job Corps data system does not provide information on the extent to which such preferences are considered when making assignments, we were able to gain some insight into the degree to which specific vocational offerings might explain out-of-state assignments. We analyzed the occupational training courses in which out-of-state participants were enrolled. We found that over two-thirds of these individuals were either enrolled in occupational courses commonly offered throughout the national network of Job Corps centers or were never enrolled in an occupational course at all. For example, about 13 percent of the participants sent to out-of-state centers were being trained in clerical positions (available at 91 centers), about 8 percent in food service (available at 94 centers), and 8 percent in health occupations (available at 72 centers). In addition, about 11 percent received no specific vocational offering after being assigned to an out-of-state center (see table 1). Thus, specialized training or uncommon occupational offerings do not appear to explain these out-of-state assignments. We were, however, unable to determine whether a training slot in the requested vocational area was available at the closest center when participants were assigned out of state. During our discussions with regional Job Corps officials, some said that they have recently begun to focus more on assigning participants to Job Corps centers that are located in the same state in which they reside. 
Region III officials incorporate in-state assignment goals into their outreach and screening contracts, and a March 1995 regional field instruction states that the region’s center assignment plan “now places greater emphasis on the assignment of youth to centers within their own state, or to centers within a closer geographical area.” Similarly, other regional officials told us that they are now placing greater emphasis on in-state assignment of youth because of increased congressional interest in having greater state involvement in the program. During program year 1994, the majority of states with Job Corps centers had sufficient capacity to handle virtually all the in-state demand (at least 90 percent of in-state participants) for Job Corps training, but this ability varied substantially among the states. We compared the demand for Job Corps services within each state with the total capacity of the centers located therein. We measured state demand in terms of the number of residents who participated in Job Corps, regardless of whether they attended a center within their state of residence or out of state. Nationwide, 52,000 of the 64,000 Job Corps participants—81 percent (86 percent in program year 1993)—either were or could have been trained in centers in their home state. As shown in figure 7, a total of 27 states had sufficient capacity in their Job Corps centers to accommodate virtually all the program participants from those states, and another 12 states could meet at least 70 percent of the demand. (App. VII lists the states in each of the percentage groupings shown in fig. 7.) We found substantial differences among states in the capacity of in-state centers to serve Job Corps participants from their state. For example, South Carolina had over 1,600 residents participating in Job Corps, but the centers in that state had the capacity to serve only about 440 participants. 
On the other hand, Kentucky had 485 residents in Job Corps, but had the capacity (6,373) to serve about 13 times that number of participants. Although 81 percent of Job Corps participants in program year 1994 either were or could have been served in their state of residence, the remaining 19 percent (over 11,000 youths) lived in states whose centers lacked the capacity to serve all state residents enrolled in Job Corps. For example, centers in California, Florida, Louisiana, and South Carolina each would have been unable to serve over 1,000 Job Corps participants in program year 1994 in their existing centers. Figure 8 shows (for those states where demand was higher than in-state capacity) the states with Job Corps centers that had a demand that exceeded capacity by 500 or more participants. In addition, five states (Connecticut, Delaware, New Hampshire, Rhode Island, and Wyoming) did not have a Job Corps center in program year 1994. These states accounted for about another 1,400 participants who could not be served in their home state. On the other hand, the capacity in eight states was more than double the number of youths from their states in Job Corps. For example, Utah’s two centers could accommodate about 2,400 youths, but only about 700 state residents were in the program. Similarly, West Virginia’s centers had a capacity for about 1,100 youths, yet only about 300 West Virginia youths enrolled in Job Corps (see fig. 9). The Job Corps program’s plan to establish nine new centers over the next 2 years will provide some additional capacity that is needed in states with existing centers, but will increase capacity in three other states to about twice the in-state demand. In addition, a center opened in Connecticut (which had been without a Job Corps center) in May 1996 that will serve about 300 annually. Overall, this expansion will enable the program to serve an additional 4,000 youths in those states that had insufficient capacity. 
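The capacity-versus-demand comparisons above reduce to simple arithmetic on each state's estimated annual center capacity and the number of state residents enrolled. A minimal sketch, using the South Carolina, Kentucky, and Utah figures quoted in this report (the dictionary layout itself is only illustrative):

```python
# Sketch of the report's capacity-versus-demand comparison.
# Capacity and enrollment figures are those quoted in this report
# (in-state annual capacity vs. residents enrolled, program year 1994).

STATE_FIGURES = {
    # state: (in-state annual capacity, residents enrolled in Job Corps)
    "South Carolina": (440, 1600),
    "Kentucky": (6373, 485),
    "Utah": (2400, 700),
}

def shortfall(capacity, demand):
    """Residents who could not have been served in state (0 if capacity suffices)."""
    return max(demand - capacity, 0)

for state, (capacity, demand) in STATE_FIGURES.items():
    print(state, shortfall(capacity, demand))
```

On these figures, South Carolina shows a shortfall of 1,160 participants, while Kentucky and Utah show none, which is consistent with the report's observation that capacity in those two states far exceeded in-state demand.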
For example, planned centers in Alabama, California, Florida, Illinois, and Tennessee will help those states address the shortage of available training opportunities for in-state residents, reducing the shortfall in those states from about 4,700 to 700. However, Job Corps is also planning to add centers in Maine, Massachusetts, and Michigan, providing these states with the capacity to serve nearly twice the number of state residents participating in Job Corps. In commenting on a draft of this report, Labor expressed some concerns with our presentation of certain information that it believed needed greater emphasis and with what it believed were factors we should have considered in carrying out our analysis. For example, Labor said that our characterization of in-state demand was misleading. Furthermore, it said that we did not recognize the limited availability of advanced training and its impact when calculating distance for participants assigned out of state. We have clarified our definition of demand as used in this report and recalculated distance, excluding advanced training participants, which had no impact on our finding. Labor also pointed out recent changes in program emphasis and provided some technical clarification. Labor’s comments, along with our responses, are printed in appendix IX. We are sending copies of this report to the Secretary of Labor; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. Copies will be made available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7014 or Sigurd Nilsen at (202) 512-7003. Major contributors to this report include Dianne Murphy Blank, Jeremiah Donoghue, Thomas Medvetz, Arthur Merriam, and Wayne Sylvia. We designed our study to gather information on how Job Corps is currently operating in terms of where participants are recruited, trained, and placed. 
To do so, we analyzed Labor’s Job Corps participant data file and interviewed Job Corps officials and recruiting contractors. To analyze where Job Corps participants are recruited from, assigned for training, and placed in jobs, we used Labor’s Student Pay, Allotment and Management Information System (SPAMIS). Among other things, the database contains information on the placement and screening contractor for each participant. We analyzed data on Job Corps participants who left the program during program year 1994 (July 1, 1994, through June 30, 1995), the most recent full year for which data were available. To help determine whether program year 1994 was a unique year with regard to participant assignment, we performed similar analyses on comparable data for program year 1993. Unless otherwise stated, however, all numbers cited in the report reflect program year 1994 data. Our basic population consisted of all participants who left the program during program year 1994 from 113 Job Corps centers. There were 66,022 participants included in this population. Two Job Corps centers have since closed, but participants from these centers were included in our analysis. This basic population was used for the analysis of capacity and average length of stay. We eliminated participant files with missing information or for participants who resided in Puerto Rico or outside the United States. We also eliminated from our analyses those participants from states without Job Corps centers. This brought our analytic population to 64,060. Certain analyses dealt with subpopulations of the basic population. For example, for the analysis of where participants obtained jobs, only those 41,975 cases where the file indicated a job placement were used. For program year 1993, the file indicated that 35,116 participants obtained jobs. 
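The population screens described above can be sketched as follows. This is an illustration of the stated exclusion rules, not GAO's actual SPAMIS processing; the sample records and field layout are hypothetical, and only the list of states without centers is taken from the report (program year 1994):

```python
# Illustrative sketch (not GAO's actual SPAMIS code) of the exclusion
# rules used to build the analytic population. Sample records are
# hypothetical; the state list is from the report (program year 1994).

STATES_WITHOUT_CENTERS = {"CT", "DE", "NH", "RI", "WY"}

RECORDS = [
    # (participant id, state of residence, file complete?)
    ("A1", "KY", True),
    ("A2", "PR", True),    # resided in Puerto Rico: excluded
    ("A3", "CT", True),    # no Job Corps center in state: excluded
    ("A4", "FL", False),   # missing information: excluded
    ("A5", "CA", True),
]

def analytic_population(records):
    """Apply the report's three screens and return the retained ids."""
    kept = []
    for pid, state, complete in records:
        if not complete:
            continue                      # missing information
        if state == "PR":
            continue                      # outside the 50 states and D.C.
        if state in STATES_WITHOUT_CENTERS:
            continue                      # no in-state Job Corps center
        kept.append(pid)
    return kept

print(analytic_population(RECORDS))  # ['A1', 'A5']
```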
To determine how far participants traveled when attending out-of-state centers, we calculated the straight-line distance from the participant’s residence to the last assigned out-of-state center. The distance was calculated using the centroid—or center—for the zip code of the participants’ residence at entry and for the Job Corps center attended. The 5-Digit Zip Code Inventory File—part of the Statistical Analysis System library—provided the centroid’s latitude and longitude. These latitude and longitude measures became the basis for the distance computations. To determine whether an in-state center was closer, we calculated the straight-line distance from the participant’s residence to the nearest Job Corps center located in the participant’s state of residence. We then compared this distance with the distance to the Job Corps center of assignment. Our distance analysis was dependent upon having consistent address and zip code information for the participants’ residences and Job Corps centers, and the related longitude and latitude for those zip codes. Longitude and latitude data for locations outside the 50 states were not available. Thus, 989 program year 1994 participants from Puerto Rico were not included in the analysis. Another 680 participants were excluded from the analysis because either their zip code was not consistent with the state of residence information or they were missing state or zip code information. Because our focus for this analysis was on participants who lived in a state with a Job Corps center, we also excluded 1,434 participants who came from states that did not have Job Corps centers; these participants had to be assigned to out-of-state centers. This brought the total of the population for this analysis to 62,391 in program year 1994. This includes all participants regardless of the type of training program in which they participated. 
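As a rough illustration of the centroid-to-centroid measure described above (a sketch, not the SAS routine GAO used), the straight-line distance can be computed with the haversine formula. The coordinates below are approximate example centroids for a central Florida residence, an eastern Kentucky center, and a northern Florida center, not actual participant data:

```python
import math

# Rough illustration of the report's distance measure: straight-line
# (great-circle) distance between two zip code centroids, computed here
# with the haversine formula. Coordinates are approximate examples.

def centroid_distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    earth_radius_miles = 3958.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))

home = (28.5, -81.4)                 # hypothetical residence centroid
assigned_center = (37.7, -82.8)      # hypothetical out-of-state center
nearest_in_state = (30.3, -81.7)     # hypothetical in-state center

to_assigned = centroid_distance_miles(*home, *assigned_center)
to_in_state = centroid_distance_miles(*home, *nearest_in_state)
# A case like this one fits the report's finding: an in-state center
# was closer than the assigned out-of-state center.
closer_in_state_exists = to_in_state < to_assigned
```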
Table I.1 presents a summary of the subgroup sizes for analyses performed on program years 1994 and 1993 data.

Table I.1: Population and Subgroup Sizes for Analyses of Program Years 1994 and 1993 Data

                                                            PY 1994    PY 1993
  Excluded participant files for missing information           (61)      (337)
  Excluded participants not residing in United States,
    District of Columbia, or Puerto Rico                      (467)      (444)
  Total terminees in our population                          65,494
  Total terminees in states without Job Corps centers       (1,434)    (1,670)
  Total terminees in states with Job Corps centers           64,060
  Excluded participant files with longitude and latitude
    data unavailable                                          (989)      (940)
  Excluded participant files with inconsistent or missing
    zip code data                                             (680)      (422)

To calculate the program year 1994 capacity of each Job Corps center, we used Labor’s listing of residential and nonresidential capacity at any one time (slots) for each Job Corps center and multiplied it by the average number of days in a year (365.25 days). We then divided that number by the average length of stay of program year 1994 terminees at that center. For example, the Carl D. Perkins Job Corps Center in Prestonsburg, Kentucky, had a stated capacity of 245 slots and a program year 1994 average length of stay of 236.56 days. We calculated the yearly capacity of the Perkins Center at 378 participants (245 times 365.25 divided by 236.56). On this basis, we performed center-by-center calculations and aggregated them to the state level to estimate a yearly capacity by state. To estimate in-state demand, we used all program participants from that state, regardless of where they were assigned, as a proxy measure. We recognize that this does not reflect total program demand, which would also include those who are eligible and interested in Job Corps but had not yet enrolled in the program. To obtain information on the process the Job Corps program uses to assign participants to centers, we interviewed Labor officials in the nine regional offices, as well as at headquarters. 
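The yearly-capacity estimate described above is a one-line formula; a minimal sketch, using the Perkins Center figures stated in this report:

```python
# The report's yearly-capacity estimate:
#   annual capacity = slots x 365.25 / average length of stay (days).
# The Perkins Center figures (245 slots, 236.56-day average stay) are
# taken from this report.

def yearly_capacity(slots, avg_stay_days):
    """Estimated number of participants a center can serve per year."""
    return slots * 365.25 / avg_stay_days

perkins = yearly_capacity(245, 236.56)
print(round(perkins))  # 378, matching the report's figure
```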
Using a semistructured interview protocol, we asked questions related to how participants are assigned to Job Corps centers, including the program’s policies and procedures for participant assignments, the responsibilities and documentation requirements for each level of oversight, and the assignment patterns for participants within the regions. Additionally, we asked questions based on the analysis of program year 1993 assignment information (because program year 1994 data were not yet available at the time) that showed the extent to which participants were assigned out of state and out of region. Each official was also asked to comment on the current assignment patterns for participants within their regions. To obtain additional information on the Job Corps participant assignment process, we interviewed a sample of contractors responsible for 15 recruiting contracts. Using the program year 1993 assignment data contained in SPAMIS, we selected the top 16 large-scale recruiting contracts—defined as those that assigned over 300 participants to Job Corps centers—with the highest proportion of participants who were sent out of state. For contrast, we also chose three other recruiting contracts from the same locations that had relatively few out-of-state assignments. Each contractor was interviewed by telephone using a semistructured interview protocol that included questions relating to the Job Corps’ participant assignment process. Specifically, we asked about the status of their recruiting contract(s) and their responsibilities and reporting requirements. We also asked the recruiting contractors to identify those factors that had the most impact on their decision on where to assign a participant. Some of the contractors were no longer under contract, and others could not be reached. 
As a result, we interviewed contractors responsible for 13 contracts that had a large proportion of participants recruited for out-of-state centers and 2 contracts that had relatively fewer participants going out of state. While our questions were based on the analysis of program year 1993 assignment information, we also asked each recruiting contractor to comment on his or her current student assignment patterns. We selected recruiting contractors to interview on the basis of their assignment of participants to centers outside participants’ states of residence. This selection process was not random and, therefore, the results reported cannot be generalized to recruiting contractors overall. Our distance analysis was based upon zip code centroid and is intended to provide a gross measure of distance. Actual travel distances may vary. The average length of stay of participants at Job Corps centers can show some variation from year to year, as would the estimated center capacity when calculated from this number. To illustrate these variations, we have presented program year 1993 data alongside data for program year 1994 (see app. II). While we did not verify the accuracy of the SPAMIS data provided by Labor, we did check the consistency of participants’ zip code and state of residence data and eliminated those files with inconsistent information. We also compared the results from our analyses of program year 1994 data with those from program year 1993 for consistency at the national, regional, and state levels. 
[The appendix tables summarize, by state and by center for program years 1994 and 1993: the percentages of participants assigned to centers in their home state and to centers in other states; the number of states in each out-of-state assignment grouping (0-24, 25-49, 50-74, and 75 percent or more of state residents); the percentage of participants assigned to centers in the same region as their residence; the average distance traveled by participants assigned to out-of-state centers and the average distance to the nearest in-state center for those participants; the number of centers in each grouping by percentage of participants from out of state; the numbers and percentages of participants obtaining jobs, and obtaining jobs in their home state; the number of participants that were or could have been trained in state; and, for each state, the number of residents sent to out-of-state centers and the number of nonresidents brought in from other states.]

The centers in Alaska and North Dakota (one in each state) were not fully operational in program year 1993.

The following are GAO’s comments on the Department of Labor’s letter dated June 3, 1996.

1. The legislative language relating to the assignment of enrollees to Job Corps centers is included in the Background section of the report.

2. 
We have modified our report to note that the Job Corps regional operations are carried out under the direction of nine regional managers. 3. We agree that participants transferring into advanced training may be required to travel additional miles to attend this training. To respond to Labor’s comments, we attempted to identify all the participants included in our analysis who transferred into advanced training courses. We were able to identify all participants who transferred from the original center to which they were assigned, regardless of the reason for transfer, but the information was not available to identify those specifically transferring to advanced training programs. Nonetheless, eliminating from our analysis the over 1,800 participants who transferred between centers did not change our findings. The average distance traveled by participants assigned to out-of-state centers was 375 miles, compared with about 390 miles when including the over 1,800; the distance to the nearest in-state center remained the same—93 miles. Thus, our finding—that participants assigned to centers outside their state of residence were sent to centers that were, on average, over 4 times as far as the closest in-state center—is unchanged. 4. We have modified our report, where appropriate, to indicate that our use of the term “demand” is limited to only those enrolling in Job Corps and that it does not include those who are eligible and interested in the program but have not yet enrolled. 5. Our report provides a separate section with a caption that highlights that program participants are employed in their state of residence. 6. We have clarified our report to recognize that the high number of nonresidents in the California center cited may have been due to the nature of the training offered, that is, the center provided advanced training to participants from across the nation. 7. 
The reasons for assigning participants to out-of-state centers cited in our report are based on comments by those involved in deciding where enrollees are actually assigned—the nine regional directors and several outreach/screening contractors. The principal reasons cited were to fully use available space at the centers and to satisfy participants' preferences either to attend a specific center or to enroll in a specific occupational training course.

8. As suggested, we have included a statement in the Results in Brief section that recognizes our inability to determine whether specific vocational training slots were available at the closest center when participants were enrolled.

9. We have included a statement on page 4 of our report to recognize Job Corps' proactive role in ensuring that the program works more closely with state and local agencies.

Job Corps: Comparison of Federal Program With State Youth Training Initiatives (GAO/HEHS-96-92, Mar. 28, 1996).
Job Corps Program (GAO/HEHS-96-61R, Nov. 9, 1995).
Job Corps: High Costs and Mixed Results Raise Questions About Program's Effectiveness (GAO/HEHS-95-180, June 30, 1995).
Pursuant to a congressional request, GAO reviewed the: (1) locations of Job Corps centers and their capacity by state; (2) extent to which Job Corps participants are trained and placed in jobs in the state in which they reside; and (3) reasons why participants are sent to centers outside their state of residence. GAO found that: (1) Job Corps program capacity differs among states because the number of centers in each state differs and the size of individual centers within each state differs; (2) in 1994, 41 percent of the 64,000 participants who lived in states with Job Corps centers were assigned to centers outside their home state; (3) the extent of out-of-state assignments varied among states; (4) participants assigned to centers outside their home state were sent to centers that were, on average, over 4 times as distant as the closest in-state center; (5) in many states, Job Corps residents were sent to out-of-state centers, while nonresidents were enrolled at in-state centers; (6) the number of nonresidents varied among individual Job Corps centers during 1994; (7) regardless of where participants were assigned, those who found jobs usually did so in their home state; (8) participants were assigned to centers outside their home state to fully utilize centers or to satisfy particular vocational preferences; (9) the recent trend has been to assign program residents to in-state centers; (10) in 1994, most in-state Job Corps centers had sufficient capacity to accommodate almost all in-state Job Corps participants; and (11) the nine new centers will provide some needed additional capacity in some states and increase capacity in three states to about twice the in-state demand.
To describe the nature and purpose of the federal assistance provided to the auto industry, we reviewed Department of the Treasury documents related to AIFP—including white papers on the Supplier Support Program and the Warranty Commitment Program, terms and conditions of the loans provided to Chrysler and GM, and disbursement reports on the amount of funding allocated and disbursed under the AIFP. We also interviewed Treasury officials to obtain further information and clarification on these programs. To identify how the federal assistance to the auto industry addresses our three principles for government assistance, we obtained and reviewed program information and loan documentation from Treasury to identify the goals and objectives of the assistance and the problems the assistance was intended to address. We reviewed the terms and conditions of the loan agreements to determine mechanisms in place to protect taxpayers from excessive or unnecessary risks and compared these mechanisms to the principles we have previously identified for providing financial assistance to large firms. We also obtained and reviewed financial information of the automakers to ascertain the automakers’ financial position. We reviewed the reports that GM and Chrysler periodically submitted to Treasury, as required by the loan terms, and interviewed Treasury officials about their reviews of these reports. We conducted interviews with Treasury about the loan program and agreements to identify the procedures established to oversee, monitor, and enforce the terms and conditions of the loan agreements. 
We also conducted interviews with officials from the Departments of Energy and Transportation to obtain information on their coordination with Treasury in providing and overseeing assistance to automakers; representatives from Chrysler, GM, Chrysler Financial Services Americas LLC (Chrysler Financial) and GMAC LLC (GMAC) to obtain information on how they determined the level of funding needed and their plans for using the funding; and representatives from Ford Motor Company and Ford Motor Credit Company to determine why they have not sought federal assistance. To identify important factors for Chrysler and GM to address to achieve long-term viability and the challenges they face to become viable, we contracted with the National Academy of Sciences (NAS) to identify a diverse group of individuals with expertise about the past and current financial condition and operations of the domestic automakers, the restructuring of distressed companies, labor relations issues, financial management and analysis of distressed or restructuring companies, factors influencing competitiveness in the auto industry, and engine and vehicle technologies that may affect the auto manufacturing industry today as well as in the near future. We selected a panel of 17 individuals from among those NAS identified based on achieving a variety of expertise and avoiding any potential conflicts of interest. We conducted individual semi-structured interviews with the panelists to identify factors influencing the current condition of the auto industry; factors affecting future viability; obstacles to achieving long-term viability; and elements that, according to members of our panel, if contained in the plans, would positively or negatively influence the potential for successful restructuring and future viability. (Appendix I lists the panel of individuals whom we interviewed.) 
We used a content analysis to systematically analyze transcripts of these interviews to identify principal themes that emerged from the interviews. We also reviewed comments on the content of the restructuring plans that panelists provided to us once the plans had been submitted. We compared the content of the automakers’ restructuring plans to the criteria identified by our panel and the requirements in the loan agreements. To further identify challenges to achieving long-term viability, we reviewed Treasury’s assessment of the restructuring plans Chrysler and GM submitted in February. The views expressed by the members of our panel should be interpreted in the context of the following qualifications. Although we were able to secure the participation of a balanced, highly qualified group of individuals, other individuals with expertise in relevant fields could not be included because of the need to limit the number of interviews conducted. Although many points of view were represented, the panel was not representative of all potential views. Nevertheless, the members of our panel provided rich information on the current state and future of the auto industry and insightful comments. To provide additional information and context on all issues examined in this report, we conducted interviews with other stakeholders, including a representative of the International Union, United Automobile, Aerospace and Agricultural Implement Workers of America (UAW), representatives of the Association of International Automobile Manufacturers, and other knowledgeable individuals including financial analysts specializing in the auto sector, a lawyer knowledgeable about state franchise laws, and an economist specializing in labor issues. 
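To make the content-analysis step concrete, the theme tallying it describes can be sketched in a few lines of code. Everything below is a hypothetical illustration: the interview excerpts, the theme-to-keyword coding scheme, and the resulting counts are invented for the example and are not GAO's actual coding.

```python
from collections import Counter

# Hypothetical interview excerpts; the real analysis coded full transcripts.
transcripts = [
    "legacy labor costs and retiree health obligations remain the key obstacle",
    "brand value has eroded; labor costs are above transplant levels",
    "dealer network is too large relative to sales volume",
]

# Hypothetical theme -> keyword coding scheme.
themes = {
    "labor costs": ["labor costs", "wages"],
    "retiree benefits": ["retiree", "health"],
    "brand/dealers": ["brand", "dealer"],
}

counts = Counter()
for text in transcripts:
    for theme, keywords in themes.items():
        if any(k in text for k in keywords):
            counts[theme] += 1  # count each theme at most once per transcript

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(transcripts)} transcripts")
```

In the actual analysis, trained analysts coded full transcripts rather than matching keywords; the sketch only conveys the underlying idea of counting how many interviews touch each principal theme.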
To ensure the accuracy and completeness of the information contained in the report, we asked representatives of Chrysler, Ford, GM, the UAW, and the Pension Benefit Guaranty Corporation (PBGC), and two members of our panel to review portions of a draft of this report. We also provided Chrysler and GM with the opportunity to review the complete draft and discuss their comments with us. They offered some technical corrections and clarifications, which we incorporated as appropriate.

We conducted this performance audit from January 2009 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives.

GM, a publicly traded company, was incorporated in 1916 and employs about 240,000 people worldwide. It has manufacturing facilities in 34 countries and sells more than a dozen brands of vehicles in about 140 countries. Chrysler is a privately held company that was established 9 years later, in 1925, and employs about 54,000 people worldwide, including at manufacturing facilities in 4 countries and vehicles assembled under contract in 4 others. Chrysler and GM reported losses in 2008 totaling $8 billion and $31 billion, respectively, and there are significant concerns about the future of both companies. For instance, in GM's 2008 audit report, its independent registered public accountant raised "substantial doubt" about GM's ability to continue as a going concern due to "recurring losses from operations, stockholders' deficit, and inability to generate sufficient cash flow to meet its obligations." In addition, Chrysler stated in its restructuring plan that additional federal funds will be needed this spring to prevent the company from having to file for bankruptcy.
The automakers themselves are not alone in suffering the effects of declining automotive sales and revenues. The economic reach of the auto industry in the United States is broad, with many groups affected by its downturn and the financial condition of the automakers. Some key groups include the following.

Autoworkers: At the end of 2007, Chrysler, Ford, and GM employed about 240,000 hourly and salaried workers in the United States. Thousands of workers have been laid off, retired, or taken buyouts in the past months as the automakers seek to cut their costs and excess production capacity. Most hourly workers are represented by the UAW, which is in discussions with Chrysler and GM to modify existing labor agreements to achieve cost reductions.

Suppliers: More than 500,000 workers are employed by companies in the United States that manufacture parts and components used by automakers—both domestic automakers and transplants. According to the Motor and Equipment Manufacturers Association, many suppliers are in severe financial distress, with a number having filed for bankruptcy in 2008. Some members of our panel said that because many of these suppliers have relatively high costs and depend on the business of the Detroit 3, some of them may not have enough revenue to survive if one of the automakers were to cease production. This, in turn, could affect the automakers' ability to obtain parts needed to manufacture vehicles. This dynamic has the potential to affect all automakers with production facilities in the United States, regardless of home country.

States and localities in auto manufacturing regions: The automotive manufacturing industry, including the Detroit 3, transplant automakers, and suppliers, is concentrated in certain states in the Midwest and South. For instance, in Michigan, 28 percent of manufacturing jobs are in the automotive sector, as of March 2008.
Other states with a high proportion of jobs in this sector include Kentucky (19 percent), Indiana (14 percent), Ohio (13 percent), Alabama (10 percent), and Tennessee (9 percent). A December 2008 Brookings Institution report identified 50 metropolitan areas, clustered primarily in the Midwest and South, that rely heavily on Detroit 3-related jobs. Although any loss of output due to the difficulties of the auto industry could be felt nationwide, the geographic concentration of the industry means certain regions will be harder hit than others as residents in these regions lose their jobs and the tax base shrinks.

Automaker retirees: About 600,000 individuals currently receive pension payments from Chrysler and GM. Due to the retirement benefits—including pensions and healthcare—provided to autoworkers, established and enhanced through several decades of collective bargaining, the Detroit 3 are facing a significant financial commitment. In an effort to reduce costs and become more competitive with the transplants, the Detroit 3 in 2007 reached an agreement with UAW to transfer responsibility for administering the health plans to the union. Under this agreement, voluntary employee beneficiary associations (VEBAs) were created to manage retiree health plans starting January 1, 2010, and the automakers agreed to make several cash contributions of specific amounts (totaling about $10.3 billion for Chrysler and up to $26 billion for GM) on specific dates to fund the VEBAs. Chrysler and GM are currently negotiating with the union to provide a portion of their monetary contribution as equity in the companies rather than cash.

Dealerships: The Detroit 3 have about 14,000 U.S. dealerships, most of which are independently owned and operated. Many are struggling financially due to low sales and lack of credit to purchase inventory from the automakers.
In addition, in comparison to transplants, the Detroit 3 automakers generally have more dealers and sell fewer vehicles per dealer. According to Automotive News, more than 900 dealers have closed during the last year, due in part to the current economic conditions. Employment at dealers—with more than 1 million jobs—has also fallen.

Bondholders and other creditors: Individual and institutional investors hold about $27.2 billion of unsecured GM bonds, and GM is currently engaged in negotiations with its bondholders to reduce this debt by at least two-thirds through an exchange of the bonds into company equity, or other appropriate means. Chrysler, which does not have significant unsecured public debt, has proposed debt restructuring to three creditor groups, which would convert $5 billion of debt to equity.

Shareholders: GM, as a publicly traded company, has experienced a significant decline in the price per share of its common stock. In October 2007, GM's equity traded at levels over $40 per share; in March 2009 the equity traded for a low of $1.45 per share. Chrysler, which is privately owned, currently has two shareholders. Chrysler reported in its restructuring plan that these shareholders have expressed willingness to relinquish their current equity and to convert their debt to equity. To the extent that restructuring efforts result in additional equity, the interest of GM's and Chrysler's current shareholders will be diluted, which would affect the voting of shares and any future dividends.

The sharp downturn in the U.S. auto industry has been influenced by a convergence of factors, including both those within and outside the control of the automakers. According to reports on the auto industry and individuals with expertise in the industry, the following factors contributed to this downturn. Economic factors contributing to the downturn include the weak economy and competition from transplants, which have led to decreased sales and market share. The U.S.
economy has been in recession since December 2007, with increasing unemployment and declining personal wealth. During this time period, light vehicle sales in the United States—including domestic and foreign brands—have dropped by about half, with the decrease disproportionately affecting the Detroit 3. For example, Detroit 3 sales in the United States dropped by 49 percent from February 2008 through February 2009, whereas U.S. sales for Honda, Nissan, and Toyota dropped 39 percent during this period. Additionally, the Detroit 3 have been losing U.S. market share to foreign automakers for several years. For instance, GM's U.S. market share for total light vehicle retail sales fell from 27.2 percent in 2004 to 22.1 percent in 2008, while during the same period, the market share of Japanese auto manufacturers grew from 29.8 percent to 38.9 percent. In addition, the recession has made credit less available, which may have limited the ability of auto manufacturers and suppliers to finance their businesses, consumers to purchase cars, and dealers to obtain loans to sustain their inventories. Figure 1 illustrates the financial relationships among suppliers, automakers, dealers, consumers, and financing companies.

Management decisions that, according to members of our panel, have contributed to the automakers' financial condition include labor agreements that resulted in wages and retiree benefit costs higher than those of transplants and a heavy reliance on sales of light trucks and sport utility vehicles (SUV), which are more profitable than cars. Additionally, offering consumer incentives and discounts over the past few years stimulated demand but contributed to an erosion of the value of the brands and to average purchase prices that are lower than those of comparable foreign cars. As a result of the lower purchase prices, Chrysler and GM have to sell more cars in order to cover costs.
In December 2008, the chief executive officers (CEOs) of Chrysler, Ford, and GM testified before Congress to request financial assistance from the federal government. In their testimonies, the CEOs from Chrysler and GM stated that without federal assistance, their companies would likely run out of the cash needed to continue operating. The Chrysler and GM CEOs further testified that they believed it would be difficult or impossible to return to financial solvency while operating under bankruptcy because consumers would be reluctant to make a long-term purchase such as an automobile from a company whose future was in question.

We have previously identified three fundamental principles that can serve as a framework for considering federal government financial assistance to large firms. According to these principles, the federal government should (1) identify and define the problem, (2) determine the national interests and set clear goals and objectives that address the problem, and (3) protect the government's interests. Table 1 provides a description of these principles as they apply to the assistance provided to the auto industry.

In an attempt to help stabilize the U.S. automotive industry and avoid disruptions that would pose systemic risk to the nation's economy, in December 2008 Treasury established AIFP and agreed to provide Chrysler and GM with loans of $4 billion and $13.4 billion, respectively. These loans were intended to allow the automakers to continue operating through the first quarter of 2009 while working out details of their plans to achieve and sustain long-term viability, recognizing that after that point, additional loans or other steps would be needed. According to Chrysler and GM officials, the companies have been using the loans to cover routine operating costs.
As a condition of the December loan agreements, Chrysler and GM were required to submit restructuring plans to Treasury in February that describe actions the automakers would take to achieve and sustain long-term viability. These plans were required to show how the automakers would repay the loans, comply with federal fuel economy requirements, develop a product mix and cost structure that are competitive in the U.S. marketplace, and become financially viable. Chrysler and GM submitted these plans on February 17, 2009, and requested up to an additional $5 billion and $16.6 billion in federal financial assistance, respectively, because of the continued sluggish economy and lower than expected revenues. To oversee the federal financial assistance—including evaluating the restructuring plans—and to make decisions about future assistance to the automakers, the loan agreements provided for a presidential designee. Rather than appoint a presidential designee, President Obama on February 20, 2009, announced that he was establishing the Presidential Task Force on the Auto Industry to advise him and the Secretary of the Treasury on issues impacting the financial health of the industry. Under the terms of the loan agreements, since no presidential designee was appointed, the Secretary of the Treasury will make decisions on all matters involving financial assistance to the automakers, including future decisions about providing additional assistance to Chrysler or GM. On March 30, 2009, the President announced that the restructuring plans submitted by Chrysler and GM did not establish a credible path to viability and do not justify substantial new investment of taxpayer dollars. The President outlined a series of actions that each company must undertake to receive additional federal assistance. The President’s announcement further said that Treasury officials will work closely with Chrysler and GM as the companies take steps to achieve the following. 
Chrysler: According to the Task Force, Chrysler is not viable as a stand-alone company and must find a partner to achieve long-term viability. Chrysler and the European automaker Fiat are in discussions about such a partnership, but additional work must be completed to result in a binding agreement and gain the necessary support of stakeholders. Treasury agreed to provide Chrysler with up to $500 million in loans under TARP to fund its operations for 30 days while the company takes additional steps toward restructuring. If Chrysler is successful in completing the additional steps, Treasury said it will consider investing up to an additional $6 billion in Chrysler. If not, Treasury will not provide further federal assistance, which, according to Treasury officials, would likely result in a liquidation bankruptcy.

GM: The Task Force concluded that GM can be a viable company if it develops a more aggressive restructuring plan and implementation strategy. Treasury agreed to provide GM up to $5 billion in loans under TARP to fund its operations for 60 days while it undertakes the additional work. Treasury also announced this restructuring effort would entail leadership changes at GM and increased involvement by Treasury and its outside advisers. If GM submits a satisfactory restructuring plan and implementation strategy by the end of the 60 days, Treasury will invest an unspecified amount of additional federal funds to help with GM's restructuring efforts. If, however, GM fails to meet these conditions, according to Treasury, it will not invest additional federal funds, creating the possibility that GM will file for a reorganization bankruptcy. GM's CEO stated that the Treasury's determination makes a bankruptcy filing for GM more "probable" than prior to the announcement.

Several new initiatives to help stabilize the auto industry and bring relief to those affected by the industry's difficulties were announced in March 2009.
The first two initiatives will be administered through AIFP and will be funded under TARP. The third initiative will seek to leverage federal funding available through other programs.

Supplier Support Program: Under this program, Chrysler and GM will receive funding for the purpose of ensuring payment to suppliers. The program is designed to ensure that automakers receive the parts and components they need to manufacture vehicles and that suppliers have access to credit from lenders. The automakers will designate certain suppliers who are most critical to their operations to receive guaranteed payment for delivered supplies. After agreeing to participate in the program, the supplier sells eligible receivables to a special purpose entity established by the automaker to fund the program. Prior to the sale of the receivable, the automaker owes the supplier a payment for the receivable at a due date. If the supplier sells a receivable to the program, it receives payment from the special purpose entity, which becomes the owner of the receivable. If the supplier chooses to receive cash up front, a service fee of 3 percent is deducted from the payment; if the supplier chooses to receive payment on the receivable's due date—typically 45 to 60 days after delivery—the service fee is 2 percent. On the due date, the automaker is responsible for paying the program servicer the amount due for the delivery. Treasury has made up to $5 billion available through this program.

Warranty Commitment Program: This program is intended to mitigate potential consumer reluctance to buy a vehicle from a financially distressed company by providing funding to guarantee the warranties on new vehicles purchased from participating auto manufacturers during the restructuring period. Under this program, participating automakers (currently Chrysler and GM) and Treasury will contribute cash to a separate special purpose company.
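The two payout options under the Supplier Support Program reduce to simple fee arithmetic. The sketch below assumes a hypothetical $100,000 receivable; only the 3 percent and 2 percent service fees come from the program description.

```python
def supplier_payout(receivable, immediate):
    """Amount a supplier receives after selling a receivable to the program.

    A 3 percent service fee applies for cash up front; 2 percent if the
    supplier waits for the receivable's due date (typically 45-60 days
    after delivery). Amounts are in whole dollars (integer math).
    """
    fee_pct = 3 if immediate else 2
    fee = receivable * fee_pct // 100
    return receivable - fee

invoice = 100_000  # hypothetical parts delivery
print(supplier_payout(invoice, immediate=True))   # 97000 paid up front
print(supplier_payout(invoice, immediate=False))  # 98000 paid at the due date
```

The trade-off for the supplier is liquidity: immediate cash costs an extra 1 percent of the receivable's face value relative to waiting for the due date.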
The total amount of cash to be contributed will equal 125 percent of the expected cost of paying for warranty service on each covered vehicle, with the automakers contributing 15 percent of the projected costs and Treasury providing a loan to contribute 110 percent of the projected cost. Should a participating automaker go out of business, a program administrator will be appointed to identify a qualified service provider to supply warranty services for vehicles sold during the restructuring period in exchange for the assets of the special purpose company. Treasury officials estimate the cost of this program to be about $1.1 billion. According to several members of our panel, addressing consumers' concerns about warranties is important because, unlike buying a plane ticket from a bankrupt airline, purchasing a vehicle is a significant and long-term investment. Thus, consumers may avoid purchasing vehicles from an automaker facing the possibility of bankruptcy because they are concerned their warranties may not be honored, further depressing vehicle sales.

Initiative to Support and Revitalize Auto Industry Workers and Communities: This initiative is intended to coordinate government efforts in providing assistance to communities and workers affected by the loss of auto manufacturing jobs. The director responsible for the initiative is tasked with working with all parties to ensure that communities and workers take advantage of all available government resources and to work with government and elected officials in helping retool and revitalize the economies of affected communities.
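The Warranty Commitment Program's funding split (125 percent of projected warranty costs, contributed 15 percent by the automaker and 110 percent by a Treasury loan) can be checked with a short sketch. The $800 million projected cost is a hypothetical figure; only the percentages come from the program description.

```python
def warranty_fund(projected_cost):
    """Split warranty-fund contributions per the program's percentages.

    Returns (automaker share, Treasury loan, total), where the total equals
    125 percent of the projected warranty cost. Integer math; units are
    whatever the caller uses (here, millions of dollars).
    """
    automaker = projected_cost * 15 // 100     # automaker cash contribution
    treasury_loan = projected_cost * 110 // 100  # Treasury loan
    return automaker, treasury_loan, automaker + treasury_loan

# Hypothetical: $800 million in projected warranty costs for vehicles
# sold during the restructuring period.
a, t, total = warranty_fund(800)
print(a, t, total)  # 120 880 1000
```

If Treasury's roughly $1.1 billion cost estimate corresponds to the 110 percent loan share, it would imply about $1 billion in projected warranty costs, though the report does not state that breakdown explicitly.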
In carrying out his duties, the director is charged with exploring all possible strategies, including seeking to maximize the use of funds from the American Recovery and Reinvestment Act of 2009 (Recovery Act), deploying rapid response units to communities facing plant closings, attracting new industries to the region, and working with stakeholders on legislative efforts to direct emergency support to the affected communities.

The programs that Treasury has announced for the auto industry—including the automakers, auto financing companies, and other stakeholders—as of April 2009, are summarized in table 2.

Treasury identified as a problem of national interest the financial condition of the U.S. automakers and its potential to affect financial market stability and the economy at large. In determining what actions to take to address this problem, Treasury concluded that Chrysler and GM's lack of liquidity needed immediate attention and, in order to prevent a significant disruption of the automotive industry, provided short-term bridge loans to the automakers. To address the industry's structural challenges, which will take more time to resolve, Treasury required Chrysler and GM to prepare restructuring plans that describe the changes the automakers intend to make in order to achieve long-term financial viability.

Treasury established goals and objectives for the federal financial assistance in the loan agreements and other program documentation. For example, the loan agreements state that funding should be used to enable the automakers to develop a viable and competitive business and develop the capacity to produce energy-efficient advanced technology vehicles, among other things. Although Treasury identified goals for the assistance, it will need to determine how to assess goals that rely on concepts that are not clearly defined and to evaluate the relevant trade-offs associated with the goals that appear to conflict.
For example, the goals stated in the loan agreements include concepts that were not defined, such as rationalized manufacturing capacity and competitive product mix. If additional assistance is provided to the automakers, it will be important for Treasury to clearly articulate what it intends to achieve with this assistance. We have previously reported that it is important for policymakers to identify objectives and goals for federal assistance that are clear, concise, and consistent. Such objectives and goals can help program administrators and Congress determine which financial tools are needed and most appropriate for the industry and for company-specific circumstances; provide criteria for program decisions; and serve as a basis for monitoring progress. In addition to lacking clear definitions, some of Treasury’s goals may work at cross purposes, at least in the short-term, and thus will require an assessment of the relevant trade-offs among the goals. For example, according to members of our panel, producing advanced technology vehicles has the potential to conflict with the goal of developing a viable business in the near term because the costs of designing, developing, and producing these types of vehicles are greater than the revenue generated in the initial years of sales. We have previously reported that it is important that policymakers choose clearly among potentially conflicting goals of providing federal financial assistance. Without knowing the primary goal, it is difficult to decide what steps are appropriate and to judge whether a program has succeeded. In developing the terms and conditions of the loans to Chrysler and GM, Treasury included provisions to manage risk and protect the government’s interest. Table 3 describes these provisions. Treasury also established an internal working group—referred to as the auto team—to oversee the AIFP and provide analysis in support of the Task Force and the Secretary. 
While the loan agreements include a number of terms and conditions to help protect the government's interests, some potential risks, as described below, remain.

Concessions from stakeholders. The loan agreements called for stakeholder concessions, including agreements from creditors to reduce overall debt, from labor for more competitive wage structures, and from retirees for modifications to VEBA contributions, as well as limits on executive compensation.

Agreements with debtholders: According to Chrysler officials, the company does not have substantial public debt, but it said in its restructuring plan that it would work with three groups of creditors, including Treasury, senior lien bank lenders, and the UAW VEBA, to reduce debt by $5 billion. GM stated in its restructuring plan that it was negotiating a potential debt-for-equity exchange with an unofficial committee of GM bondholders. As of April 22, although the automakers have begun negotiations with their bondholders (in the case of GM) and creditors (in the case of Chrysler), agreement has not been reached.

Labor and retiree concessions: Chrysler's and GM's negotiations with the UAW continue, and tentative agreements have been reached on modifications to labor costs and work rules. For instance, General Motors and the UAW reached a tentative agreement modifying wages, benefits, and work rules to become more cost competitive with transplants. The net effect of these changes is a reduction in the company's annual hourly-related cost by approximately $1.0 billion to $1.1 billion, and potentially more, according to GM. As of April 22, agreements on restructuring Chrysler and GM's monetary contributions to fund retirees' health care plans have not been reached.
Executive compensation: According to Treasury officials, they are waiting for the Office of Management and Budget to approve additional regulations that Treasury has drafted on executive compensation, as required by the Recovery Act, before establishing a process to monitor compliance with the executive compensation requirements. Establishing procedures to oversee compliance with such requirements is important to help ensure that the automakers adhere to conditions set forth in the loan agreements.

Collateral. Treasury’s goal in its negotiations with Chrysler and GM prior to signing the loan agreements was to obtain senior liens whenever possible and, for assets already encumbered, to obtain junior liens. For Chrysler, because most assets were already encumbered with senior liens, Treasury was able to obtain a senior lien only on a portion of the company’s parts inventory, known as Mopar. For GM, Treasury obtained a senior lien on cash, inventory, real property, equity in domestic and foreign subsidiaries, and intellectual property. Treasury also received junior liens on additional assets from both companies. According to Treasury officials, Treasury cannot put an estimated dollar value on either company’s pledged collateral because the value of certain items, such as cash and inventory, is constantly changing. Treasury officials said that the limited amount of assets on which the government has senior liens could become an issue if the companies enter bankruptcy or otherwise liquidate their assets, although the situation differs somewhat for the two companies. According to Treasury, in the case of Chrysler, the sale of the assets would result in cash equal to only a small percentage of the value of the loans. Moreover, because Treasury placed liens on all unencumbered assets to secure the December loans, it will be difficult or impossible for the government to obtain additional collateral for any new loans that may be provided.
In its restructuring plan, GM proposed that additional federal assistance could be in the form of a preferred equity investment in the company, a revolving facility, and a loan secured by the collateral already used to support the current $13.4 billion loan. Chrysler did not propose collateral options for any additional federal assistance in its restructuring plan.

In considering whether the federal government should provide additional assistance to Chrysler and GM, it is important to assess the government’s overall financial exposure should one or both of the automakers fail to achieve long-term viability. A potential area of significant financial exposure is the government’s liability for terminated pension plans. Specifically, the Pension Benefit Guaranty Corporation (PBGC)—a self-funded government corporation—insures private-sector defined benefit plans. When PBGC takes over a terminated pension plan, it assumes responsibility for future benefit payments to the plan’s participants, up to the limits set in law. An underfunded pension plan that is insured by PBGC may be terminated only if certain statutory criteria are met. In general, an employer is permitted to terminate an underfunded plan only if it can demonstrate that it is in serious financial distress and cannot continue in business or reorganize (if in bankruptcy) unless the pension plan is terminated. The pension plans of Chrysler and GM pose considerable financial uncertainty to PBGC. In the event that Chrysler or GM cannot continue to maintain their pension plans—such as in the case of liquidation or an asset sale—PBGC may be required to take responsibility for paying the benefits for the plans, which are currently underfunded by a total of about $29 billion.
Although it is impossible to know what the exact claims to PBGC would be if it took over Chrysler’s and GM’s pension plans, doing so would likely strain PBGC’s resources, because the automakers’ plans represent a significant portion of the benefits it insures. Further, from an administrative standpoint, PBGC would be presented with an unprecedented number of assets to manage as well as benefit liabilities to administer. To the extent these additional claims markedly increase PBGC’s accumulated deficit and decrease its long-run liquidity, there could be pressure for the federal government to provide PBGC financial assistance to avoid reductions in guaranteed payments to retirees or unsustainable increases in the premium burden on sponsors of ongoing plans.

In general, we found that Chrysler’s and GM’s February restructuring plans contain some of the key factors our panel of individuals with auto industry expertise identified as important for achieving viability, such as reducing the number of models and brands and rationalizing dealerships. However, the plans do not fully address all of the considerations that members of the panel identified, which are discussed below. Treasury identified similar concerns and concluded that Chrysler and GM need to establish a new strategy for long-term viability in order to justify substantial additional investment of federal funds. Achieving viability may be difficult because of a number of challenges facing the automakers, including some outside of their control.

Reducing the number of brands and models

About half of the members of our panel said that reducing the number of brands and models would be a key factor in achieving financial viability.
Some of the cited advantages of eliminating brands and models include reducing intracompany competition for sales of similar models, eliminating associated costs such as factory tooling and product development, and focusing remaining resources on fewer models for greater improvements in quality, brand image, and performance. One panelist further noted that eliminating brands and models also eliminates dealers, another cost savings, although, as discussed below, there are costs associated with closing dealers that can be difficult to estimate.

According to its February plan, Chrysler has reduced its number of vehicle models by seven. However, some members of our panel criticized Chrysler’s product mix; for example, one panelist noted that most of Chrysler’s product line contains older models and that to be competitive the company needs to introduce more new products in 2009. Another noted that Chrysler has plans for only one midsize model and no luxury models to compete with models from other companies.

GM’s February plan proposes to reduce its brands to the four core brands that account for more than 90 percent of the company’s U.S. aggregate contribution margin (revenue less variable cost) by selling or phasing out three brands. One panelist noted that GM’s focus on the remaining four brands is a good long-term strategy, although another noted that this may cause difficulties in short-term sales because consumers may be unlikely to buy cars from a brand that is being discontinued. GM’s plan also includes a total reduction in the number of models by 25 percent, including the reduction of models from brands that GM is planning to sell or phase out. Within the planned overall reduction in the number of models, GM is planning to introduce five new hybrid and plug-in models by 2012, bringing the total of such models to 14.
These new models would include at least one extended range electric vehicle; however, members of our panel cautioned that this electric model may not sufficiently improve GM’s viability because the car is expected to be priced too high to result in substantial sales. Treasury identified similar challenges related to both Chrysler’s and GM’s product mixes. According to Treasury, given that Chrysler and GM rely on profits from trucks and SUVs, which typically have higher profit margins than smaller vehicles, both companies face challenges due to the vulnerability of demand for these vehicles based on fuel prices. Treasury also concluded that GM is currently burdened with underperforming brands and models and that GM’s plan does not act aggressively enough to curb these problems. Treasury noted that although the decision to sell or phase out three brands is an important step, GM is late in taking this step. Additionally, Treasury determined that GM’s current plan retains too many unprofitable models that have negative effects on GM’s operations.

Half of the panelists considered decreasing the size of the domestic automakers’ dealership networks to be an important factor for future viability, with several noting that the networks are too large to be supported by the sales levels of recent years. Today, Detroit 3 dealerships—many of which are independently owned and operated—are more numerous and, in general, sell half or fewer vehicles per dealership than transplant dealerships. As one panelist noted, higher sales per store allow for a greater return on the dealer’s fixed costs of running the business, allowing for more investment in facilities and advertising—which ultimately benefits the automaker by improving the price for which its cars can be sold. Chrysler’s plan for reducing dealerships includes merging its three brands—Chrysler, Jeep, and Dodge—into combined dealerships rather than having separate dealerships for each brand.
Although the plan indicates that Chrysler has reduced its number of dealerships by about 700 since 2004, the plan does not indicate how many additional dealerships can be eliminated through combined dealerships. A Chrysler official also noted that because of unfavorable market conditions, many dealers are choosing to close or consolidate with other dealers. GM has already reduced the size of its dealership network and plans to further reduce it from its 2008 level of 6,246 to 4,100 in 2014. GM’s plan also indicates specifically which brands and locations (metropolitan or rural markets, for instance) will be targeted for reductions. Several members of our panel told us that eliminating dealerships against their will would be challenging due to state franchise laws that protect dealers, as discussed later in this report, and therefore the companies would need to negotiate with the dealers. Chrysler’s plan does not discuss such negotiations and associated costs, such as buying back dealer inventory; however, GM’s plan acknowledges that each negotiation is unique depending on factors such as the individual state law, the dealer, possible union contracts, and associated finance and warranty business, and that the costs of terminating a dealership can vary greatly. Treasury concluded that although GM has been successfully pruning dealerships for several years, more aggressive restructuring is needed. According to Treasury, GM’s current pace for reducing the number of dealerships will burden the company with too many unprofitable or underperforming dealerships for a long period of time, which hurts brand equity and the prospects of stronger dealerships.

Reducing production costs and capacity

According to our panel, the companies have excess production capacity and their cost structures do not facilitate the companies’ profitable operation in a market in which sales volumes are significantly lower than they have been in past years.
Panelists told us that the companies’ cost structures were established during a time when they dominated the U.S. market, and as foreign competition grew, their market shares decreased. Some of the panelists added that rather than adjust their cost structures, such as by reducing fixed costs, the companies pursued higher sales volumes to try to operate profitably under their existing cost structures. Given the forecast for continued decreased sales volumes, members of our panel said that they expected the restructuring plans to identify significant reductions in fixed costs. Additionally, these individuals said the automakers could benefit from incorporating efficiencies used by some of the foreign automakers into their production processes, such as manufacturing multiple types of vehicles at the same production facility or relying more on common vehicle architectures for the production of vehicles. Common vehicle architectures can allow automakers to plan, design, engineer, and source vehicles for all global markets, whereas previously these efforts may have differed based on whether a car was to be sold in the United States or Europe, for example.

According to Chrysler’s February plan, the company began restructuring in 2007 to reduce fixed costs, and, by the end of 2009, these costs will have been reduced by $3.8 billion (27 percent), which includes a reduction of its salaried workforce by 35,000. In addition, Chrysler is requesting a 3 percent reduction in suppliers’ prices. However, some members of our panel said that reliance on supplier price cuts is a problematic assumption because the suppliers are struggling financially and cannot afford to reduce their prices. Chrysler’s plan does not address specific plans for production flexibilities.

According to GM’s February plan, the company plans to reduce its North American fixed costs by about $6 billion from 2008 to 2011 and keep those cost levels constant through 2014.
These savings are largely the result of the initiatives outlined in the plan and include the reduction of U.S. employment levels (hourly and salaried) by about 20,000 from 2008 to 2011, acceleration of labor cost parity with transplants, idling of 14 additional manufacturing facilities in the United States by 2012, and reduction of 12 models offered in the United States by 2012. However, some members of our panel cautioned that the company may not be able to “cut its way to prosperity” and that GM needs to have a plan for how remaining salaried workers will carry out the restructuring efforts. GM’s plan also indicates that the company plans to increase production flexibility by increasing the number of plants that can produce multiple vehicle models and that by 2012, more than half of its U.S. passenger car sales will be derived from common architectures.

Treasury concluded that although both Chrysler and GM have made progress related to manufacturing, Chrysler still faces challenges in this area. Treasury noted that Chrysler’s plan identified opportunities for reducing the company’s cost structure, including fixed-cost reductions; however, manufacturing is still a key challenge for Chrysler because it has not invested significantly in common architectures and manufacturing flexibility. In contrast, Treasury said that GM has made material progress in creating common architectures and has worked to create greater flexibility in its facilities. According to Treasury, GM’s actions in this area allow it to spread its product development and fixed costs over a large range of vehicles; in contrast, Treasury identified Chrysler’s scale as a challenge because the company must spread fixed costs over a smaller number of vehicles, which may limit funding for the research and development needed to maintain competitiveness.
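Treasury’s scale point can be seen with a simple illustration. All dollar and unit figures below are hypothetical (not drawn from the report or from either company’s plan); only the arithmetic reflects the point being made: the same fixed cost spread over more vehicles yields a lower per-vehicle burden.

```python
# Hypothetical illustration of spreading fixed costs over production volume.
# The dollar and unit figures are invented for illustration only.

def fixed_cost_per_vehicle(fixed_cost_dollars, units_produced):
    """Fixed cost (e.g., development and tooling) allocated to each vehicle built."""
    return fixed_cost_dollars / units_produced

# The same assumed $1 billion program cost, spread over different volumes:
common_architecture = fixed_cost_per_vehicle(1_000_000_000, 500_000)  # $2,000 per vehicle
standalone_model = fixed_cost_per_vehicle(1_000_000_000, 100_000)     # $10,000 per vehicle
print(common_architecture, standalone_model)
```

Under these assumed numbers, the shared architecture carries one-fifth the fixed-cost burden per vehicle, which is the advantage Treasury attributes to GM and the disadvantage it attributes to Chrysler’s smaller scale.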
A number of panelists attributed the domestic automakers’ current financial condition in part to the labor agreements with the existing workforce, as well as health care and pension costs associated with the companies’ retirees. Several of them noted that Chrysler’s and GM’s labor costs are higher than those of transplants primarily because of more generous health care benefits for workers. Others noted that work rules contained in the labor agreements can increase costs and limit production flexibility. According to the companies’ plans, the UAW, and our panelists, previous labor agreements reached between the UAW and the automakers are helping to restructure labor costs to be competitive with transplants, by, for instance, bringing in new hires in nonskilled trades at a substantially lower wage rate than current workers, but some members of our panel said that more needs to be done in this area.

Both Chrysler’s and GM’s February restructuring plans discuss proposed labor concessions, but no final agreements have been reached to date. According to Chrysler’s plan, it has a tentative agreement with the UAW to implement labor terms competitive with those of transplants. The tentative agreement includes adjustments to levels of compensation, work rules, and severance provisions, such as elimination of the Jobs Bank program, which provided income and benefit protection in lieu of layoffs. Similarly, GM’s plan indicates that the company has reached agreement with the UAW to implement competitive work rules and to reduce labor costs. GM’s plan also discusses some labor concessions that are in the process of being implemented, namely reducing costs through buyouts. Neither company has reached an agreement with the UAW to reduce cash contributions to the VEBAs to fund retirees’ health care plans, also part of Chrysler’s and GM’s plans to achieve viability.
According to the UAW, union members will not vote to ratify the labor modifications (e.g., compensation and work rules) until a tentative agreement has been reached on the modification to VEBA contributions. Treasury concluded that neither company has satisfied the terms of the loan agreements, in part because neither has secured approval of labor and VEBA modifications. Treasury also identified liabilities associated with pensions and health care for retirees as a challenge for GM, given that the company would need to sell 900,000 additional cars per year to cover its future cash payments for these costs. According to Treasury, these costs leave GM aiming to maximize sales volumes rather than focusing on return on investment.

Relying on realistic estimates for sales volumes, market share, and other assumptions

Members of our panel said that the success of the plans would depend on whether the underlying assumptions for sales, market share, and possible future financial assistance were realistic. They cautioned against basing estimates for viability on assumptions of an immediate increase in sales volumes or in the Detroit 3’s market share. Some of the panelists attributed the automakers’ financial struggles, in part, to the companies’ historical reliance on unrealistic expectations of sales volumes and market share that were not later met. As previously discussed, given their existing cost structure, the companies must have high sales volumes in order to achieve profitability. However, if the companies’ forecasts for sales volumes and market share are too optimistic compared to actual consumer demand, the restructuring plans may not result in financial viability without further modifications. Therefore, some panelists said that restructuring efforts need to rely on realistic or conservative assumptions about sales volumes and market share.
Regarding sales assumptions, Chrysler’s baseline plan relies on 10.1 million unit sales in the United States for cars and light trucks in 2009, and GM’s baseline plan relies on 10.5 million unit sales. Both plans include 2009 downside scenario sales estimates that are 1 million unit sales lower than their 2009 baseline scenario sales estimates (9.1 million for Chrysler and 9.5 million for GM). Some panelists told us that they thought the automakers’ baseline sales estimates were realistic. With respect to market share, they said the companies should provide analysis to support their market share assumptions, given that the companies have been losing market share for decades while continuing to project gains in market share. GM’s plan includes some key assumptions that drive its market share analysis; however, the plan does not indicate to what extent each of these assumptions affects market share estimates. Chrysler’s plan does not identify the assumptions that contribute to its market share estimates. One panelist commended GM for acknowledging the potential for dropping from its 2008 U.S. market share of 22 percent to below 20 percent, although another cautioned that GM may not be able to maintain more than 16 percent market share. A few of the panelists noted that sales projections may not be realized due to the effect of eliminating or discontinuing brands because buyers interested in those brands may turn to competitors’ products, rather than to other brands of GM and Chrysler.

The February plans also assume assistance from other entities, including loans from the Department of Energy (DOE), an alliance with another automaker (in the case of Chrysler), and loans from foreign governments (in the case of GM). In addition to the AIFP funding the automakers requested in their February restructuring plans, Chrysler’s plan assumes $6 billion in DOE loans and GM’s plan assumes $7.7 billion in DOE loans.
However, DOE has not completed its review of either company’s application, in part because DOE’s program rules require loan recipients to be financially viable. DOE officials told us that they cannot finish reviewing Chrysler’s and GM’s applications until Treasury makes a final determination on the companies’ viability, and that DOE will coordinate with Treasury in making that determination. Additionally, Chrysler’s plan indicates that to be viable on a long-term basis, the company must pursue strategic alliances, and the plan includes a scenario based on a proposed alliance between Chrysler and Fiat, a European car company. Chrysler states in its plan that this alliance would provide Fiat with an equity stake in Chrysler and would provide Chrysler access to Fiat’s smaller, fuel-efficient platforms and technologies, as well as Fiat’s international dealer network. However, the alliance does not provide any financial resources, for example, through equity contributions to Chrysler. Chrysler also states in its plan that even with a Fiat alliance, the company would struggle if sales fall below its downside estimate. In its plan, GM assumes it will receive about $6 billion in financial assistance from foreign governments to be able to maintain adequate cash balances for its global operations through the beginning of economic recovery. The company’s restructuring plan details the progress of ongoing discussions with governments in Australia, Canada, Europe, and Asia in order to achieve viable operations in those regions. GM submitted a separate restructuring plan to the Canadian government on February 20, 2009, which the Canadian government found to be insufficient.

Treasury criticized several of the automakers’ assumptions as being too optimistic or too aggressive. Treasury noted that Chrysler assumes it will maintain its market share even though it has lost market share over the last decade and there are few signs it can reverse this trend.
Similarly, Treasury determined that GM’s market share assumptions are too optimistic. GM has been losing 0.8 percent market share annually over the last 30 years, and its plan assumes a slower rate of decline—0.3 percent per year. With regard to pricing assumptions, Treasury stated that it will be challenging for Chrysler to maintain pricing as projected in its plan, given what Treasury characterized as the perception of poorer product quality. With respect to GM, Treasury noted that the company’s plan does not assume a decreased contribution margin despite a severely distressed market, and that the plan focuses on passenger cars and crossovers, which traditionally have earned lower contribution margins than trucks and SUVs. Additionally, Treasury concluded that GM’s assumption of European assistance represents a risk to the viability of its plan because funds from European governments have not been allocated.

The automakers are confronting a number of challenging conditions in their efforts to restructure in a way that will achieve and sustain long-term viability, according to members of our panel and research we reviewed. Some of the challenges are the same ones that led to the automakers’ current condition, such as the weak economy and changing consumer preferences. Although Chrysler and GM acknowledged many of these challenges in their restructuring plans, many are beyond their control.

The poor condition of the U.S. economy will likely continue to affect the financial health of Chrysler and GM. As figure 2 shows, over the past 30 years, automobile sales almost always decreased during periods of economic recession. Chrysler and GM officials, as well as some panelists, noted that the current recession has had a similar effect on consumer confidence in general and automotive purchases in particular.
Some panelists attributed this pattern to the discretionary nature of automobile purchases—that is, these purchases are easily postponed during periods of economic downturn. Reflecting the current economic conditions and projected slow recovery, both Chrysler and GM revised their sales projections downward, as noted above. However, if the economy recovers more slowly than the companies anticipate and sales revenues are lower than projected, the companies may not be able to achieve viability according to schedule and under the conditions laid out in their plans. For instance, both Chrysler and GM noted that their downside scenarios, which will occur if sales volumes are lower than expected, would result in the need for more federal funding than their baseline scenarios. However, although GM’s assumption about economic growth (measured by gross domestic product) for 2009 was characterized as more conservative than other estimates, this assumption now looks optimistic compared to Congressional Budget Office and IHS Global Insight estimates.

The continuing lack of credit availability—on both a consumer and institutional level—is a major challenge for the automakers. A substantial amount of vehicle financing is obtained through asset-backed securities (ABS) transactions, which provide liquidity to the automotive financing companies, such as GMAC and Chrysler Financial, and enable dealer and consumer financing. However, due to conditions in the capital markets, considerably less of this type of financing is occurring. In turn, this has affected the ability of dealers to offer retail financing to consumers. Because almost all consumers rely on some level of financing to purchase automobiles, this lack of credit has negatively affected sales. In addition, the lack of credit availability has affected dealers’ ability to finance their inventory (referred to as floorplan financing).
Since dealers purchase vehicles from the automakers, the lack of floorplan financing also negatively affects the automakers’ revenues. Given the role the automotive financing companies play in vehicle sales, Chrysler and GM indicated in their restructuring plans that the financial health of Chrysler Financial and GMAC is critical to their financial viability. As noted earlier, both GMAC and Chrysler Financial have received federal financial assistance through AIFP. To increase the availability of credit for consumers, Treasury and the Federal Reserve have announced the Term Asset-Backed Securities Loan Facility (TALF) program, which will provide financing to investors for purchases of ABS and could generate up to $1 trillion in lending for individuals and businesses. Eligible ABS includes newly issued AAA-rated tranches of securitizations backed by auto loans. However, officials from the automakers and auto financing companies we interviewed expressed concern about the AAA-rating requirement, noting that under such a requirement certain of the auto financing companies’ securities would not be eligible.

Officials from Ford, GM, and Chrysler, as well as members of our panel, stated that the tenuous financial condition of auto suppliers is a major concern because the solvency of the supply chain is critical to the automakers’ viability. As Ford’s CEO noted in his December 2008 congressional testimony, the domestic auto manufacturing industry is interdependent, especially in the area of suppliers, with an estimated 80 percent overlap in supplier networks. Thus, according to the automakers and some panelists, the collapse of one or more of the domestic automakers would affect the remaining automakers because, among other things, such a collapse could impact the ability of shared suppliers to continue operations. Ford also noted that a supplier financing safety net—such as guarantees on payment from the federal government—would help prevent this situation.
Moreover, large production cuts due to sluggish sales, especially in the first quarter of 2009, have affected the cash flow and liquidity of many automotive suppliers. According to the Motor & Equipment Manufacturers Association (MEMA), more than 40 major suppliers filed for Chapter 11 restructuring in 2008, with industry surveys indicating approximately one-third of all suppliers are in imminent financial distress. As previously noted, Treasury announced in March it would provide up to $5 billion in assistance to help suppliers.

Cost of developing advanced technology vehicles

Several panelists noted that not only is developing advanced technology vehicles expensive, but also the return on the investment in those vehicles can be low because the initial demand for new technologies can be slow to develop. For example, the Toyota Prius was on the market for 10 years before reaching 1 million units sold. According to our panel, given the high development costs and low initial demand, especially if gasoline prices remain relatively low, these new vehicles are not likely to generate a profit for several years. Thus, changing the companies’ product mix to include more advanced technology vehicles may not be the best way to improve the financial bottom line in the short term. Furthermore, at least one panelist questioned whether the necessary energy infrastructure, such as electrical outlets to charge batteries, will be available to support these new technologies. Without adequate infrastructure, consumers will be reluctant to purchase these new advanced technology vehicles. GM officials acknowledged these challenges but indicated that the company decided to continue investing in advanced technologies even during the current financial crisis because it needs this technology in its fleet to help meet federal fuel economy standards in the future.
In addition, GM officials said they are planning for higher oil prices than current futures market expectations, in order to make GM’s plan more robust against oil price volatility.

Reducing the number of dealerships to align with sales volumes

Many panelists said that it will be difficult for Chrysler and GM to resize their dealership networks. The large number of dealers increases intra-brand competition and thus reduces the pricing power of individual dealers. One GM official noted that the biggest competition for a GM dealer is often the other GM dealer down the street. As previously mentioned, Detroit 3 dealerships sell substantially fewer vehicles per dealership than transplant dealerships sell. Given these and other concerns, Chrysler, Ford, and GM are working to “right size” their dealer networks to better align with automakers’ current and projected sales volumes and market shares. However, panelists told us state franchise laws make eliminating dealerships difficult because these laws generally provide strong protections for auto dealer franchisees. For example, Michigan’s law on auto dealer franchises states that manufacturers must provide adequate notice, act in good faith, and have good cause in order to terminate an agreement with a dealer. Any action to consolidate or eliminate a dealer—outside of a bankruptcy court—must be negotiated with the affected dealers. According to members of our panel, under the best-case scenarios, the automakers can expect to incur significant costs and delays in rationalizing their dealership networks. Given the current depressed level of automobile sales, automakers and panelists also told us that some dealers are looking either to go out of business voluntarily or to merge their business with other dealerships.

Uncertainty over future fuel economy standards

The current uncertainty of future fuel economy standards could complicate the auto manufacturers’ ability to plan for future market conditions.
The National Highway Traffic Safety Administration (NHTSA), within the Department of Transportation, issues fuel economy standards for vehicles sold in the United States. Currently, fuel economy standards are set through model year 2011. NHTSA officials told us they plan to propose standards for model years 2012 through 2016 this summer and issue final standards by March 31, 2010. Further, according to NHTSA, it must coordinate the rule making with the Environmental Protection Agency (EPA). EPA will be responsible for setting standards regarding the level of greenhouse gases passenger vehicles can emit if it adopts its proposed finding that greenhouse gases in the atmosphere endanger the public health and welfare. In addition, NHTSA officials said they were monitoring events relating to California’s and other states’ attempts to set and enforce individual greenhouse gas emission standards for passenger vehicles. Chrysler and GM officials told us they would prefer one national standard to individual state standards. If NHTSA raised the fuel economy standards above what the automakers have planned for their near-term product line, or if states are allowed to set individual standards, it could complicate the viability plans of the auto manufacturers by forcing them to make faster, more costly technological investments in their vehicles than they otherwise had planned. NHTSA officials told us that when setting future fuel economy standards, they would take into account the ability of the auto industry to make the necessary technological investments in its products to increase fuel economy.

Restructuring the automakers’ balance sheets by reducing debt and related leverage is a critical element of any plan for long-term viability. As of December 31, 2008, GM had total liabilities of $176.4 billion compared to negative stockholders’ equity of $86.2 billion.
GM’s liabilities of $176.4 billion included current liabilities (payable in 2009) of $73.9 billion and noncurrent liabilities of $54.1 billion for pensions and postretirement benefits and $29.6 billion of long-term debt. The loan agreement calls for GM’s “best efforts” to reduce its unsecured public debt by at least two-thirds. As of December 31, 2008, GM had about $27.2 billion of unsecured public debt (consisting of amounts included in GM’s debt payable in 2009 and long-term debt). In its restructuring plan, GM reported that negotiations were under way with its bondholders to convert the unsecured debt to equity. This debt restructuring would reduce interest expense and immediately improve cash flow to GM. Chrysler, which does not have significant unsecured public debt, proposed working with creditors, including Treasury, senior lien bank lenders, and the UAW VEBA, to reduce its debt by $5 billion. According to members of our panel and financial analysts we interviewed, reaching agreements with bondholders could be difficult because the value of company stock is less than the value of the bonds. Bondholders will be trading a known rate of return that is subject to bankruptcy risk for a completely unknown rate of return that is also subject to bankruptcy risk. As a Treasury official noted, however, by not agreeing to the exchange, the bondholders are subject to the risk that the companies could file for bankruptcy, potentially rendering their bonds worthless. According to financial analysts we spoke with, many bondholders are willing to take their chances waiting for more government assistance. Recognizing these challenges, officials from both Chrysler and GM told us they will likely need the assistance of the Presidential Task Force or Treasury to reach agreement with their bondholders or creditors.
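To put the loan agreement's "two-thirds" target in perspective, the following is a minimal sketch of the arithmetic it implies. The $27.2 billion figure is from the report; the calculation itself is ours, and the actual terms of any debt-for-equity exchange would differ.

```python
# GM's unsecured public debt as of December 31, 2008 (from the report).
unsecured_debt = 27.2  # $ billions

# The loan agreement calls for reducing this debt by at least two-thirds.
reduction_target = 2 / 3

min_debt_converted = unsecured_debt * reduction_target    # minimum converted to equity
max_debt_remaining = unsecured_debt - min_debt_converted  # maximum left outstanding

# Roughly $18.1 billion converted, leaving at most about $9.1 billion.
print(round(min_debt_converted, 1), round(max_debt_remaining, 1))
```

The sketch only bounds the size of the exchange; it says nothing about the equity value bondholders would receive, which is the crux of the negotiation described above.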
Given the substantial amount of debt that both Chrysler and GM have, and the uncertainty that revenues from car sales will increase in the near term or that the automakers’ stakeholders will reach an agreement needed for successful restructuring, Treasury and the automakers have acknowledged the very real possibility that restructuring might be accomplished through a reorganization under the bankruptcy code. Under that scenario, according to Treasury, the most likely approach would be a court-supervised asset sale, in which the company’s good assets would be sold to a new entity, and substantial amounts of the company’s debt would remain in possession of the old part of the entity to be dealt with in bankruptcy court. Treasury said this approach would help accelerate the turnaround of the companies by allowing them to quickly exit bankruptcy. According to Treasury, another possibility for restructuring for GM would be a “prepackaged” bankruptcy, in which the company’s creditors approve a reorganization plan before the company files for bankruptcy; however, according to Treasury, it appears unlikely that such an agreement could be reached in the limited amount of time available. Treasury has said it would consider providing bankruptcy financing to Chrysler and GM if the companies meet the conditions Treasury set in its March 30 announcement and if Treasury and the companies determine that a reorganization bankruptcy is the best course of action. We provided a draft of this report to the Departments of the Treasury, Transportation, and Energy for review and comment. These agencies provided technical clarifications, which we incorporated as appropriate. We also made a draft of this report available to Chrysler and GM officials for their review and comment. Chrysler and GM officials provided technical corrections and clarifications, which we incorporated as appropriate. 
We are sending copies of this report to other interested congressional committees and members, the Departments of the Treasury, Transportation, and Energy, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Katherine Siggerud at (202) 512-2834 or [email protected] or Susan Fleming at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. We contracted with the National Academy of Sciences (NAS) to identify a balanced, diverse group of individuals with expertise about the past and current financial condition and operations of the domestic automakers, the restructuring of distressed companies, labor relations issues, financial management and analysis of distressed or restructuring companies, factors influencing competitiveness in the auto industry, and engine and vehicle technologies that may affect the auto manufacturing industry today as well as in the near future. We selected 17 individuals for interviews from among those NAS identified based on achieving a variety of expertise and avoiding any potential conflicts of interest (see table 4). In addition to the contact names above, the following individuals made important contributions to this report: Marcia Carlsen, Nikki Clowers, and Raymond Sendejas, Assistant Directors; Alana Finley; Chuck Ford; Cole Haase; Heather Halliwell; Jennifer Henderson; Joah Iannotta; Matthew LaTour; Susan Michal-Smith; and Susan Sawtelle.

The turmoil in financial markets and the economic downturn have brought significant financial stress to the auto manufacturing industry. The economic reach of the auto industry in the United States is broad, affecting autoworkers, auto suppliers, stock and bondholders, dealers, and certain states.
To help stabilize the U.S. auto industry and avoid disruptions that could pose systemic risk to the nation's economy, in December 2008 the Department of the Treasury established the Automotive Industry Financing Program (AIFP) under the Troubled Asset Relief Program (TARP). From December 2008 through March 2009, Treasury has allocated about $36 billion to this program, including loans to Chrysler Holding LLC (Chrysler) and General Motors (GM). GAO has previously identified three principles to guide federal assistance to large firms: define the problem, determine the national interests and set goals and objectives, and protect the government's interests. As part of GAO's statutorily mandated responsibilities to provide timely oversight of TARP activities, this report discusses the (1) nature and purpose of assistance to the auto industry, (2) how the assistance addresses the three principles, and (3) important factors for Chrysler and GM to address in achieving long-term viability and the challenges that they face to become viable. To address these objectives, GAO reviewed Chrysler's and GM's restructuring plans and financial statements, as well as Treasury documents related to AIFP. GAO also reviewed the terms and conditions of the federal loans to identify risks to the government and compared these loan provisions to GAO's principles for providing federal financial assistance to large firms. In addition, GAO interviewed representatives of Chrysler, GM, Ford Motor Company (Ford) and the International Union, United Automobile, Aerospace and Agricultural Implement Workers of America (UAW), and officials from the Departments of the Treasury, Transportation, and Energy. GAO also conducted semistructured interviews with a panel of individuals identified by the National Academy of Sciences for their expertise in the fields of auto industry trends and data, labor relations, vehicle manufacturing, and corporate restructuring. 
GAO provided a draft of this report to the Departments of the Treasury, Transportation, and Energy for their review and comment. These agencies provided technical clarifications, which GAO incorporated as appropriate. GAO also made a draft of this report available to Chrysler and GM officials for their review and comment. Chrysler and GM officials provided technical corrections and clarifications, which GAO incorporated as appropriate. GAO is not making recommendations in this report. From December 2008 through March 2009, the Treasury Department established a series of programs to help bring relief to the U.S. auto industry and prevent the economic disruptions that a sudden collapse of Chrysler and GM could create. In December 2008, Treasury provided bridge loans of $4 billion to Chrysler and $13.4 billion to GM and required both automakers to submit restructuring plans in February 2009. In March, Treasury determined that the automakers' restructuring plans were not sufficient to achieve long-term viability and required that they take more aggressive action as a condition of receiving additional federal assistance. At the same time, Treasury also established programs to ensure payments to suppliers of parts and components needed to manufacture cars and to guarantee warranties of cars Chrysler and GM sell during the restructuring period. In addition to these programs, the President announced a new White House initiative to help communities and workers affected by the downturn in the industry. In the coming weeks, Treasury will determine whether the additional steps Chrysler and GM have taken or plan to take are sufficient to warrant further assistance. If the companies are successful in implementing the additional steps toward restructuring, then Treasury may provide additional assistance. In providing assistance to the auto industry, Treasury identified goals and objectives and took steps to protect the government's interest. 
Provisions to protect the government's interest include requiring automakers to submit periodic financial reports and to gain concessions from stakeholders such as the UAW, creditors, and bondholders. To date, however, Chrysler and GM have not reached agreements with these stakeholders. In addition, Treasury included provisions to secure collateral from the automakers. However, because many of Chrysler's and GM's assets were already encumbered by other creditors, the amount of assets on which Treasury could secure senior liens was limited. An additional area of risk is the financial health of the automakers' pension plans. In the event that Chrysler or GM cannot continue to maintain its pension plans--such as in the case of liquidation--the Pension Benefit Guaranty Corporation, a government corporation, may be required to take responsibility for paying the benefits for the plans, which are not fully funded. GAO's panel of individuals with auto industry expertise identified a number of factors for achieving viability, including reducing the number of brands, reassessing the scope and size of dealership networks, reducing production capacity and costs, and obtaining labor concessions. However, Chrysler's and GM's restructuring plans submitted in February do not fully address these factors, according to GAO's panelists. In its assessment of the plans, Treasury identified concerns similar to those identified by the panelists, and concluded that Chrysler and GM need to establish a new strategy for long-term viability in order to justify a substantial additional investment of federal funds. Achieving viability is made more difficult because of many additional challenges facing the automakers, some of which are outside their control--such as the weak economy and the limited availability of credit. The condition of the U.S. 
economy will likely continue to affect the financial health of Chrysler and GM, as historically automobile sales almost always decrease during periods of economic recession. Given these challenges, Treasury, Chrysler, and GM are considering a range of options available for the automakers to achieve viability, including restructuring under the bankruptcy code.
We have identified numerous challenges related to the government’s management of its real property, including issues pertaining to using and disposing of underutilized and excess property, an overreliance on leasing, and having unreliable real property data to support decision making. The government has made progress reforming real property management after we designated it high risk in 2003. However, it has not yet fully addressed the underlying challenges that hamper reform, such as those related to environmental cleanup and historic preservation, a lack of accurate and useful data to support decision making, and competing stakeholder interests that make it difficult to dispose of real property. In the meantime, the federal government continues to retain more real property than it needs. To address the excess and underutilized property the government holds, previous and current administrations have implemented a number of cost savings initiatives associated with better managing real property. For example, in May 2011, the administration proposed legislation—the Civilian Property Realignment Act (CPRA)—which, among other things, would have established a legislative framework for consolidating and disposing of civilian real property as a means of generating savings to the federal government. Although CPRA and other real property reform legislation introduced in the previous session of Congress have not been enacted, according to the President’s budget request for fiscal year 2014, the administration will continue to pursue enactment of CPRA. Most recently, OMB issued guidance for implementing the administration’s Freeze the Footprint policy, which requires agencies to document their efforts to restrict any growth in the size of their domestic office and warehouse inventories.
The June 2010 presidential memorandum required federal agencies to achieve $3 billion in cost savings by the end of fiscal year 2012 from increased proceeds from the sale of assets; reduced operations, maintenance, and energy expenses from asset disposals; or other efforts to consolidate space or increase occupancy rates in existing facilities, such as ending leases or implementing telework arrangements. Agency actions taken under the memorandum were to align with and support previous administration initiatives to measure and reduce greenhouse gas emissions in federal facilities and consolidate data centers. The memorandum also required the Director of OMB, in consultation with the Administrator of GSA and the Federal Real Property Council (FRPC)—an interagency group responsible for coordinating real property management—to develop guidance for actions agencies should take to carry out the requirements of the memorandum. In July 2010, OMB issued guidance that identified specific steps agencies could take to meet the requirements of the June 2010 memorandum. For example, the guidance required agencies to develop a Real Property Cost Savings and Innovation Plan that was to identify the real property cost-savings initiatives under way and planned by the agency, the agency’s proposed share of the $3-billion savings target, and actions to achieve the proposed target. The guidance specified that the $3 billion in real property cost savings by the end of fiscal year 2012 would be measured through 1) capturing eliminated operating costs; 2) increasing the income generated through disposals; and 3) better utilizing existing real property by undertaking space realignment efforts, including optimizing or consolidating existing space within owned buildings. The agency cost savings were to reflect net savings, factoring in the costs incurred by the agency to achieve the intended result. 
After agencies developed their initial cost savings plans, OMB established four cost-savings categories in 2011 that agencies were to use for reporting savings: disposal, space management, sustainability, and innovation (see table 1). OMB used the administration’s Performance.gov website to track agencies’ reported savings; the website also listed individual agencies’ cost savings targets as a share of the $3-billion cost-savings goal and the cost-savings measures agencies planned to implement to achieve their targets. As stated previously, we have identified problems with the estimates from selected agencies to meet their savings targets. Overall, agencies reported $3.8 billion in cost savings from fiscal year 2010 to fiscal year 2012 across the OMB categories of disposal, space management, sustainability, and innovation. The largest cost savings were from space management activities, which accounted for more than half of the total savings reported. Civilian agencies reported $3.1 billion in cost savings over the fiscal year 2010-to-2012 time period, and DOD accounted for the remainder of the savings reported. The six selected agencies we reviewed (GSA, DHS, DOE, DOJ, State, and USDA) accounted for $2.3 billion, or 74 percent, of the total savings reported by civilian agencies. Similar to the savings reported by all agencies, the six agencies we reviewed also reported the majority of savings from space management activities. Specifically, space management activities accounted for 70 percent of the savings reported by the six agencies, followed by disposal (17 percent), innovation (9 percent), and sustainability (4 percent). Table 2 summarizes the cost savings reported by category for all agencies and the six selected agencies. The overall savings reported by the agencies we reviewed ranged from $238 million reported by DHS to $580 million reported by DOE.
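As a quick arithmetic check on the shares cited above, using the rounded dollar totals from the report (the calculation itself is ours, not the report's):

```python
# Rounded totals from the report, fiscal years 2010-2012 ($ billions).
civilian_total = 3.1    # savings reported by all civilian agencies
six_agency_total = 2.3  # savings reported by the six selected agencies

# Share of civilian-agency savings attributable to the six selected agencies.
share_pct = round(six_agency_total / civilian_total * 100)
print(share_pct)  # 74, matching the 74 percent cited in the report
```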
All six agencies reported savings from space management activities, five agencies reported disposal and sustainability savings, and two agencies reported innovation savings (see fig. 1). All of the agencies in our review determined their reported savings by identifying activities that were under way or planned at the time the June 2010 memorandum was issued. In particular, the requirements of the memorandum and subsequent guidance issued by OMB specified that agencies were to report savings from ongoing and planned activities. For example, the June 2010 memorandum specified that agency actions should align with and include activities undertaken in response to two previous initiatives meant to improve the performance of federal facilities. As such, USDA officials told us that they reported sustainability savings identified in the agency’s Strategic Sustainability Performance Plan required by a previous executive order. State officials told us they reported savings from data center consolidations carried out under a previous presidential initiative. In addition, the subsequent guidance issued by OMB in July 2010 also stated that agencies were expected to focus on real property cost savings initiatives under way and planned in developing their Real Property Cost Savings and Innovations Plans. As a result, for example, DOE officials stated that they did not identify any new cost savings to meet their cost-savings target and DOJ officials told us they obtained information from their bureaus about projects already planned or ongoing to identify the “low-hanging fruit” for potential cost savings. Based on our discussions with agency officials in our review, we identified two additional factors that led agencies to report savings from ongoing and planned activities: the individual cost-savings targets established for each agency, and the timeframes set forth by the memorandum.
Cost savings targets: Individual cost savings targets played a role in how the agencies in our review determined their reported savings. Agency officials told us that they developed initial targets, as required by the July 2010 OMB guidance, by estimating the savings that could be derived from activities planned or underway at the time the memorandum was issued. However, according to agency officials, OMB subsequently increased the targets in 2011. OMB staff told us that the revised targets were meant to be realistic and also to encourage agencies to think beyond the traditional savings associated with real property. To assist agencies in identifying additional savings areas, OMB developed a best practices document that highlighted various types of savings that could be reported consistent with the requirements of the June 2010 memorandum. Most of the agency officials in our review told us that they did not have difficulty in meeting their revised targets after having discussions with OMB about the variety of savings that could be included. However, two agencies in our review, GSA and DHS, reported savings that fell short of their savings targets. According to GSA officials, the agency was conservative in reporting its overall savings achieved and only reported savings that could be supported by documentation that, according to GSA, were in the spirit of the memorandum. DHS officials told us that its revised savings target was not realistic in terms of the savings the agency could achieve in the 2-year timeframe established by the memorandum. Officials from USDA told us that once they had exceeded their cost savings target, they did not consider other areas for reporting potential savings that might have been achieved. Table 3 highlights the initial savings targets that the agencies proposed in their cost savings plans, compared to the savings targets that were established on Performance.gov and the savings that were ultimately reported. 
Time frames: Officials from some of the agencies in our review also told us that the time frames set forth by the memorandum drove them to report savings from activities that were already planned or under way. For example, DHS officials told us that the disposal savings they reported were from disposals that occurred during the 3-year time period specified in the memorandum, but were planned before the June 2010 memorandum was issued. DHS officials told us that, on average, disposals take 3 to 5 years to accomplish. Similarly, GSA officials told us that they use a tier system to evaluate the condition of their assets and place into the disposal category those assets that the agency plans to dispose of in the next 5 years. Thus, some of the disposal savings GSA reported were from assets it had already planned to dispose of at the time the June 2010 memorandum was issued and that were subsequently disposed of by the end of fiscal year 2012. In addition, given that it takes several years for savings from real property initiatives to be realized, agency officials told us that the timeframes established by the memorandum made it more likely for savings to be reported in certain categories over others. For example, GSA officials told us that when the memorandum was first issued there was an expectation that the largest cost savings would be reported from disposals, but this did not transpire in part because of the time it takes to dispose of properties. Agency officials also told us, and we have found in prior work, that the costs associated with disposals are often significant, making it difficult to realize disposal savings in a short time period. Similarly, we found that agencies did not report a large amount of innovation savings over the time period covered by the memorandum compared to other categories. 
Agency officials in our review told us that savings from innovation activities, particularly those resulting from telework initiatives, will increase in the future as telework is implemented more widely. For example, DHS officials told us that while they only reported $2 million in innovation savings stemming from their headquarters’ flexible workspace initiative over the 2010-to-2012 time period, the agency expects to achieve greater departmentwide savings starting in 2013 as the initiative is more widely implemented. Finally, agency officials in our review told us that reporting savings from cost avoidance measures—those savings that resulted because a planned action did not take place—was necessary to meet their targets in the timeframe required by the memorandum. For example, agencies reported space management savings as a result of not pursuing an approved lease prospectus for additional space or from reduced budgets for planned real estate activities, in addition to savings that were the result of consolidating space or terminating leases. The following examples illustrate some of the largest cost avoidance and savings measures reported by our selected agencies:

DOE reported $412 million in space management savings based on funds related to real property expenditures it would have requested in its fiscal year 2011 and 2012 budgets for the Yucca Mountain Nuclear Waste Repository project (Yucca Mountain). DOE had terminated its licensing efforts and shut down the project in 2010. In addition, DOE officials told us that after their initial savings target was increased, they included deferred maintenance eliminated by disposals in their reported cost savings.

DHS reported $126 million in space management savings from not pursuing a lease prospectus for 1.8-million square feet in new building space to accommodate employees the agency anticipated hiring.
State reported $80 million in innovation savings over the fiscal year 2010-to-2012 time period from property exchanges, in which the agency swaps one of its properties to acquire another property. State also reported $58.2 million in space management savings because the agency was appropriated less than what it requested in its 2010 and 2011 budgets for a particular account used for security, rehabilitation, and repairs at its facilities. State included savings from the property exchanges and funding received that was lower than its budget request after its initial savings target was increased.

USDA reported $229 million in space management cost savings from funds that Congress rescinded from the agency’s appropriations for 55 construction projects for Agricultural Research Service buildings and facilities. For example, $17 million in previously appropriated funds were rescinded for a research laboratory in Pullman, Washington, and about $16 million was rescinded for a national plant and genetics security center in Columbia, Missouri. USDA officials told us these project rescissions were included in the agency’s reported savings after its initial savings target was increased by OMB.

The guidance OMB provided to agencies for implementing the requirements of the June 2010 memorandum was unclear and lacked reporting standards. The unclear guidance led the agencies in our review to interpret the guidance differently and report savings inconsistently. Specifically, the guidance did not establish common ground rules, such as a clear definition of the term “cost savings,” that, according to our cost estimating and assessment guide, help ensure that data are consistently collected and reported.
In particular, agency officials in our review told us that there was some uncertainty about the types of savings that could be reported, particularly whether cost avoidance measures could be reported, for example:

GSA officials told us that the OMB guidance was not specific about whether cost avoidance measures could be included in the reported savings. These officials stated that this was a challenge in determining the cost savings that could be reported in response to the June 2010 memorandum.

State officials also told us they were initially unsure whether they could report the cost avoidance associated with the previously mentioned reduction in their budget as savings, as well as savings from value-engineering improvements.

DOJ officials told us there was uncertainty about whether cost avoidance savings could be included and whether to include only those savings that were actual budgetary savings, or if savings that were reprogrammed for other purposes could also be included.

Although some agency officials in our review told us that the guidance was not clear on what could be considered a savings, all of the agencies in our review reported savings from cost avoidance measures, as previously discussed. In addition, the guidance and categories established by OMB on Performance.gov were broad. Agency officials in our review told us that they worked with OMB staff to understand the types of savings that could be reported under these categories. However, the categories lacked specific detail and standards for how the savings should be determined and reported to help ensure reliability. For example, for the disposal category, agencies were to report operations and maintenance costs avoided during the fiscal year 2010-to-2012 time period. However, the guidance did not specify for how long agencies were supposed to capture these costs.
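The effect of this silence on how long to count avoided costs can be made concrete with a minimal sketch. The dollar figure below is hypothetical, not from the report; the two year-counts are simply two plausible readings of the guidance.

```python
# Hypothetical property disposed of in fiscal year 2010: assume $1.5 million
# in annual operations and maintenance (O&M) costs eliminated (illustrative).
annual_om_savings = 1.5  # $ millions per year

# Reading A: count O&M savings only in the year the disposal occurred.
reported_one_year = annual_om_savings * 1

# Reading B: count O&M savings for each remaining year of the fiscal year
# 2010-2012 reporting window (3 years for a 2010 disposal).
reported_three_years = annual_om_savings * 3

# Same disposal, same guidance -- a threefold difference in reported savings.
print(reported_one_year, reported_three_years)
```

Under these assumptions, one agency would report $1.5 million and another $4.5 million for an identical disposal, which is the kind of inconsistency the report describes.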
As a result, the five agencies in our review that reported disposal savings made their own assumptions about the length of time in which to report savings from eliminated operations and maintenance costs. For disposals in the year 2010, for example, some agencies reported 1 year of operations and maintenance savings in the year in which the disposal occurred, whereas other agencies reported up to 3 years of operations and maintenance savings for disposals occurring in 2010 (see table 4). USDA officials told us that they believed it would not be fair to count more than 1 year of operations and maintenance savings for each of their disposal properties, whereas DOE officials told us that they reported up to 3 years of annualized operations and maintenance savings if a property was disposed of in fiscal year 2010 because, as discussed previously, OMB’s overall guidance encouraged agencies to look for savings from fiscal year 2010 through 2012. Similarly, OMB guidance did not specify whether agencies could report cost savings from deferred maintenance. We found that two of the five agencies—DOE and GSA—reported the eliminated deferred maintenance or repair and alteration costs associated with their disposals while three agencies did not. We also found instances where agencies reported similar types of savings in different categories. For example, savings associated with eliminating leases were included in the space management category on Performance.gov and we found that State reported them as such; however, we found that DHS reported savings from eliminating leases as disposal savings. Similarly, GSA reported savings from property exchanges under space management, while State reported this type of savings under innovation. The OMB guidance did not specify how these types of savings were to be reported. Our guide for assessing the reliability of data identifies consistency as a key component of reliability. 
In particular, consistency is important for ensuring that data are clear and well defined enough to yield similar results in similar analyses. However, as the previous examples illustrate, the lack of detailed standards and use of broad cost-savings categories led agencies in our review to interpret the guidance differently and report cost-savings information inconsistently. OMB staff told us that the cost-savings categories established on Performance.gov were intentionally broad to encourage innovation in the types of savings that could be achieved through better management of real property. However, the inconsistencies we identified make it difficult for the reported savings to have collective meaning that is reliable for decision-makers. In addition to interpreting the OMB guidance for implementing the June 2010 memorandum differently, we also found several instances in which agencies’ reported savings did not meet the requirements of the memorandum and guidance. For example, OMB’s guidance specifically stated that agencies should report the net savings, which factor in costs to achieve savings, in their overall savings total. Despite this, we found instances in which some agencies did not deduct costs in their reported savings, for example: State and DHS did not deduct costs associated with disposals in their reported savings. State officials told us that the costs associated with the approximately $114 million in disposal savings reported over the 2010-to-2012 time period were about $4 million. DHS officials told us that costs were not deducted in the demolition of DHS-owned assets. DHS reported $565,000 in annual operating-cost savings from 54 demolitions in fiscal year 2011 and almost $2 million in annual operating-cost and rent savings from 245 demolitions in fiscal year 2012. DHS officials did not know the costs associated with these demolitions.
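OMB's net-savings requirement amounts to a simple deduction that the examples above skipped. A minimal sketch, using the approximate State figures quoted in the report (roughly $114 million in gross disposal savings against about $4 million in costs):

```python
def net_savings(gross_savings: int, implementation_costs: int) -> int:
    """Net savings per OMB's guidance: costs incurred to achieve the
    savings are deducted from the gross amount before reporting."""
    return gross_savings - implementation_costs

# State's disposals, per the report: ~$114 million gross, ~$4 million in costs.
print(net_savings(114_000_000, 4_000_000))  # 110000000
```

In this case reporting gross rather than net overstated the disposal savings by roughly 3.5 percent; for DHS's demolitions, where officials did not know the costs at all, the overstatement cannot even be estimated.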
DOE deducted the costs associated with some of its reported disposal savings, but did not do so if the disposals were carried out by its Office of Environmental Management. DOE officials told us that after discussions with OMB staff, they decided not to deduct the costs associated with disposals carried out by this program office because of its mission to deactivate and decommission contaminated facilities. DOE estimated in its initial cost savings plan that including these implementation costs would have resulted in a net loss of almost $900 million for the agency’s disposals over the fiscal year 2010-to-2012 time period. DHS reported $2 million in innovation savings from reducing space due to a pilot flexible workspace initiative but, according to DHS officials, inadvertently did not deduct the one-time costs associated with reconfiguring the space in its overall reported savings. According to DHS officials, the one-time costs to reconfigure the space would have equaled 75 percent of the 1-year savings the agency has realized. We also found instances in which agencies reported cost savings outside the fiscal year 2010-to-2012 time period required by the June 2010 memorandum, for example: GSA reported $50 million in space management savings from purchasing a building in 2012 that the agency previously leased. GSA reported these savings based on the purchase price, which was $50 million less than the most recent appraisal of the building. Although GSA expects to realize savings over time from purchasing this building instead of leasing it, it is unclear that the difference between the purchase price and the appraised value would represent savings if, for example, no buyers were willing to pay the appraised value.
Furthermore, while we have found that ownership is often more cost-effective than leasing in the long term, GSA would have realized only a small fraction of the savings related to ownership that would have accrued during the timeframe established by the June 2010 memorandum. Similarly, GSA reported $10 million in space management savings from a property exchange with the City of San Antonio, in which the agency exchanged a courthouse and training facility for a parcel of land to construct a new courthouse. However, GSA did not obtain ownership of the city’s site until 2013, and the city will not take ownership of the GSA property until after construction of the new courthouse is completed, on a date that has yet to be determined. GSA officials told us that they reported these savings in response to the June 2010 memorandum because the agreement to enter into the exchange occurred in 2012. USDA reported rent savings from office closures, some of which did not occur until fiscal year 2013. According to USDA officials, some of the office closures that had been planned for fiscal year 2012 were delayed and did not occur until fiscal year 2013. These 21 office closures accounted for about $4 million of the savings reported by the agency. DOJ reported more than $2 million in savings from consolidating six community corrections offices and the National Institute of Corrections Academy, which, according to Bureau of Prisons officials, took place in 2007 and 2008. Officials from the Bureau of Prisons stated that the reported savings were based on the estimated rent or lease amounts the agency would have incurred in the time period covered by the memorandum through renewed agreements, had the consolidations not occurred. Finally, we found instances in which some agencies in our review reported savings from non-real estate activities in their totals.
For example, DHS included $30,000 in reduced transit benefits in the $2 million in innovation savings it reported from increased telework due to its flexible workspace initiative, and GSA reported $11.6 million in sustainability savings from a reduction in its fiscal year 2012 budget for travel costs and building studies. GSA officials told us that building studies involve on-site inspections, and therefore require travel, and that GSA considers decreases in travel and building studies both economically and environmentally sustainable. However, it is unclear how these savings relate to reducing energy use for the agency’s assets. OMB staff told us that the savings reported by the agencies should have been tied to real property, and that if an activity was the result of a real estate action, then the savings was justifiable. The guidance issued by OMB was specific to implementing the June 2010 memorandum for achieving $3 billion in real property cost savings, an initiative which was completed as of September 30, 2012. We also found that the documentation of agencies’ reported savings was limited because OMB did not establish specific standards that required agencies to provide detailed information in support of their reported cost savings or identify how OMB planned to review the savings agencies reported. According to our cost-estimating and assessment guide, validating cost estimates, including savings estimates, is a best practice for ensuring that estimates are well-documented, comprehensive, accurate, and credible. However, OMB staff told us that they did not have the resources to review a detailed accounting of agencies’ reported savings and, instead, required agencies to provide quarterly summaries highlighting the savings the agencies planned to report along with any success stories of unique savings examples. 
In addition, OMB staff told us they were in constant communication with the agencies about their reported savings, as well as with other OMB staff knowledgeable about the agencies’ budgets and programs, to ensure that the reported savings met the requirements of the memorandum. In reviewing the information provided by the agencies, OMB staff told us they had identified instances in which agencies reported savings that did not meet the requirements of the June 2010 memorandum—for example, one agency reported savings that occurred outside the time frames established in the memorandum—and adjusted the agency’s overall savings on Performance.gov accordingly. However, OMB staff also told us that it may be more efficient to obtain detailed documentation of an agency’s reporting up front, to limit the amount of follow-up required. In addition, OMB did not include detailed information about the types of savings agencies reported in response to the memorandum on Performance.gov. For example, Performance.gov summarizes the total cost savings reported by each agency in each of the cost savings categories, and includes general information about the types of activities agencies reported as savings, but does not include specific information about the types of savings that were included in the totals. As a result, the overall transparency of information on Performance.gov is limited for understanding the types of savings agencies reported across the categories. Our cost estimating and assessment guide has shown that a key factor for ensuring the reliability and transparency of cost estimates, including savings estimates, is that they include an appropriate level of detailed documentation, including source data, so that they can be recreated, updated, or understood.
As part of this review, we obtained more detailed documentation supporting the agencies’ reported savings, which allowed us to identify the issues illustrated in this report and understand the types of savings agencies reported to meet the requirements of the memorandum. Requiring more detailed documentation and establishing a more systematic process for reviewing and validating the reported cost savings could have allowed OMB to identify, in a timely manner, some of the reporting inconsistencies that resulted, while also ensuring that the savings met the requirements of the memorandum. Furthermore, including more detailed information on Performance.gov could have enhanced the transparency of actions agencies took to generate and report savings in response to the memorandum. The June 2010 memorandum had positive effects in the view of agency officials. For example, some agency officials told us that the memorandum allowed them to accelerate projects in their pipeline or gave them a stronger basis to encourage savings opportunities within their agencies. Agency officials also told us that the memorandum provided them with a better understanding of cost savings opportunities within their agencies, particularly at agencies in which real property management decisions are decentralized, stating that it allowed them to have a more comprehensive view of opportunities to collectively improve the agency’s real property footprint. Finally, some agency officials cited improved collaboration among agencies and with OMB on real property issues. OMB staff told us that the memorandum served as an important first step for informing future real property reform efforts and that this initiative laid the groundwork for thinking about real property more holistically as a management tool.
In particular, OMB staff said that the memorandum encouraged agencies to think more creatively about ways in which real property can be reformed to generate savings, not just from a budgeting perspective, but through more innovative uses of real estate. In discussions on our findings, OMB staff stated that while they initially did not want to be too prescriptive in how agencies met the requirements of the memorandum, they recognized that more detailed guidance, as well as more refined cost-savings categories and metrics, are likely needed in future real property cost savings initiatives. Furthermore, OMB plans to use the lessons learned from this initiative to emphasize program outcomes and increase the transparency of future real property reform efforts. The current fiscal climate and emphasis on good management practices will continue to place pressure on federal agencies to find additional opportunities for cost savings. Better managing the government’s real property footprint through disposing of excess property and managing existing assets more efficiently will play a role in efforts to realize such savings. Although the cost savings initiative established by the June 2010 memorandum is now complete, recent initiatives, such as OMB’s Freeze the Footprint policy and proposed CPRA legislation, place an emphasis on generating space and cost savings to the federal government. As agencies continue to identify ways to improve the management of their real property in response to these and other initiatives, such as increasing telework, it is critical to ensure that any savings reported as a result of such improvements are meaningful and transparent. However, as our review has demonstrated, clear and specific standards are needed to ensure that savings data are consistently reported and reviewed so that they are sufficiently reliable and transparent to document performance and support decision making.
Without more detailed standards for identifying and reporting cost savings across federal agencies, decision-makers will be ill-equipped not only to assess the results of real property reform efforts in the federal government, but also to take actions that will maximize these savings in the future. To improve future real property cost-savings initiatives and promote reliability and transparency, we recommend that the Director of OMB, in collaboration with FRPC agencies, develop clear and specific standards for: identifying and reporting savings that help ensure common understanding of the various types of cost savings; consistently reporting across categories and agencies; and sufficiently documenting, validating, and reviewing results. We provided a draft of this report to OMB, DHS, DOE, DOJ, GSA, State, and USDA for review and comment. OMB generally agreed with our recommendation. Specifically, OMB stated that the June 2010 memorandum had positive effects on federal real-property management, and acknowledged that there are opportunities to improve future cost-savings efforts, as identified in our report. OMB stated that our recommendation was generally reasonable as it applies to prospective initiatives that directly address cost savings. DHS, DOE, and GSA provided technical comments that we incorporated as appropriate. DOJ, State, and USDA had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Director of OMB; the Administrator of GSA; the Assistant Attorney General for Administration, Department of Justice; the Secretaries of Homeland Security, Energy, State, and Agriculture; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Our review focused on the administration’s June 2010 memorandum that directed federal agencies to achieve $3 billion in real property cost savings by the end of fiscal year 2012. Our objectives were to (1) describe the cost savings agencies reported in response to the June 2010 presidential memorandum and how those savings were identified by selected agencies and (2) determine the extent that selected agencies’ reporting of savings was reliable and transparent and how, if at all, the reporting of real property cost savings could be improved. To address these objectives, we reviewed the June 2010 memorandum and subsequent guidance issued by the Office of Management and Budget (OMB) to understand the requirements of the memorandum, including the types of savings that could be reported and how those savings were to be reported. We also reviewed our prior work on excess and underutilized property to understand issues previously identified with agencies’ reported cost savings. To describe the cost savings agencies reported in response to the June 2010 memorandum and how those savings were identified by selected agencies, we reviewed and analyzed information on the administration’s Performance.gov website, including agencies’ individual cost savings targets, the total amount of savings reported by the agencies at the end of fiscal year 2012, and the amount of savings reported across the four categories—disposals, space management, sustainability, and innovation—established by OMB. 
We also obtained and analyzed documentation on the cost savings reported by six civilian agencies: the General Services Administration and the Departments of Agriculture, Energy, Homeland Security, Justice, and State. In particular, we reviewed the agencies’ Real Property Cost Savings and Innovation Plans developed in response to the June 2010 memorandum and documentation supporting the cost savings reported by each of the agencies. We also conducted in-depth interviews with officials from these agencies to understand the processes they used to identify and report cost savings over the 2010 to 2012 time period. We conducted interviews with OMB staff about the types of savings agencies reported and obtained documentation on the savings our selected agencies reported to OMB. We compared the information on Performance.gov to the documentation provided to us by each of the agencies and to the documentation that the agencies submitted to OMB to identify and reconcile any discrepancies, but did not systematically evaluate or verify the methods agencies reported undertaking to achieve savings, as that was outside the scope of our review. Based on our review of agency documents and interviews with officials, we determined that the data were reliable for the purpose of describing cost savings as reported by the six agencies. We selected the six agencies because they had the largest cost savings targets for civilian agencies, collectively accounting for about 75 percent of the $3 billion savings goal; reported a variety of cost savings measures to achieve their savings target; and had a range of property types in their real property portfolios. 
To determine the extent that selected agencies’ reporting of savings was reliable and transparent and to identify how, if at all, reporting of real property cost savings could be improved, we reviewed the agencies’ reported cost savings against key factors identified in our data-reliability and cost-estimating guidance. In particular, we analyzed the savings reported by the six agencies in our review to determine whether similar types of savings were consistently reported, met the requirements set forth by the memorandum, and were well-documented. For example, we analyzed the savings the selected agencies reported in each of the categories established on Performance.gov to determine whether the agencies consistently determined the amount of savings reported within each of the categories and whether the agencies reported similar types of savings in the same categories. We also analyzed the savings reported by the selected agencies to determine whether they occurred within the time frames required by the memorandum, included the costs to implement the savings measure, and were tied to a real estate action. Finally, we reviewed the documentation the selected agencies provided to OMB to determine whether the information was clear and detailed enough to support their reported savings and to understand how OMB reviewed the savings reported to ensure they were reliable and met the requirements of the memorandum. We conducted in-depth interviews with officials from our selected agencies as well as OMB staff to further understand how they determined the cost savings reported over the 2010 to 2012 time period, challenges to meeting the requirements of the memorandum, and how similar efforts could be improved in the future. We conducted this performance audit from December 2012 to October 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. David J. Wise, (202) 512-2834 or [email protected]. In addition to the contact named above, David Sausville, Assistant Director; Russell Burnett; Kathleen Gilhooly; Nancy Lueke; Nitin Rao; Amy Rosewarne; and Jack Wang made key contributions to this report.

In June 2010, the President issued a memorandum directing federal agencies to achieve $3 billion in real property cost savings by the end of fiscal year 2012 through a number of methods, including disposal of excess property, energy efficiency improvements, and other space consolidation efforts. GAO was asked to review the cost savings agencies reported in response to the memorandum. This report (1) describes the cost savings agencies reported in response to the June 2010 presidential memorandum and how those savings were identified by selected agencies and (2) determines the extent that selected agencies' reporting of savings was reliable and transparent, and how, if at all, reporting of real property cost savings could be improved. GAO reviewed OMB guidance for implementing the memorandum, reviewed the cost savings agencies reported on the administration's Performance.gov website, and obtained documentation from and interviewed officials from six agencies and OMB staff about the agencies' reported cost savings. GAO selected the agencies based on their overall cost-savings targets and the types of savings measures implemented, among other things. Agencies reported real property cost savings of $3.8 billion in response to the June 2010 presidential memorandum from disposal, space management, sustainability, and innovation activities.
Space management savings, defined by the Office of Management and Budget (OMB) as those savings resulting from, among other things, consolidations or the elimination of lease arrangements that were not cost effective, accounted for the largest portion of savings reported by all agencies, and for about 70 percent of the savings reported by the six agencies GAO reviewed--the General Services Administration (GSA) and the Departments of Agriculture (USDA), Energy (DOE), Homeland Security (DHS), Justice (DOJ), and State (State). The requirements of the memorandum, as well as agencies' individual savings targets and the time frame for reporting savings, led the selected agencies to primarily report savings from activities that were planned or under way at the time the memorandum was issued. GAO's review of the six selected agencies identified several problems that affect the reliability and transparency of the reporting of cost savings in response to the June 2010 memorandum. In particular, the memorandum and subsequent guidance issued by OMB were not clear on the types of savings that could be reported, particularly because the term "cost savings" was not clearly defined. For example, officials from several agencies GAO reviewed said the guidance was unclear about whether savings from cost avoidance measures could be reported. In addition, the agencies interpreted the guidance differently and, in some cases, did not follow the guidance, practices that led to inconsistent reporting, for example: Agencies made different assumptions in reporting disposal savings: Two agencies reported one year of avoided operations and maintenance savings for the year in which the disposal occurred, while three agencies reported up to 3 years of savings depending on when disposals occurred during the 3-year period. Some agencies did not deduct costs associated with their disposals: State and DHS did not deduct the costs associated with their reported disposal savings. 
DOE deducted costs for some of its reported disposal savings, but did not deduct costs for disposals carried out by its Office of Environmental Management. Some agencies reported savings outside the time frame of the memorandum: GSA reported savings from a property exchange but did not obtain ownership of the site until 2013, after the end of fiscal year 2012. USDA reported savings from office closures that occurred in fiscal year 2013. Finally, OMB did not require agencies to provide detailed documentation of their reported savings or include specific information about agencies' reported savings on Performance.gov, limiting transparency. Agency officials stated that the memorandum broadened their understanding of real property cost-savings opportunities. However, establishing clearer standards for identifying and reporting savings would improve the reliability and transparency of the reporting of cost savings and help decision-makers better understand the potential savings of future initiatives to improve federal real-property management. GAO recommends that the Director of OMB establish clear and specific standards to help ensure reliability and transparency in the reporting of future real-property cost savings. OMB generally agreed with GAO's recommendation.
FCA is an independent federal regulatory agency responsible for supervising, regulating, and examining institutions operating under the Farm Credit Act of 1971, as amended. The act also authorizes FCA to assess the institutions it regulates to provide funds for its annual operating costs and to maintain a reserve amount for contingencies, as applicable. FCA regulations allow several methods for FCA to assess and apportion its administrative expenses among the various types of institutions it oversees. These institutions include primary market institutions (banks and associations) and related entities that collectively comprise the System, in addition to Farmer Mac (a secondary market entity). As of September 30, 2000, the System (excluding Farmer Mac) included 172 institutions holding assets of about $91 billion; Farmer Mac’s assets were about $3 billion. The System is designed to provide a dependable and affordable source of credit and related services to the agriculture industry. FCA regulates and examines Farmer Mac, the secondary agricultural credit market entity, through the Office of Secondary Market Oversight (OSMO), which is an independent office with a staff of two within FCA. Figure 1 depicts the regulatory relationships among FCA, OSMO, the System, and Farmer Mac. Farmer Mac was created to provide a secondary market to improve the availability of agricultural and rural housing mortgage credit to lenders and borrowers. Both the System and Farmer Mac are government-sponsored enterprises (GSE). Although FCA does not receive any funds from the U.S. Treasury for its operating budget, its annual budget is subject to the annual congressional appropriations process, which limits the dollar amount that the agency can spend on administrative expenses. For 2000, that amount was $35.8 million. FCA raises operating funds from several sources, but most of these funds are from assessments on the institutions that it regulates.
Assessments accounted for about 94 percent (including 2 percent for Farmer Mac) of the funding for the FCA’s 2000 operating budget, with the balance coming from reimbursable services, investment income, and miscellaneous income (see fig. 2). FCA officials define administrative expenses as generally comprising personnel compensation, official travel and transportation, relocation expenses, and other operating expenses necessary for the proper administration of the act. FCA also has reimbursable expenses, which include the expenses it incurs in providing services and products to another entity. The five other federal financial regulators discussed in this report have oversight responsibility for various types of institutions. Table 1 shows these regulators, along with the types of institutions that they regulate. For purposes of comparison, we group the regulators into two categories according to the types of market primarily or exclusively served by the institutions they regulate, primary and secondary market entities. Of the five regulators, four—FHFB, NCUA, OCC, and OTS—regulate primary market institutions. OFHEO regulates secondary market entities. FHFB regulates the 12 Federal Home Loan Banks (FHLBanks) that lend on a secured basis to their member retail financial institutions. Under certain approved programs and subject to regulatory requirements, the FHLBanks also are authorized to acquire mortgages from their members. By law, federal financial regulators are required to examine their regulated institutions on a periodic basis (e.g., annually). The primary purpose of these supervisory examinations is to assess the safety and soundness of the regulated institution’s practices and operations. The examination process rates six critical areas of operations—capital adequacy (C), asset quality (A), management (M), earnings (E), liquidity (L), and sensitivity to market risk (S), or CAMELS. 
The rating system uses a 5-point scale (with 1 as the best rating and 5 as the worst rating) to determine the CAMELS rating that describes the financial and management condition of the institution. Examiners issue a rating for each CAMELS element and an overall composite rating. The results of an examination, among other things, determine the extent of ongoing supervisory oversight. To varying degrees, the regulators also have responsibility for ensuring their institutions’ compliance with consumer protection laws. Moreover, two GSE regulators (FCA and FHFB) have responsibilities for ensuring compliance with their respective GSEs’ statutory missions. Mission and safety and soundness oversight for Fannie Mae and Freddie Mac are divided. The Department of Housing and Urban Development has general regulatory authority over Fannie Mae and Freddie Mac to ensure compliance with their missions, while OFHEO has the authority for safety and soundness regulation. To meet the first objective, we examined agency budget reports and financial documents and interviewed FCA and Farmer Mac officials. We compared FCA’s reported actual administrative expenses (total operating expenses less reimbursable costs) with congressionally imposed limits; reviewed relevant statutes, legislative history, FCA regulations, and FCA legal opinions; and developed a 5-year trend analysis. To address the second objective, we interviewed agency officials, reviewed relevant statutes and regulations, and analyzed data on operational funding obtained from FCA and the five other federal financial regulatory agencies. We selected these five agencies because they use funding mechanisms that are similar to FCA’s to support their operating budgets. We did not independently verify the accuracy of the data that the regulators provided or review any agency’s accounting records. We obtained comments on a draft of this report from FCA and the five other federal financial regulatory agencies. 
FCA’s comments are summarized at the end of this report. Except for OFHEO, all agencies provided technical comments, which we incorporated as appropriate. We conducted our work from January to July 2001 at FCA headquarters in McLean, VA, and at the headquarters of the other five regulators in Alexandria, VA, and Washington, D.C. We conducted our review in accordance with generally accepted government auditing standards. Over the last 5 years, FCA has reduced expenditures for administrative expenses, reflecting the agency’s success in controlling operating costs. Staff reductions—due, in part, to consolidation within the System—have accounted for most of the decline in administrative expenditures. While actual administrative expenditure amounts have varied from year to year, FCA has continued to operate below congressionally approved spending levels. Significant dollar decreases in personnel costs were largely responsible for the decline in administrative spending, which fell 5.8 percent over the period even as federal government expenditures grew at an 8.59 percent rate. Despite increases in purchases of other contractual services and equipment, administrative costs remained below the 1996 level throughout the second half of the 1990s and into 2000 (see table 2). The decline was not spread evenly over the 5-year period (see fig. 3). Most of the decline occurred in 1996-98, and administrative spending has increased each year since then. For 2001, administrative expenditures are expected to rise by $852,000, or 2.6 percent, over their 2000 level, primarily because of rising costs for personnel, travel, and transportation. Our analysis of FCA data shows that personnel costs accounted for over 80 percent of the FCA administrative expenses during the 5-year study period.
But these costs (staff salaries and benefits) also decreased the most in dollar and percentage terms during the period, falling by about $4.1 million (13 percent), and the share of personnel costs in administrative expenditures fell from 88.7 percent to 81.7 percent. Reductions in benefits were largely responsible for this decline; the amount spent on staff benefits dropped 36.3 percent, falling from $7.3 million in 1996 to $4.6 million in 2000. Decreases in the relocation allowances, severance pay, and buyouts necessitated by the consolidation of the System accounted for most of the decline. FCA officials told us that the number of employees fell almost 15 percent—from 331 in 1996 to 282 in 2000—in part, because of the industry consolidation. The number of institutions in the System dropped by 28 percent, declining from 239 in 1996 to 172 in 2000. For 2001, however, FCA projects personnel costs to increase by 5.3 percent to about $28.8 million. As a result, our analysis shows that these costs will continue to account for a substantial percentage of administrative costs. FCA officials attribute the increase to the rising cost of employee salaries and performance bonuses. Equipment purchases and other contractual services accounted for the largest increases in administrative expenditures in 1996 through 2000. Equipment purchases experienced the largest growth but fell behind contractual services in actual dollar increases. Equipment purchases rose about $1.1 million (from $395,000 in 1996 to $1.5 million in 2000), which was about a 268-percent increase over 1996. According to an FCA official, computer replacements and upgrades, which the agency undertakes every 3 years, accounted mostly for the increase. FCA officials expect equipment purchases to decline $202,000, or about 14 percent, in 2001. Other contractual services represented a growing percentage of FCA administrative costs, increasing from 2.8 percent in 1996 to 6.8 percent of the 2000 total. 
These expenses consisted mostly of consulting services for a new financial management system purchased from another government agency. They accounted for the largest dollar increase (about $1.3 million) and the second-largest percentage increase (about 130 percent) in administrative expenditures, climbing from $992,000 in 1996 to $2.3 million in 2000. For 2001, however, FCA expects this cost component to decline by $209,000, or 9.2 percent. Travel and transportation expenses declined (by about 10 percent) between 1996 and 2000. FCA officials told us the decrease was largely the result of a decline in the number of employee relocations. For 2001, FCA projects these costs to decrease by $231,000, or about 15 percent. All other expenses, a category that includes rent, communications, and utilities; printing and reproduction; supplies and materials; and insurance claims and indemnities, decreased by $79,000, or 8.3 percent, over the period, primarily because of decreases in supplies and materials. For 2001, FCA expects these costs to increase by 4.3 percent. Figure 4 shows FCA administrative expenses for 2000 by expense category. Each fiscal year, Congress sets a limit on the amount of money FCA can spend on administrative expenditures. However, Congress did not set a spending limit for 1996. For each year from 1997 to 2000, FCA was in compliance with its budget limits for administrative expenses (see table 3). FCA and the other federal financial regulators do not receive any federal money to fund their annual operating budgets, relying primarily on assessment revenue collected from the institutions they oversee. In general, the regulators assess institutions using either complex asset-based formulas or less complex formulas that are based on other factors, depending on the type of institution. The different funding methodologies are designed to ensure that each institution pays an equitable share of agency expenses.
FCA uses two different methods of calculating assessments on the institutions it regulates—one for all primary market entities and the other for its secondary market entity, Farmer Mac. The methodology used for primary market entities, which is complex, is based on the institutions’ asset holdings and economies of scale as well as on the supervisory rating each institution received during FCA’s last periodic examination. The methodology used for Farmer Mac is less complex. FCA calculates the assessment on the basis of its own direct and indirect expenses, rather than on asset holdings. Direct expenses include the costs of examining and supervising Farmer Mac, while indirect expenses are the overhead costs “reasonably” related to FCA’s services. In general, the other federal financial regulators that regulate institutions similar to FCA’s use comparable methodologies to calculate assessments. The law requires that the assessments be apportioned “on a basis that is determined to be equitable by the Farm Credit Administration.” FCA’s current assessment regulations for banks, associations, and “designated other System entities” were developed in 1993 through the negotiated rulemaking process. Banks, associations, and the Farm Credit Leasing Services Corporation (Leasing Corporation) are assessed on the same basis (i.e., assets). According to an FCA official, the agency periodically reviews these rules but currently has no plans to modify them. FCA officials said that these rules are designed to equitably apportion the annual costs of supervising, examining, and regulating the institutions. For this reason, the methodology relies on asset “brackets” that are much like tax brackets and reflect economies of scale, since the costs of supervision rise as a regulated institution becomes larger; however, these costs do not increase as fast as asset growth. FCA “bills” the institutions annually, and the institutions pay their assessments on a quarterly basis. 
To calculate the assessments for banks, associations, and the Leasing Corporation, FCA first determines its annual operating budget, which could include a reserve for contingencies for the next fiscal year, then deducts the estimated assessments for Farmer Mac, other System entities, and any reimbursable expenses. What is left—the net operating budget—is the total amount that will be assessed. This amount is apportioned among the banks, associations, and the Leasing Corporation using a two-part formula. The net operating budget is divided into two components of 30 and 70 percent. (According to an FCA official, the 30/70 split was devised during the negotiated rulemaking process and represents the most equitable way to assess System institutions.) The first part of the assessment, covering 30 percent of the budget, is spread across institutions on the basis of each institution’s share of System risk-adjusted assets. For example, an institution whose assets equal 1 percent of System assets will have its assessment equal to 1 percent of this 30 percent of the FCA budget. The second part of an institution’s assessment is charged according to a schedule that imposes different assessment rates on assets over specified levels, with these marginal rates decreasing for higher levels of assets. For example, the assessment rate that an institution pays for its assets from over $100 million to $500 million is 60 percent of the assessment rate that it pays on its first $25 million in assets. Adding the 30-percent amount and the 70-percent amount together equals the general assessment amount. Table 4 shows the assessment rates for the eight asset “brackets.” The assessment rate percentages are prescribed by FCA regulation. The general assessment may be subject to these adjustments: a minimum assessment fee, a supervisory surcharge, or both.
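The two-part general assessment described above can be sketched in code. This is a minimal illustration, not FCA’s actual computation: the bracket boundaries and rate values below are hypothetical (the regulation’s actual rates appear in table 4, which is not reproduced here) and are chosen only so that the marginal rates decline with asset size; the minimum-fee and surcharge adjustments are omitted.

```python
# Hypothetical sketch of FCA's two-part general assessment for banks,
# associations, and the Leasing Corporation. The 30/70 split and the
# declining marginal rates mirror the description in the text; the
# bracket boundaries and rate values are illustrative only, set so the
# over-$100M-to-$500M rate is 60 percent of the first-bracket rate.
BRACKETS = [
    (25_000_000, 0.00050),    # first $25 million
    (100_000_000, 0.00040),
    (500_000_000, 0.00030),   # 60 percent of the first-bracket rate
    (float("inf"), 0.00020),
]

def general_assessment(institution_assets, system_assets, net_operating_budget):
    # Part 1 (30 percent of the budget): pro rata share of System
    # risk-adjusted assets.
    part1 = 0.30 * net_operating_budget * (institution_assets / system_assets)

    # Part 2: marginal rates applied bracket by bracket, much like tax
    # brackets, so assets above each threshold are charged a lower rate.
    # In practice the rate schedule is calibrated to recover the
    # remaining 70 percent of the budget across all institutions.
    part2, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if institution_assets > lower:
            part2 += (min(institution_assets, upper) - lower) * rate
        lower = upper
    return part1 + part2

# Hypothetical example: a $200 million institution in a $100 billion
# System, with a $30 million net operating budget.
fee = general_assessment(200e6, 100e9, 30e6)  # 18,000 + 72,500 = 90,500
```

A usage note: because part 2 is marginal, doubling an institution’s assets less than doubles its assessment, which is the economies-of-scale property the negotiated rule was designed to capture.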
The minimum fee of $20,000 applies only to institutions whose assessments are calculated at less than $20,000; these assessments are scaled upward, and no further charges are assessed. For institutions with assessments of more than $20,000, FCA may add a supervisory surcharge that reflects the institution’s financial and management conditions. The surcharge is based on the institution’s last supervisory examination rating. These ratings range from a high of 1 to a low of 5; a rating of 3, 4, or 5 can result in a surcharge ranging from 20 to 40 percent of the general assessment amount. The top-rated institutions (those rated 1 or 2) pay nothing over the general assessment. The variables in the formula allow FCA some flexibility in adjusting assessments to reflect its oversight costs. The formula not only reflects economies of scale but, by linking assessments with the financial and managerial soundness of the institutions, also seeks to ensure that the institutions that cost the most to supervise are paying their share. This approach relieves other entities within the System of bearing the cost of this additional oversight. FCA may adjust its assessments to reflect changes in its actual annual expenses and, if applicable, give institutions a credit against their next assessment or require them to pay additional assessments. Any credits are prorated on the basis of assessments paid by an institution. These credit adjustments are usually done at the end of the fiscal year. As required by law, FCA assesses Farmer Mac separately and differently from its primary market institutions. The law specifies that FCA’s assessment of Farmer Mac is intended to cover the costs of any regulatory activities and specifically notes a requirement to pay the cost of supervising and examining Farmer Mac. We could not identify any legislative history that addressed these provisions. 
FCA officials told us that they believed the difference between the statutory provisions for assessing banks, associations, and the Leasing Corporation and Farmer Mac is due to the difference in their assets—that is, unlike those institutions, Farmer Mac does not make loans. FCA developed the current assessment methodology for Farmer Mac in 1993. Farmer Mac’s assessment covers the estimated costs of regulation, supervision, and examination, but Farmer Mac is not assessed a charge for FCA’s reserve. The assessment includes FCA’s estimated direct expenses for these activities, plus an allocated amount for indirect or overhead expenses. In general, FCA uses the same estimated direct expenses and indirect expense calculations for Farmer Mac as for the “other System entities,” such as the Federal Farm Credit Banks Funding Corporation (Funding Corporation). Estimated direct expenses take into account the costs incurred in the most recent examination of Farmer Mac and any expected changes in these costs for the next fiscal year. We asked FCA officials whether and how the assessment formula they use for Farmer Mac enables them to compensate for risks in Farmer Mac’s business activities. They explained that the amount assessed for direct expenses increases if additional examination time is needed. FCA officials also noted that, as their data show, direct costs can rise due to other factors. For example, from 1999 to 2001, FCA invested considerable resources in developing a risk-based capital rule for Farmer Mac. During this time, FCA incurred unique costs that increased Farmer Mac’s assessment for those years. A proportional amount of FCA’s indirect expenses—that is, those expenses that are not attributable to the performance of examinations—is allocated to Farmer Mac. This amount is calculated as a relationship between the budget for a certain FCA office and FCA’s overall expense budget for the fiscal year covered by the assessment.
(The proportion for 2000 was 28.9 percent.) Multiplying the percentage by the estimated direct expenses attributable to Farmer Mac equals the amount of indirect expenses. The sum of the estimated direct and indirect expenses equals the estimated amount to be assessed Farmer Mac for the fiscal year. Indirect expenses would include, for example, the cost of providing personnel services and processing travel vouchers for OSMO. At the end of each fiscal year, FCA may adjust its assessment to reflect any changes in actual expenses. Other entities in the Farm Credit System, such as the Funding Corporation, are assessed separately using a methodology similar to the one used for Farmer Mac. The assets of this group of institutions differ from those of the previously discussed entities that FCA regulates. These institutions are assessed for the estimated direct expenses involved in examinations, a portion of indirect expenses, and any amount necessary to maintain a reserve. FCA estimates direct expenses for each entity on the basis of anticipated examination time and travel costs for the next fiscal year. Allocations for indirect expenses are calculated as a percentage of FCA’s total budgeted direct expenses (excluding those for Farmer Mac) for the fiscal year of the assessment. As with its assessments of other entities in the System, FCA may adjust its assessments to reflect any changes in actual expenses at the end of the fiscal year. FCA and regulators of similar types of institutions use assessment formulas of varying complexity to assess the institutions they oversee. In general, they use relatively complex formulas for primary market institutions and less complex formulas for secondary market entities. FCA’s method for assessing banks, associations, and the Leasing Corporation, which are all primary market institutions, is similar to that of most other federal financial regulators (NCUA, OCC, and OTS) that oversee primary market institutions.
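Before comparing the agencies’ methods further, the Farmer Mac assessment arithmetic described above (estimated direct expenses plus a proportional allocation of indirect expenses) can be sketched as follows. The 28.9 percent proportion is FCA’s reported 2000 figure; the direct-expense dollar amount is hypothetical.

```python
# Sketch of the Farmer Mac assessment arithmetic described in the text.
# The 28.9 percent indirect-expense proportion is the reported 2000
# figure; the direct-expense amount in the example is hypothetical.
INDIRECT_PROPORTION_2000 = 0.289

def farmer_mac_assessment(estimated_direct_expenses, indirect_proportion):
    # Indirect (overhead) expenses are allocated as a fixed proportion
    # of the estimated direct examination and supervision expenses.
    indirect = indirect_proportion * estimated_direct_expenses
    # The estimated assessment is the sum of direct and indirect
    # expenses; unlike banks and associations, Farmer Mac is not
    # assessed a charge for FCA's reserve.
    return estimated_direct_expenses + indirect

# Hypothetical example: $500,000 of estimated direct expenses.
assessment = farmer_mac_assessment(500_000, INDIRECT_PROPORTION_2000)
```

Because the only inputs are FCA’s own expense estimates, a year with unusually heavy examination or rulemaking work (such as the risk-based capital rule) raises the assessment directly, without any reference to Farmer Mac’s asset size.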
Most of the regulators use complex formulas that take into account a variety of factors, including the regulator’s budget, the institution’s asset size and examination rating, and economies of scale (see fig. 5). Like FCA’s, these assessments generally include a fixed component that is based on an institution’s asset holdings, plus a variable component derived by multiplying asset amounts in excess of certain thresholds by a series of declining marginal rates. The assessment amount may then be adjusted on the basis of various factors—for example, the institution’s financial condition. Again like FCA’s methodology, these formulas attempt to allocate regulatory costs in a way that reflects the agency’s actual cost of supervision. Institutions with a low examination rating pay an additional fee because they are likely to require more supervision than the top-rated institutions. NCUA and FHFB are the only regulators of primary market institutions that do not add a supervisory surcharge on the basis of an examination rating. However, NCUA does use a complex formula to determine an institution’s assessment amount, whereas FHFB uses a less complex formula. FHFB calculates assessments for the 12 FHLBanks on the basis of each bank’s total paid-in capital stock, relative to the total paid-in capital stock of all FHLBanks. FCA is the only primary market regulator that requires its institutions to pay a fixed minimum assessment amount (i.e., $20,000). Of the five other regulators we looked at, two—NCUA and OTS—reduce the assessments for qualifying small institutions. According to the report of the Assessment Regulations Negotiated Rulemaking Committee that developed the rule, the minimum assessment is required both as payment of a share of FCA regulatory costs and as a necessary cost of doing business as a federally chartered System institution.
The assessment methods of the two federal regulators that oversee secondary market entities are less complex than the methods applied to primary market institutions. For example, OFHEO’s method of assessing Fannie Mae and Freddie Mac, which is prescribed by law, is based on the ratio of each entity’s assets to their total combined assets. OFHEO does not regulate any other entities; thus, this simple formula readily meets the need to equitably apportion the agency’s operating costs. FCA administrative expenditures were lower in 2000 compared with 1996, due in part to reductions in staff because of System consolidation. Although administrative expenses are projected to increase for 2001 because of rising personnel and travel costs, they are expected to remain within the congressional spending ceiling. FCA is unique among federal financial institution regulators because it regulates both primary and secondary market entities. The methods FCA uses to assess the institutions it oversees are analogous to those used by virtually all of the regulators of similar institutions and are based on the types of assets the entities hold. FCA’s complex formula for assessing primary market institutions is comparable to the methods used by most regulators of other primary market institutions. These regulators oversee numerous entities of various sizes and complexities, and their complex assessment methods enable them to consider these attributes in assessing for the cost of examinations. The few secondary market entities, which include Farmer Mac, are all assessed using less complex methodologies. We received written comments on a draft of this report from the Chairman and Chief Executive Officer of FCA that are reprinted in appendix I. He agreed with the information presented in the draft report regarding FCA’s administrative spending between 1996 and 2000. FCA also provided technical comments that we incorporated where appropriate. 
The other federal financial regulators, except for OFHEO, provided technical comments on a draft excerpt of this report that we shared with them. We incorporated their technical comments into this report where appropriate. We are sending copies of this report to the Chairman of the Senate Committee on Agriculture, Nutrition, and Forestry; the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing and Urban Affairs, the House Committee on Financial Services, and the House Committee on Agriculture; and Michael M. Reyna, Chairman and Chief Executive Officer of the Farm Credit Administration. The report will be available on GAO’s Internet home page at http://www.gao.gov. If you have any questions about this report, please contact me or M. Katie Harris at (202) 512-8678. Joe E. Hunter was a major contributor to this report.

The Farm Credit Administration (FCA) regulates the farm credit system. Administrative expenses, which accounted for about 97 percent of FCA's total operating expenses of $34.5 million in fiscal year 2000, are funded primarily by assessments on the institutions that make up the system, including the Federal Agricultural Mortgage Corporation (Farmer Mac). This report (1) analyzes trends in administrative expenses for fiscal years 1996 through 2000 and (2) compares ways that FCA and other federal financial regulators calculate the assessments they need to fund their operations. GAO found that although FCA's administrative expenditures varied each year between 1996 and 2000, they remained below 1996 levels and stayed within congressionally imposed annual spending limits for each year during 1997 through 2000. Between 1996 and 2000, the agency experienced a decline in administrative spending of around $2 million, or 5.8 percent.
Personnel costs were the largest single expense, consistently accounting for more than 80 percent of administrative spending; thus, a 15 percent staff reduction also provided the greatest overall savings. Unlike many government agencies whose operations are funded by taxpayers' money, the federal financial regulators are self-funded agencies that rely primarily on assessments from the entities they regulate. In calculating these assessments, FCA and the other federal financial regulators use separate methodologies for primary and secondary market entities. |
DOE is the largest civilian contracting agency in the federal government; about 90 percent of its annual budget is spent on contracts for carrying out its activities and operating its facilities. In fulfilling their missions, DOE’s program offices are responsible for contracting for and overseeing the execution of the department’s major projects, many of which are first-of-a-kind efforts and thus involve substantial risk and may also be separate line items in DOE’s budget. For example: Environmental Management’s mission is to accelerate risk reduction and cleanup of the environmental legacy of the nation’s nuclear weapons program and government-sponsored nuclear energy research. Environmental Management has used a single sitewide contract that involves several major projects costing billions of dollars for cleaning up some of its former facilities. In addition, Environmental Management has undertaken many large-scale individual projects. For example, the Hanford Tank Waste Treatment and Immobilization Plant project is an important part of the cleanup effort at Hanford, Washington. The project, which was initiated in December 2000, is intended to treat and prepare for disposal 55 million gallons of high-level radioactive waste by July 2011 at an estimated cost of $5.7 billion. NNSA’s mission is to meet national security requirements by, among other things, maintaining and enhancing the safety, reliability, and performance of the U.S. nuclear weapons stockpile, which includes maintaining the capability to design, produce, and test nuclear weapons. To fulfill this mission, NNSA undertakes such projects as refurbishing W-80 nuclear warheads to extend their operational lives. The W-80 refurbishment project was initiated in September 1998 and is expected to be completed in fiscal year 2017 at an estimated cost of about $2.45 billion.
The Office of Science’s mission is to deliver the remarkable discoveries and scientific tools that transform our understanding of energy and matter and advance the national, economic, and energy security of the United States. To fulfill this mission, the Office of Science has constructed specialized scientific research facilities, such as the Spallation Neutron Source at the Oak Ridge National Laboratory. This project consists of an accelerator system that delivers short (microsecond) pulses to a target/moderator system where neutrons are produced by a nuclear reaction process called spallation. This project is designed to provide the next-generation spallation neutron source for neutron scattering and related research in broad areas of the physical, chemical, materials, biological, and medical sciences. The Spallation Neutron Source project began in October 1998 and is expected to be completed in June 2006 at an estimated cost of about $1.4 billion. DOE’s principal official responsible for the execution of a major project is the federal project director, who is located at the project site and is supported by project managers. The project director is responsible for overseeing a project’s design, execution, budgeting, and performance. For contracts with award fee provisions, senior DOE program office managers consult with contracting and project officers to assess a contractor’s performance and determine the appropriate award fees. In addition to the contract management problems our prior reports have identified, a recent series of reports by the National Research Council of the National Academies identified weaknesses in DOE’s project management. The council’s 2004 report cited several factors that have contributed to the slow pace of project management improvements and resulted in inconsistent project performance.
These factors include the desire of DOE site office personnel and contractors to be independent of oversight from DOE headquarters, insufficient support for training, inadequate numbers of DOE project managers to oversee contractors’ performance, and the absence of a champion for project managers and process improvement who has the authority to ensure both adherence to policies and procedures and the availability of necessary funding and personnel resources. During the past year, DOE has continued to implement contracting and project management reforms. In particular, in December 2003, the Secretary of Energy appointed an Associate Deputy Secretary with responsibility, among other things, for both contract and project management, addressing a key National Research Council concern. DOE also entered into an agreement with the Defense Contract Management Agency, within the Department of Defense, to support the certification of contractors’ project management systems. More recently, DOE is developing an action plan in response to the Civil Engineering Research Foundation’s assessment of departmental project management that recommended that DOE, among other things, develop a core group of highly qualified project directors, require peer reviews for first-of-a-kind and technically complex projects when the projects’ preliminary baselines are approved, and enhance PARS by making the data more timely. Furthermore, to improve its contract award process, DOE revised its Acquisition Guide by adding chapter 16, which lists the various contract types available and discusses their respective advantages and constraints. To address future skill gaps in its procurement organization, DOE established an acquisition career development program and has certified 90 percent of its procurement professionals as attaining mandatory training and experience standards under this program. 
Within the Office of Environmental Management, a series of contract and project management improvements has occurred, including, but not limited to, providing additional training and managing more of the cleanup work as projects. Within the Office of Contract Management, a series of contract award and administration initiatives has been completed. These initiatives include, among other things, strengthening contract competition policies and practices, improving acquisition workforce effectiveness, increasing small business utilization throughout DOE, and strengthening DOE management and fiscal effectiveness. For fiscal year 2005, the Office of Contract Management has multiple initiatives planned, including identifying and implementing follow-on actions related to the DOE management challenge pertaining to contract competition. Because many of DOE’s major projects are first-of-a-kind and thus involve substantial risk, DOE’s contracting decisions can be critical to the successful completion of its major projects. However, DOE could use performance incentives more effectively for controlling costs and schedules for its major projects if the department developed criteria for using different performance incentives and assigned responsibility for reviewing a contract’s project management provisions prior to award. For example, DOE has used contracts that have a technical, schedule, or other performance incentive without an associated cost incentive or cost constraint (other than the annual funding level for the contract). DOE also has used cost-plus-incentive-fee contracts without certifying that contractors’ project management systems generate reliable cost and schedule data for measuring performance and awarding fees.
In addition, we found that the contract incentives for most of the 25 major environmental restoration projects substantially differ from the “Gold Chart” performance metrics that Environmental Management uses to assess its performance and report its progress to the Congress. Furthermore, for 11 major projects that are components of the environmental cleanup of a DOE facility, Environmental Management has not directly linked incentive fees to the successful completion of the project, generally because the project is part of the contractor’s larger cleanup responsibility. Finally, while Environmental Management has decided that incentive fee determinations would consider only contractor activities directly related to cleanup work, NNSA has, for at least 1 of its major projects, considered a contractor’s indirect work-related activities in awarding incentive fees. Despite efforts in recent years to improve contract and project management, DOE has not fully developed performance incentive guidance to effectively control costs and maintain schedules. DOE has issued the following guidance, order, and manual that are applicable to the contract award process for major projects and that supplement the FAR and the DOE Acquisition Regulation: In the late 1990s, DOE issued its Acquisition Guide to, among other things, supplement the FAR and the DOE Acquisition Regulation and be a repository of best practices found throughout the department. Chapter 16 of the guide discusses contract types; however, the chapter notes that it was not intended to provide a template for matching a contract type to given contracting situations. While the guide’s index shows that chapter 34 is reserved for guidance to contracting officials related to major projects, DOE has never drafted the chapter, according to the DOE official responsible for maintaining revisions to the Acquisition Guide. 
In October 2000, DOE issued Order 413.3, “Program and Project Management for the Acquisition of Capital Assets,” to ensure that capital assets, including major projects, would be delivered on schedule, within budget, and fully capable of meeting mission needs. To accomplish these goals, the order states, in part, that DOE officials are to develop an acquisition plan during the acquisition process that includes such elements as contracting options and a contractor incentive process. The order, however, does not elaborate on the possible contracting and performance incentive options. In March 2003, DOE issued manual 413.3-1, “Project Management for the Acquisition of Capital Assets,” to improve the implementation of DOE Order 413.3. The manual addresses various activities, including a chapter on contracting that contains no direct reference to major projects. The chapter states that the type of contract and incentives proposed should be based on an overall view of the principal risks to the project and provides a limited discussion of the types of contracts available. For example, it states that fixed-price contracts are not appropriate for research and development efforts or other complex projects where there is a high degree of uncertainty in the execution or DOE requirements. While the chapter mentions that DOE generally uses a cost-plus-award-fee contract for contractors managing and operating DOE sites, it does not address the other available contract types. Furthermore, DOE has not used its Acquisition Guide to identify best practices, or lessons learned, based on its major project contracting experiences. In our view, given DOE’s long history with major projects, considerable information could be added to this guide detailing those major project contracting approaches that worked and those that did not. Improved guidance could help DOE better control costs and maintain schedules for its major projects.
Neither the Office of Contract Management nor the Office of Engineering and Construction Management always reviews the project management provisions of major project contracts prior to award to ensure that the performance incentives are appropriately used. At the heart of this problem is confusion over responsibility. The Director of the Office of Contract Management and the Director of the Office of Engineering and Construction Management each believe that the other office has headquarters responsibility for reviewing the project management provisions of contracts prior to approval. The confusion exists because the chapter in DOE’s Acquisition Guide on the headquarters review of contract and financial assistance actions is silent on the role of the Office of Engineering and Construction Management in the review process. This chapter indicates that packages pertaining to contract actions will be sent to nine different DOE offices for review, none of which is the Office of Engineering and Construction Management. As a consequence, if this office has a role in the contract review process, it has not been clearly defined. According to the Director, Office of Contract Management, the Office of Engineering and Construction Management should be responsible for reviewing the project management provisions in major project contracts because of its responsibility for project management matters. The director told us that his office typically reviews from 60 to 70 pending contract actions each year, and these reviews follow a general approach looking at any matters that might affect timing, delivery, and cost—but no specific, formalized list is followed. According to the Director of the Office of Engineering and Construction Management, his office reviews certain documentation that could affect which company is selected for a contract, but his office has no role in reviewing the actual provisions of the contract. 
While the Office of Contract Management sends contract proposals to the Office of Engineering and Construction Management for review, the director noted that his office has only one staff person with contracting experience. The director believes the solution to improving the review of major project contracts is for contracting officials within the Office of Contract Management to become more familiar with earned value management, a DOE contracting requirement for integrating and measuring a contractor’s performance. For many of the 33 major projects we reviewed, DOE has used performance incentives that limit its ability to effectively control cost and schedule performance. For example, almost all of DOE’s cost-plus-award-fee contracts for major projects have included a performance incentive without also using an associated cost incentive or cost constraint (other than the annual funding level for the contract). Also, DOE has used cost-plus-incentive-fee contracts without certifying that contractors’ project management systems generate reliable cost and schedule data for measuring performance and awarding fees. We also found that (1) Environmental Management’s contracts included environmental cleanup performance incentives that differed substantially from its new Gold Chart performance metrics; (2) DOE did not always link its fee awards to contractors’ performance on major projects; and (3) DOE’s program offices have treated indirect work-related activities, such as providing timely and accurate reports to DOE, differently in determining the contractors’ incentive award fees. For 15 of the 17 major projects that use a cost-plus-award-fee contract, the contract contained a technical, schedule, or other performance incentive without including an associated cost incentive or cost constraint (other than the annual funding level for the contract). 
Under such circumstances, the potential exists that a contractor could meet all incentives and overrun baseline costs but still receive full fees. The other 2 major projects used a cost-plus-award-fee contract that included an associated cost incentive or cost constraint for each technical, schedule, or other performance incentive. The FAR, the DOE Acquisition Regulation, and DOE guidance preclude the inclusion of a schedule or other performance incentive without also including a cost incentive or cost constraint. FAR § 16.402-1 states that no incentive contract may provide for other incentives without also providing for a cost incentive or cost constraint. Similarly, DOE Acquisition Regulation § 970.5215-3 provides that requirements incentivized by other than cost incentives must be performed within their specified cost constraint. DOE’s Performance-Based Contracting Guide, dated October 2003, states that (1) cost incentives should be included if other incentives are included because a schedule or other performance incentive may result in the contractor paying little attention to the cost of achieving those incentives unless cost is also a consideration and (2) DOE contracts, in developing incentives and incentive programs, must comply with the incentive contract provisions of the FAR and the DOE Acquisition Regulation. The Director of the Office of Contract Management told us that to implement the FAR requirement to include a cost incentive or cost constraint whenever a noncost incentive is in the contract, each noncost incentive does not necessarily need an associated cost constraint dedicated to that noncost incentive. According to the director, a single cost constraint, which could be equivalent to the project’s annual funding level, would fulfill the FAR requirement. 
However, DOE contracting officials at Oak Ridge, West Valley, and Savannah River believe that to implement the FAR and DOE Acquisition Regulation requirements in a way that effectively controls costs, a contract with a technical, schedule, or other noncost incentive should also have an associated cost incentive to function as a constraint on the expenditure of funds. One of these officials added that as the noncost incentives become more objective and measurable, the cost constraint should be more clearly defined in relation to each noncost incentive. Similarly, another one of these officials told us that using the annual funding level or the project’s cost baseline as the constraint is too vague and unworkable, and that some funding levels and cost baselines do not track down to the performance incentive level. As a result, neither the funding level nor the cost baseline would indicate whether the performance incentive was accomplished within the cost constraint. These views are consistent with the findings from DOE’s 1997 assessment of performance-based incentives, which found that DOE’s and contractors’ financial systems generally are budget-based and do not segregate and track costs at the performance incentive level. The assessment added that this limits DOE’s ability to establish meaningful cost baselines and to monitor the cost of performance under specific incentivized work efforts in relation to the total cost of the contract. For 13 of the 33 major projects we reviewed, DOE used a cost-plus-incentive-fee contract that provides the contractor with an initially negotiated fee that is subsequently adjusted by a formula based on the relationship of total allowable costs to total target costs. The formula provides, within limits, for fee increases when total allowable costs are less than target costs. In recent years, DOE has made a major effort to move toward the use of cost-plus-incentive-fee contracts. 
Because a cost-plus-incentive-fee contract provides higher fee awards to the extent that actual costs are lower than anticipated, it depends upon reliable cost estimating at the outset in the form of a target cost and reliable cost reporting later. In July 1997, the Office of Management and Budget (OMB) issued requirements regarding the acceptability of contractors’ project management systems. However, DOE has not certified the reliability of contractors’ project management systems that generate the target cost data for the 13 major projects. As a result, a contractor might receive a high fee payment because its project management system generated an unreliable high initial cost estimate and subsequently reported lower actual costs. A U.S. Army Corps of Engineers’ report, issued in May 2004, concluded that it was not appropriate to use a cost-plus-incentive-fee contract for the Hanford Tank Waste Treatment and Immobilization Plant project, in part because reliable cost data could not be generated in advance. Furthermore, DOE site personnel may not provide adequate surveillance of the contractors’ cost records for these 13 projects. According to DOE’s Performance-Based Contracting Guide, it is inappropriate to use a cost-plus-incentive-fee contract if there is an overreliance on contractor accounting systems and contractor-collected data without significant validation of those data. In such situations, the guide states, any potential cost savings reported might be the result of a poor estimate of the amount of labor or material required, the approach planned, or the associated costs. The Office of Contract Management’s self-assessment of contract administration in 2002 found that most of the DOE field locations visited relied almost exclusively on the contractors’ data because they did not have the staff resources capable of validating cost or technical baselines. 
The report, however, did not identify the DOE field locations visited, and, according to an Office of Contract Management official, no individual field location reports were prepared. For 16 of the 25 major environmental restoration projects that we reviewed, the contracts’ performance incentives differed substantially from the Gold Chart performance metrics that Environmental Management uses to assess its performance and report its progress to the Congress. Environmental Management developed the Gold Chart performance metrics in October 2002 as a basis for clearly and objectively showing the progress being made in the environmental cleanup program. We found, however, that these Gold Chart metrics were not being used to measure contractors’ performance or award fees. Instead, DOE measures performance and awards fees on the basis of information from the contractors’ project management systems, which DOE has not yet certified as capable of producing reliable information. For 4 projects at the Fernald Closure Site in Ohio, a lower performance fee might have been appropriate if the Gold Chart metric had been used. For fiscal year 2003, DOE awarded the contractor about $7.7 million of the $8 million in available fee, or 97 percent, on the basis of acceptable cost and schedule performance toward closure of the entire site during fiscal year 2003. However, according to the fiscal year 2003 Gold Chart metrics, the goal for the Fernald Closure Project was to accomplish four radioactive facility completions and dispose of 2,568 cubic meters of radioactive waste. According to Environmental Management information, the contractor did not fully complete one of these tasks. Because the contractor accomplished only three of the four radioactive facility completions, Environmental Management might have given a different fee amount if the two Gold Chart metrics had been used to determine award fee. 
Conversely, a different fee amount might have been warranted for the Solid Waste Stabilization and Disposition project at Hanford, Washington. For fiscal year 2003, DOE awarded the contractor about $2.2 million of about $3 million in available fee, or 73 percent, on the basis of the contractor’s disposal of radioactive waste in accordance with an approved schedule that DOE determined the contractor had met. In contrast, Environmental Management data for fiscal year 2003, using Gold Chart metrics, show that the contractor actually disposed of 3,634 cubic meters of waste as compared with a goal of disposing 2,320 cubic meters of waste, or about 157 percent of the work intended. If the Gold Chart metrics had been used to determine the award fee, the contractor might have received a different fee amount. For the Spent Nuclear Fuels project, at Hanford, Washington, the Gold Chart metric and the contract’s performance incentive were so dissimilar that it was difficult to determine how to gauge the contractor’s performance. For fiscal year 2003, DOE awarded the contractor about $2.8 million of about $3.3 million in available fee, or 85 percent, on the basis of the contractor’s removing 777 metric tons, or 87 percent, of the 890 metric tons that had been planned. However, Environmental Management data for fiscal year 2003, using the Gold Chart metrics, show the contractor removed 805 units, or 94 percent, of the goal’s 855 units. Because the Gold Chart metric and the contract’s performance incentive were so dissimilar, we could not reconcile the information. Environmental Management officials told us that the performance incentives contained in environmental cleanup contracts and the Gold Chart metrics should be aligned. In commenting on the draft report, Environmental Management officials stated that the new Savannah River cleanup contract incorporates Gold Chart metrics. 
They added, however, that the contract renewals for the Oak Ridge, Fernald, and Rocky Flats facilities do not contain the Gold Chart metrics because each is a cost-plus-incentive-fee contract that awards fee based on the final closure costs and date for the site. It is unclear whether these cost-plus-incentive-fee contracts will more effectively track contractors’ performance because they rely on contractors’ project management systems that DOE has yet to certify. In contrast, the Gold Chart metrics assess the accomplishment of discrete amounts of work that is verifiable. In 1996, we reported that a key factor inhibiting the successful completion of DOE’s major projects was the lack of effective incentives. To the extent that incentives are properly applied, they can help achieve agency goals. On the other hand, if incentives are nonexistent or not effectively applied, a project may not be successfully completed. Sixteen of the 33 major projects we reviewed had no incentive fees directly associated with the successful completion of work. Nine of these 16 projects involve closure work at the Fernald and Rocky Flats sites, where the payment of incentive fees is based on an overall average of the cost and schedule status for all site closure activities, including major projects and other site activities. Environmental Management officials told us that rather than awarding incentive fees specifically for completing any of the 9 major projects, or for other key interim milestones, the Fernald and Rocky Flats contracts award provisional incentive fees for meeting or exceeding overall targets for a fiscal year, provided the contractors successfully achieve site closure on schedule. However, it remains to be seen whether this approach will be effective in completing major projects on time and within cost. 
For example, although a major project at the Fernald site that we reviewed was experiencing cost growth to the point where it was expected to exceed its cost baseline—the total cost estimate to accomplish the project—DOE considered the overall average of the cost and schedule status for all site activities at Fernald to be acceptable and paid the contractor provisional incentive fees for fiscal year 2003. Similarly, a major project at the Rocky Flats site had overrun its estimated cost by about $42 million through fiscal year 2003. However, this overrun was offset by an underrun of about $46 million in activities such as general counsel work and planning and integration that, according to DOE information, had historically been understaffed. The net effect was that DOE paid the contractor provisional incentive fees because the contractor’s overall cost and schedule status for fiscal year 2003 was considered to be acceptable. In addition to these other contracting problems, we found that DOE program offices treated indirect work-related activities differently in awarding incentive fees. In late 2002, Environmental Management decided that award fee determinations will consider only contractor activities directly related to cleanup work, while excluding such indirect work-related activities as providing timely and accurate reports to DOE, providing support services to the government, and complying with the contract because these activities are basic expectations of any contractor. Environmental Management made this determination after its review of contractors’ authorized fee incentives identified numerous examples of incentive fee payments for indirect work-related activities. The review also found that Environmental Management was paying some contractors additional fees for performing work safely that the review concluded was a basic expectation, and not exceptional performance worthy of additional fee. 
NNSA has not conducted a review similar to Environmental Management’s assessing what, if any, indirect work-related activities are worthy of incentive payments. The contractor for one NNSA major project received incentive fee payments for providing timely and accurate reports to DOE and other indirect work-related activities during fiscal year 2003. Discrepancies in the treatment of various indirect work-related activities have occurred because DOE’s guidance does not address the appropriateness of including a contractor’s performance of indirect work-related activities in determining incentive fee awards. In commenting on the draft report, Environmental Management expressed concern that it would be virtually impossible to develop meaningful guidance that could be applied universally to DOE’s diverse programs. We disagree. We believe that all DOE programs should use incentive fees to reward contractors for achieving work-related activities, as opposed to such indirect activities as providing the DOE programs with timely reports. Because most of DOE’s operations are carried out through contracts, contract administration is a significant part of DOE’s work. DOE has relied on unvalidated contractor data to monitor contractors’ progress in executing major projects and awarding fees for performance. This reliance on unvalidated data limits the department’s ability to ensure it gets what it is paying for. Specifically, DOE’s self-assessments of its contract administration in 1997 and 2002 both found that field personnel overly relied on contractor accounting systems and contractor-collected project data without significant validation of these data. However, unlike the 1997 self-assessment, the one in 2002 made no recommendation to fix this problem, and no subsequent self-assessment has been initiated to determine if the problem has continued. 
DOE has begun to certify the reliability of contractors’ project management systems that generate the performance data used to monitor contractors’ progress; however, the department has no timetable for the completion of this certification program. In addition, DOE has not required its contracting officers and contracting officer representatives to receive training in earned value management—a systematic approach for integrating and measuring cost, schedule, and technical (scope) accomplishments on a project or task—even though these officials are required to determine whether contractors’ project management systems meet the private industry’s earned value management standard. Self-assessment is an important tool for evaluating organizational effectiveness. By taking a comprehensive look at itself, an organization can identify weaknesses and plot a course of corrective action. DOE performed comprehensive self-assessments of its contract administration practices in 1997, 1999, and 2002. In 1997, DOE assessed 20 contracts to ensure that financial incentives contained in those contracts were rational, linked to well-defined performance objectives and measures, and properly administered. The self-assessment reported both positive and negative findings. For example, it found that the use of performance-based objectives generally had been effective in directing contractors’ management attention to desired performance outcomes. However, it also found that field personnel overly relied on contractor accounting systems and contractor-collected data without significant validation of these data, and that DOE’s approval of fees earned by the contractors relied upon contractor-generated documents. 
To correct this deficiency, the self-assessment recommended (1) that the cognizant DOE heads of contracting at each field location, as part of their overall contract administration plan, identify the mechanisms, responsibilities, and authorities for ensuring that contractor performance against established objectives is appropriately monitored and (2) that performance achievements are verified. In 1999, DOE’s follow-up assessment of the effectiveness of the actions taken in response to the 1997 self-assessment found that the recommendation that contractor performance be monitored and achievements verified had been implemented. Specifically, field offices reported that their plans for administering contracts had been appropriately modified and instituted. In addition, the follow-up assessment stated that (1) early results indicated a substantial improvement in the way incentives were being managed from DOE headquarters and administered at DOE field contracting offices and (2) anecdotal evidence suggested that contractor performance had improved. In 2002, the Contract Administration Division again performed a self-assessment that examined, in part, how contract administration planning and execution were conducted at various DOE field locations. The findings and conclusions of this review were somewhat inconsistent with those of the 1999 follow-up assessment. The 2002 review, like the 1997 assessment, determined that few sites had the resources capable of validating contractor cost or technical information and most sites must rely almost exclusively on the contractor’s data. The review noted, in one instance, that financial data provided by the contractor were generally accepted by DOE, not on the basis of reasonableness and allowability, but on the basis of the contractor’s “acceptable” self-assessment of the procedures used to collect those data. 
However, unlike the 1997 assessment, the 2002 review contained no specific recommendation to correct this overreliance on contractor data. According to the Director of DOE’s Contract Administration Division, because of funding constraints and other factors, no broad self-assessment of contractor administration has been done since 2002. The director added that DOE now conducts individual site assessments as necessary rather than conducting more comprehensive assessments. According to information provided to us in April 2004, the last individual site assessment was made in August 2003 and documented in December 2003. This site assessment identified problems similar to those reported in the 2002 self-assessment. Specifically, the site assessment noted that, with respect to one contract reviewed, there was no evidence of effective cost controls and/or contract management. The site assessment contained no formal recommendation to fix this problem. On the other hand, the site assessment contained a recommendation to address the high rate of expenditure on this contract over the remaining 2-year option period. The assessment recommended that the DOE site office review the scope and cost of its current task orders for prioritization and inclusion in the remaining option term. In August 2003, DOE began to certify the reliability of contractors’ project management systems that generate the performance data used to monitor contractors’ progress. However, as of December 2004, the department has assessed and certified project management systems for only 2 of the 33 major projects we reviewed and does not have a timetable for completing this certification program. In commenting on the draft report, DOE noted that both Environmental Management and the Office of Engineering and Construction Management have been validating contractors’ cost and schedule performance baselines for several years. 
In our view, DOE validation of contractor baselines will not fully address the problems that have been identified. Validating baselines is just the first step in performing adequate contractor oversight. After baselines have been validated, DOE must not overly rely on contractor accounting systems in reporting costs and on contractor-collected project data in awarding fees. That is the message from two DOE self-assessments of performance-based contracting. With respect to DOE’s experience in baseline validation, the Civil Engineering Research Foundation’s July 2004 report for the Office of Engineering and Construction Management found that some improvements in baseline validation were needed. This report noted that many of the DOE projects it reviewed were formulated with inadequate baseline estimates. In addition, the report stated that periodic baseline changes were occurring that masked the true status of certain projects. The report recommended that DOE develop guidelines that appropriately control the rebaselining of projects. DOE further stated that the promulgation of contract management planning guidance and the requirement for a contract management plan addressed many of the issues that the 2002 self-assessment identified. However, in our view, until a subsequent assessment is done, it remains unclear whether this DOE action has adequately resolved the issues identified in the 2002 self-assessment. For fiscal year 2005, DOE is planning to examine the contract management plans and contractors’ purchasing systems. During the early 1990s, OMB issued several reports on civilian agencies’ contract administration practices that found that agencies frequently experienced cost overruns and delays in receiving goods and services because their contracting officials allocated more time to awarding contracts than to administering existing ones. 
In response, OMB revised its Circular A-11 to require that federal agencies assess and certify contractors’ project management systems for proper use of earned value management principles. OMB also identified several other deficiencies, including a lack of proper training for agency officials performing contract oversight. According to administrators at the National Aeronautics and Space Administration, earned value management training is essential for their contracting officers to adequately assess whether a contractor’s project management system complies with the private industry’s standard. We found that, with the exception of NNSA, DOE has not required its contracting officers or contracting officer representatives to receive earned value management training, even though they are responsible for determining whether the contractor’s project management system complies with the private industry’s earned value management standard after the contract is awarded. The following three DOE documents contain the contracting officer’s responsibilities, the standards against which those responsibilities are to be discharged, and the training requirements for contracting officers: Chapter 1 of DOE’s Reference Book for Contract Administrators, issued in 2000 and in effect through October 2004, outlines the contracting officers’ many responsibilities, including a review of the adequacy of the contractor’s project management system. The reference book states that the system’s adequacy must be confirmed by the contracting officer with the support of other DOE headquarters and field office personnel, as appropriate. The reference book also indicates that corrective action plans resulting from DOE reviews of contractor project management systems are to be tracked until the DOE contracting officer confirms that all open issues are closed. 
DOE Order 413.3, “Program and Project Management for the Acquisition of Capital Assets,” also issued in 2000, specifies that contractors’ project management systems must comply with the American National Standards Institute’s standard on earned value management. The order states that this requirement applies only to systems involved in controlling the performance of projects costing more than $20 million in total. The order also requires that contractors’ systems provide cost and schedule performance, milestone status, and financial status to DOE on a monthly basis. DOE Order 361.1A, “Acquisition Career Development Program,” issued in April 2004, outlines the training and certification requirements for DOE contracting officers and contracting officer representatives. The order identifies a training curriculum for these officers by functional area—including, among others, procurement contracts; interagency agreements and sales contracts; grants and cooperative agreements; loans and loan guarantees; and the government purchase card. The order, however, does not require either the contracting officer or the contracting officer representative to receive earned value management training. The Director of the Contract Administration Division corroborated our assessment of DOE’s order for acquisition career development. The director noted that the only reference to earned value management training in DOE Order 361.1A requires that DOE project directors, not contracting officers, complete a course on earned value management systems. Without this training, however, it is unclear how DOE contracting officers and contracting officer representatives can fulfill their responsibilities and properly assess the adequacy of the project management systems of departmental contractors. 
In providing us with exit conference comments, DOE Office of Contract Management officials acknowledged that contracting officers do have a responsibility in the area of earned value management and will be receiving training on that subject in the future. Subsequently, in December 2004, DOE provided contracting professionals at DOE headquarters with a 1-hour course on earned value management. DOE said that this training session, which was video recorded, is being required nationwide for all DOE contracting officials. As opposed to this 1-hour course, we noted that NNSA requires its contracting officials to participate in a 48-hour course on the fundamentals of earned value management. The reliability of the project cost and schedule data that PARS provides to senior DOE managers is limited by problems with the data’s accuracy, completeness, and timeliness. In general, the accuracy of PARS report data is uncertain because DOE (1) has assessed the reliability of contractors’ project management systems for only 2 of the 33 major projects we reviewed, (2) generally measures projects’ cost and schedule performance in PARS against the current DOE-approved cost and schedule baselines without also tracking performance against the original targets, and (3) has not provided most of its major project directors with the training needed to ensure contractors are generating accurate performance data. PARS report data are not complete because DOE program offices have not submitted performance data to PARS for 3 major projects, as well as at least 2 smaller projects, and PARS reports do not provide each project’s estimated cost at completion or other helpful, forward-looking data. In addition, the Office of Engineering and Construction Management stated that the June 2004 PARS report’s performance data for 6 major projects and 5 smaller projects were significantly out of date, primarily because contractors did not provide updated project performance information. 
Senior managers have used PARS data to take actions that averted cost increases for certain projects that were experiencing cost or schedule challenges. Without reliable data, however, PARS has not provided senior managers with information about cost increases and schedule slippages for many projects, and the status of many other projects is uncertain. Three factors impair the accuracy of cost and schedule data reported in PARS. First, DOE officials told us they have little assurance that the cost and schedule data for most projects in PARS are accurate because DOE has not assessed the reliability of contractors’ project management systems that generate such data, particularly those systems believed to be using incorrect methods. Second, for almost all projects, PARS reports compare cost and schedule performance against DOE’s current baselines, without identifying the extent of cost or schedule slippages that previously occurred. Third, most DOE project directors lack the necessary training to evaluate and verify the accuracy of the performance data that contractors generate, according to DOE officials. OMB Circular A-11 and DOE Order 413.3 require that DOE assess and certify contractors’ project management systems for proper use of earned value management principles in generating cost and schedule performance data before the department approves a project’s cost and schedule baseline at its Critical Decision 2 milestone. Earned value management, when used correctly, produces data that reflect a contractor’s progress toward completing a project within cost and schedule targets. In essence, earned value management measures the value of work completed against the cost and schedule of work planned, as opposed to comparing actual with planned expenditures. To illustrate, assume a contract calls for 4 miles of railroad track to be laid in 4 weeks at a cost of $4 million. After 3 weeks of work, assume $2 million has been spent. 
By analyzing planned versus actual expenditures, it appears the project is underrunning the estimated costs. However, an earned value analysis reveals that the project is in trouble because even though only $2 million has been spent, only 1 mile of track has been laid; thus, the contract is only 25 percent complete. On the basis of the value of work done, the project will cost $8 million ($2 million to complete each mile of track), and the 4 miles of track will take a total of 12 weeks (3 weeks for each mile of track) to complete instead of the originally estimated 4 weeks. To ensure correct application of earned value management principles, contractors must develop budgets and schedules based on measurable components of a project, which include defined start points, end points, and scopes of work. In addition, contractors must calculate the value of work performed against the budgets and schedules for the measurable project components. Experts in earned value management told us that without defined start and end points and other measurable project components, project performance data give little insight as to whether cost and schedule performance are on track, and the data might mask more serious problems. DOE’s Office of Engineering and Construction Management and the Defense Contract Management Agency assess whether a given contractor’s project management system properly uses earned value management principles by examining whether the contractor’s system complies with the industry standards and verifying that the contractor is using the system to manage the project. Once a contractor has fully addressed the concerns identified by the assessment, DOE is to certify the project management system, attesting that project performance data—data that convey progress toward the approved cost and schedule targets—are generated reliably. 
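The track-laying arithmetic above can be expressed with the standard earned value indices. The sketch below uses our own function and variable names (not DOE's), and it assumes a simple linear projection of the remaining cost and duration:

```python
# A minimal earned value calculation for the report's railroad-track
# illustration: a $4 million, 4-mile, 4-week project with $2 million
# spent and 1 mile of track laid after 3 weeks.

def earned_value_status(bac, planned_pct, actual_cost, complete_pct,
                        planned_duration):
    ev = bac * complete_pct    # earned value: budgeted cost of work performed
    pv = bac * planned_pct     # planned value: budgeted cost of work scheduled
    cpi = ev / actual_cost     # cost performance index
    spi = ev / pv              # schedule performance index
    eac = bac / cpi            # estimate at completion (linear projection)
    est_duration = planned_duration / spi
    return cpi, spi, eac, est_duration

cpi, spi, eac, weeks = earned_value_status(
    bac=4e6, planned_pct=3/4, actual_cost=2e6,
    complete_pct=1/4, planned_duration=4)
print(eac, weeks)  # $8 million at completion over roughly 12 weeks
```

A planned-versus-actual comparison alone ($2 million spent against $3 million planned) would suggest an underrun; the CPI of 0.5 reveals the project is on course to double its budget.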
Assessment and certification of contractors’ earned value management systems are critical components of DOE’s management of its performance-based contracting, according to DOE earned value management training documents. While only three systems have been assessed since August 2003, Office of Engineering and Construction Management officials told us that they and the Defense Contract Management Agency, working together, could assess the project management systems for about 10 contractors in a given year now that they are becoming more familiar with the process. In August 2003, the Office of Engineering and Construction Management and the Defense Contract Management Agency began the process of assessing contractors’ project management systems as a basis for certifying that they properly use earned value management principles. In September 2004, DOE certified Sandia National Laboratories’ project management system for 1 major project, the Microsystems and Engineering Sciences Applications project, and 6 smaller projects. DOE also plans to certify Oak Ridge National Laboratory’s project management system for another major project, the Spallation Neutron Source, once minor deficiencies are corrected. Overall, however, DOE has assessed project management systems for only 2 of the 33 major projects we reviewed—and 8 of the 73 projects in PARS—that have passed Critical Decision 2 with DOE-approved cost and schedule baselines. (The remaining 65 projects in PARS whose systems have not been assessed have baseline costs of nearly $75 billion.) According to an Office of Engineering and Construction Management official, the first three contractors’ systems were selected for assessment on the basis of visibility, significance, and criticality to the Department’s success, but also because cognizant DOE officials were confident that the contractors’ project management systems would meet certification criteria.
The National Research Council’s 2004 report on DOE’s project management found that the quality of earned value management across the department’s projects was inconsistent and stated that senior DOE managers do not know whether the reported data on cost and schedule performance are accurate unless contractors’ systems are assessed and certified. Because DOE has only recently begun to assess contractors’ project management systems that feed data into PARS, DOE officials acknowledged to us that they lack assurance regarding the accuracy of PARS performance data, adding that they believe some of the project management systems not yet assessed have important deficiencies. For example, a DOE expert in earned value management noted that contractors for most Environmental Management projects—about half of the projects in PARS that have passed Critical Decision 2—have not properly implemented earned value management principles because, among other things, many of the projects’ components lack defined start and end points. In particular, the earned value management expert believes, on the basis of his assessment of work breakdown structures and other project components, that the contractor’s project management system for the $10-billion Yucca Mountain Nuclear Waste Repository project does not properly use earned value management principles and generates performance data that cannot be regarded as accurate. Consequently, senior DOE managers have no assurance that cost and schedule targets will be met, even if the data suggest they will. Similarly, for several major projects we examined, the contractors’ project management systems do not seem to properly implement earned value management principles to measure cost and schedule performance. For example, the $2-billion East Tennessee Technology Park project at Oak Ridge lacks measurable project components.
In some instances, work is categorized into activities such as “general operations” and “contractor operations” that have no apparent defined start and end points. According to the expert in earned value management, the categories of work for this project make it difficult to accurately measure project performance because there is no clear activity or time frame against which to measure costs incurred or time spent. Instead, PARS data for this project seem to measure only the project’s expenditures, which can conceal information on the project’s cost and schedule status and progress toward completion. In addition, the $5.7-billion Tank Waste Treatment and Immobilization Plant at Hanford, Washington, lacks discrete, measurable project components because work is categorized into activities such as “providing technology” and “providing infrastructure” that lack defined start and end points. While we recognize that it is appropriate, according to industry standards, to categorize a small amount of work in this fashion, DOE project management officials said the particular categories of work in these instances reflected a poor comprehension of earned value management and limited their confidence in the assessment of project performance. Two Office of Engineering and Construction Management officials acknowledged that the accuracy of data for these projects is uncertain because DOE has not assessed whether the contractors’ project management systems properly applied earned value management principles. One of these officials suggested that the contractors’ project management systems for such projects should be assessed as soon as possible to correct deficiencies and improve the reliability of project performance data provided to senior managers to oversee progress toward cost and schedule targets. 
The Director of the Office of Engineering and Construction Management agreed that DOE should develop a schedule that would give priority to assessing these and other high-risk and high-cost systems. As of January 2005, a schedule had not been developed, but the director told us that he was in the process of doing so. The accuracy of the PARS report data is further impaired because PARS reports generally do not show total cost overruns and schedule slippages, even though DOE requires each project team to estimate life-cycle costs and assess project performance against established cost and schedule baselines. Instead, a project’s DOE project director updates the cost and schedule baselines in PARS when DOE approves a contract modification. As a result, PARS reports show relatively small variances between a project’s actual performance and its approved baselines, so that many of the projects we reviewed appear not to have experienced problems when, in fact, they did. For almost all projects, PARS reports do not provide data that would enable senior DOE managers to assess (1) a contractor’s performance against the project’s original DOE-approved baselines to identify total cost overruns and schedule slippages or (2) the effect of any DOE initiatives to control a project’s costs. The Civil Engineering Research Foundation’s July 2004 report similarly found that PARS cost and schedule data often do not convey the actual status of projects since their inception because of periodic revisions of cost or schedule baselines. Furthermore, for most Environmental Management projects, PARS measures project performance from arbitrary dates, such as the beginning of the fiscal year, which do not necessarily correspond to progress toward DOE-approved targets. 
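The masking effect of rebaselining described above can be illustrated with a small sketch. The figures and names below are our own, loosely modeled on the roughly 25 percent cost growth this report describes for one project; they are not actual PARS data.

```python
# When PARS measures performance only against the most recently approved
# baseline, an approved contract modification resets the yardstick, and
# the cumulative overrun against the original baseline disappears from view.

original_baseline = 400_000_000   # cost approved at Critical Decision 2 (illustrative)
revised_baseline = 500_000_000    # baseline after an approved contract modification
actual_cost_to_date = 505_000_000

# Variance against the current baseline: a small slip that looks "green."
variance_vs_current = actual_cost_to_date - revised_baseline

# Variance against the original baseline: the total cost overrun that a
# life-of-project view would show senior managers.
variance_vs_original = actual_cost_to_date - original_baseline

print(variance_vs_current)   # $5 million over the revised baseline
print(variance_vs_original)  # $105 million over the original baseline
```

In this sketch, a report keyed to the revised baseline shows a $5 million variance, while the project has actually exceeded its originally approved cost by $105 million, more than a quarter of the original baseline.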
The following examples illustrate how PARS has masked problems with projects by giving an incomplete picture of project costs or project performance: The January 2004 PARS report showed that the $1.6-billion Spent Nuclear Fuels Stabilization and Disposition project at Hanford, Washington, was on track to meet cost and schedule performance targets. However, by April, total costs for the project increased by nearly $150 million. DOE officials acknowledged that because the January 2004 PARS report to senior DOE managers measured only project performance from the beginning of the fiscal year, instead of against the DOE-approved baselines, the PARS report concealed longer term problems that threatened the project’s completion within costs. In October 2002, the Tritium Extraction Facility at Savannah River, South Carolina, had an approved total cost of about $400 million. Costs for the project increased more than $100 million by September 2003, and subsequent PARS reports showed that costs were on track to meet cost targets, despite the 25 percent increase in the project’s costs. In June 2004, Environmental Management restructured the PARS reporting for 4 projects at Oak Ridge, Tennessee, by combining their respective costs and schedules with those of other Oak Ridge projects. As a result, Environmental Management stopped reporting project performance data for each project, masking the fact that 2 of them, totaling about $300 million, were significantly behind schedule. Two Office of Engineering and Construction Management officials believe the projects should be reported separately because combining projects’ respective cost and schedule data can inhibit the correct use of earned value management. The April 2004 PARS report showed that the total cost of the Soil and Water Remediation project at Ashtabula, Ohio, would be $45 million, although the performance data indicated the project would not likely meet its baselines. 
However, this amount does not include about $109 million in expenditures on this project by October 1, 2003. Environmental Management reports this project’s total costs to be about $157 million—more than three times the amount reported in PARS. PARS reports that total project costs for the Nuclear Facility Deactivation and Decommissioning project at Columbus, Ohio, will be about $31.5 million. However, this amount does not include about $106 million in expenditures prior to 2004. Environmental Management estimates that this project’s total cost will exceed $163 million—more than five times the amount reported in PARS. The June 2004 PARS report showed that 90 percent of the 63 projects with approved baselines were expected to meet their cost and schedule baselines. However, this percentage may reflect project managers’ efforts to keep the projects’ baselines up to date rather than improvements in project management performance because PARS generally measures projects’ performance against the most current DOE-approved baselines. For example, as shown in table 1, the October 2002 PARS report’s assessment of 2 major projects was red because both projects were expected to breach their cost/schedule performance baselines. However, the September 2003 PARS report’s assessment of these major projects was green because total project costs were within the revised baseline that DOE had subsequently approved. The September 2003 PARS report did not indicate the extent to which each project’s total costs had exceeded the costs that DOE approved at Critical Decision 2 on the basis of an approved conceptual design report and acquisition strategy. In addition to these projects, the 90 percent figure includes many Environmental Management projects, whose performance is measured over time frames that do not necessarily reflect performance against DOE-approved baselines.
Further, the 90 percent figure does not reflect the 4 Oak Ridge projects whose performance data showed imminent performance problems before being combined with the performance data of other projects at the site. DOE officials told us that the monthly PARS reports are the primary tool for communicating project performance information to senior management. However, for many projects—particularly those overseen by Environmental Management—PARS does not report projects’ life-cycle costs or performance against original baselines, even though DOE requires each project team to estimate life-cycle costs and assess project performance against established cost baselines and schedule milestones. Office of Engineering and Construction Management officials acknowledged that reporting life-cycle costs and project performance against original cost and schedule baselines in PARS would make cost or schedule challenges easier to identify, and Environmental Management officials told us they plan to report life-cycle costs and project performance against original baselines in PARS reports beginning by December 2004. In addition to Environmental Management’s plans for PARS reporting, the Office of Engineering and Construction Management intends to make several upgrades to the PARS database, such as making the process for entering monthly data more efficient and easier for users to understand and ensuring that the correct data are being entered. Office of Engineering and Construction Management officials reported that they are in the process of implementing these improvements. However, these upgrades do not address the limitations to reporting accurate data that we identified. Furthermore, these improvements do not address limitations in the reliability of data stemming from contractors’ project management systems that have not been assessed or data that have not been reviewed.
Project directors are DOE’s focal point for assessing the contractors’ cost and schedule performance data that feed into PARS. However, most of DOE’s project directors have not been certified in earned value management, further reducing assurances that PARS data are accurate. Because DOE believes that it is critical for project directors to understand earned value management, the department informally designates its project directors as “acting directors” if they have not completed the project manager career development program, which includes training in earned value management. Office of Engineering and Construction Management officials told us that while some acting project directors are proficient in earned value management and capable of evaluating the reliability of contractor-generated data, other acting project directors are not. DOE recently implemented the project management career development program through which project directors are being trained in, among other things, earned value management. However, DOE had trained only about 25 percent of them through this program as of July 2004, with plans to train the remaining 75 percent by May 2006. A DOE official told us that the appropriate level of earned value management training for acting project directors depends on their experience in using earned value management. While DOE aims to assess project directors’ capabilities in earned value management to ensure that they are competent, validating the adequacy of prior earned value management experience for acting project directors has been time consuming. The lack of trained project directors reviewing the accuracy of a project’s performance data may, in some cases, adversely affect the ability of senior DOE managers to properly assess the status of major projects.
In addition to reporting data of questionable accuracy, PARS provides incomplete data; therefore, senior DOE managers may not be aware of the need to implement corrective actions to prevent cost overruns or schedule slippages. We identified the following 5 projects—3 major projects to refurbish nuclear weapons and 2 projects costing more than $100 million each—that are not in the PARS database, despite DOE’s requirement that projects costing more than $5 million provide monthly reporting: W80 Life Extension Program. NNSA recently increased the total cost of this program, designed to extend the service life of the W80 nuclear warhead by replacing components, from $1.3 billion to about $2.45 billion. W76 Trident Missile Life Extension Program. NNSA expects this project, designed to extend the service life of the W76 nuclear warhead by replacing components, to cost about $680 million over the next 4 years. B61 Alteration 357 Life Extension Program. NNSA expects this project, designed to extend the service life of the B61 bomb, to cost nearly $500 million. Our July 2003 report recommended that DOE improve its oversight of the life extension program’s cost and schedule status. Purple and BlueGene/L Supercomputers under the Advanced Simulation and Computing Program. NNSA expects this project to cost about $290 million and to be completed in 2005. Enterprise Project. NNSA increased the total cost of this project, which will replace the accounting and management systems at Los Alamos National Laboratory, from about $70 million when it was initiated in 2001 to nearly $160 million. The National Research Council’s 2004 report found that DOE has not acted in a timely fashion to include all projects costing more than $5 million in PARS. Office of Engineering and Construction Management officials told us DOE is still in the process of applying project management principles to many of the department’s operational activities.
While DOE’s program offices are responsible for converting these activities to projects, many of the program office personnel responsible for applying project management principles do not have the necessary training, according to an Office of Engineering and Construction Management official. While project management training is available, DOE has required only project directors and other senior-level employees to take this training. An Office of Engineering and Construction Management official told us this training would help expedite the application of project management principles to DOE’s operational activities. In addition, for many projects included in the PARS database, PARS reports do not provide important performance information that senior DOE managers need to assess the projects’ status. In some cases, project performance data are not reported because the project is incorrectly listed as being in the design phase when, in fact, it has passed Critical Decision 2. For example, contractors have spent almost half of the approved funds for 2 projects at the Idaho National Engineering and Environmental Laboratory projected to cost $4.3 billion without reporting performance data in PARS. The PARS reports show that these projects are still in the design phase and, therefore, are not subject to reporting performance data, but a DOE official acknowledged that both projects have, in fact, passed Critical Decision 3 and other subsequent milestones. As a result, senior DOE managers cannot rely on PARS for accurate and current performance information for these projects, nor can they rely on PARS to determine whether these projects require corrective actions. For these and other projects, PARS also lacks forward-looking data, such as scheduled work to be performed, the projects’ upcoming milestones, and the projects’ estimated cost at completion. 
Without such data, PARS cannot provide information on projects’ cost or schedule challenges, and DOE management does not have a basis for projecting progress or identifying trends. While not in PARS, this information is available from acting project directors. For example, although early cost savings for the Microsystems and Engineering Sciences Applications project at Sandia National Laboratories led to favorable performance data, DOE’s project director identified supply imbalances in the steel market that would increase the estimated construction costs. Using this information, the project director revised the project’s estimated total cost. Currently, PARS reports to senior DOE managers lack such forward-looking data that could alert them to future cost or schedule challenges. The National Research Council’s 2004 report stated that PARS reports should display forward-looking data to notify senior managers of upcoming milestones. In addition, several acting project directors told us that forward-looking data, such as data on estimated costs at completion, should be included in PARS to identify project performance challenges for senior DOE managers. To further illustrate this need, the total costs of some DOE projects are projected to increase dramatically in the future, despite PARS reports showing that they are expected to be completed on time and within budget. For example, PARS report data show that Hanford’s Tank Waste Treatment and Immobilization Plant is projected to meet the DOE-approved baseline of $5.78 billion. However, PARS does not show that DOE approved a $1.4-billion increase above the project’s original contract estimate of $4.35 billion in April 2003, nor does it show that the U.S. Army Corps of Engineers, in a May 2004 report, stated that project costs would probably exceed the $5.78-billion cost baseline by $720 million.
Even though the DOE project management teams knew of cost and schedule performance problems for the Tank Waste Treatment and Immobilization Plant project, PARS reports have shown that this project was on track for meeting cost and schedule targets. An Office of Engineering and Construction Management official told us that PARS monthly reports do not include forward-looking data and trend data to minimize the amount of time necessary for senior managers’ review. As a result, for this and other projects, PARS did not provide senior DOE managers with important information for analyzing potential future challenges. Forward-looking performance information, such as scheduled work to be performed and estimated cost at completion, would better enable senior managers to address project management challenges and minimize cost overruns or schedule slippages. Further compounding reliability concerns, we identified problems with the timeliness of PARS data that may limit the ability of senior DOE managers to effectively identify and apply corrective actions. Specifically, we found that cost and schedule performance data were significantly out of date at some time during our review for 8 of the 33 major projects we reviewed and 20 smaller projects in PARS that had passed Critical Decision 2. In these instances, data were out of date because DOE has not effectively enforced requirements that contractors produce updated monthly cost and schedule performance data, and that project directors ensure current performance data are reported into PARS. For some projects, the lack of up-to-date data masked problems that resulted in cost overruns and schedule slippages. For instance: The September 2003 PARS report showed that the Spent Nuclear Fuels project at Hanford, Washington, was on track to meet its DOE-approved total project cost of about $1.6 billion and its schedule completion date of 2007; however, these data were 3 months out of date.
Subsequently, the April 2004 PARS report (1) showed that total project costs had exceeded the project’s cost baseline by nearly $150 million and (2) indicated that the project would exceed this revised total cost and the scheduled completion date would slip. In June 2004, the contractor requested additional funding from DOE because both cost and schedule performance continued to worsen. The September 2003 PARS report showed that the K25/27 Buildings Deactivation and Decommissioning Removal project at Oak Ridge, Tennessee, was on track to meet its DOE-approved total project cost of about $265 million and its schedule completion date of 2008. However, the contractor did not update the project’s performance data until April 2004, when the PARS report showed the project would still meet its cost baseline. Environmental Management officials told us that although they knew for several months that the K25/27 project’s total cost would exceed its baseline, the PARS cost data were not updated because the project was being combined with 5 other Oak Ridge projects. The total cost of the K25/27 project could exceed $400 million—more than 50 percent above the DOE-approved total project cost. In June 2004, the Soil and Water Remediation project at Pantex, Texas, had a DOE-approved total project cost of about $175 million, but the Office of Engineering and Construction Management could not assess the project’s performance because data were not provided. Subsequently, the September 2004 PARS report showed that the project was at risk of exceeding its DOE-approved schedule target. In addition to these timeliness problems, the monthly data in PARS reports typically lag a project’s actual performance by 2 to 3 months because of the time contractors need to generate the data and the time DOE project managers need to review and incorporate the summary data into the PARS database. 
The 2004 National Research Council report stated that the lack of timely data prevents senior managers from using PARS to assess the performance of projects in real time. Similarly, Department of Defense officials familiar with project management have said that using such data to assess project performance is like “overseeing by looking through a rear view mirror” because performance problems have usually gotten worse by the time departmental managers become aware of them. We found that the Department of Defense requires all of its newer contracts to use electronic data interchange to provide more timely information to department program managers. In addition, some acting project directors told us that electronically linking PARS to contractors’ project management systems would improve timeliness because manually entering cost and schedule data into the PARS database had often resulted in delays of 2 to 3 months to complete the process. In some instances, data were entered incorrectly, although in each instance the data were corrected before being reported to senior managers. While the DOE project directors we contacted uniformly agree that manually entered data are correctly entered by the time PARS monthly reports are delivered to senior managers, electronically linking PARS to contractor systems could eliminate the potential for such errors and enhance senior managers’ ability to address potential cost or schedule challenges in real time. Alternatively, DOE might include a provision requiring timely monthly reporting in all applicable contracts. When data can be relied upon, DOE senior managers have taken corrective actions to address cost or schedule challenges while minimizing costs to the government. 
For example, NNSA terminated the Sandia Underground Reactor Facility project, which was intended to reduce the future operational costs associated with securing a reactor, when management learned that cost estimates had increased by more than 150 percent between project conception and the final design phase. The project was terminated before costs were incurred. In another instance, Environmental Management approved a contractor’s recovery plan to complete the Melton Valley Closure project at Oak Ridge, Tennessee, whose schedule performance had slipped dramatically and required corrective actions. The contractor lengthened work hours and modified its approach for constructing a subproject. As a result, the recovery plan showed that the scope of work could be accomplished without lengthening the project schedule. Since 1990, we have designated DOE’s contract management, which we have broadly defined to include contract administration and project management, as a high-risk area for fraud, waste, abuse, and mismanagement. Although DOE has implemented important contract administration and project management reforms, problems persist, and many major projects continue to experience millions of dollars in cost overruns and years of delays. Two deficiencies—the lack of contracting criteria for major projects and the lack of reviews of the project management terms in major project contracts—have resulted in questionable DOE contracting decisions that limit its ability to effectively control cost and schedule performance. For example, many of DOE’s contracts for major projects have used a technical, schedule, or performance incentive without an associated cost incentive or cost constraint, thereby giving contractors an incentive to pay limited attention to costs when working toward meeting technical or performance levels in order to earn a higher award fee.
Furthermore, for major projects, DOE has given insufficient emphasis to the oversight of contract administration, which begins after contracts are awarded and helps ensure that the department gets what it pays for. DOE needs to give increased emphasis to reviewing how it administers contracts; correcting previously identified weaknesses, such as overreliance on contractor data; and providing training to its contracting officers. Without such actions, the department is totally dependent on its contractors’ self-reports on their performance. Because of problems with the accuracy, completeness, and timeliness of the PARS data, senior DOE managers lack key project performance information for assessing the progress of many major projects and making decisions about corrective actions. In particular, because DOE has assessed the reliability of only three contractors’ project management systems that feed data into PARS, senior managers cannot be certain that the contractor systems are producing reliable data. Such data are critical to good project management and affect DOE’s assessment of contractor performance. Absent reliable data from the contractor systems, DOE lacks assurance that the fees it awards for a contractor’s project management actions are well deserved. 
To ensure the use of effective performance incentives for major projects, we recommend that the Secretary of Energy direct the Associate Deputy Secretary with responsibility for contract and project management to take the following two actions: develop a major projects chapter in the DOE Acquisition Guide that specifies a systematic contracting approach, including, for example, criteria for (1) ensuring that incentive fee awards are based on reliable performance data, (2) using appropriate cost and schedule incentives, (3) better linking fee awards to performance for major projects that are part of larger site cleanups, and (4) determining which indirect work-related activities should and should not be considered in awarding contractors’ fees, and clarify roles and responsibilities for reviewing contracts prior to award to ensure project management consistency. To strengthen departmental oversight of contract administration for major projects, we recommend that the Secretary of Energy direct the Associate Deputy Secretary with responsibility for contract and project management to take the following three actions: conduct comprehensive self-assessments of contract administration at least every 3 years, identify corrective actions to reduce the overreliance on unvalidated contractor data in awarding contract fees that was identified in previous self-assessments, and train contracting officials in earned value management.
To improve the reliability and usefulness of project performance data in PARS, we recommend that the Secretary of Energy direct the appropriate managers to take the following seven actions: develop a schedule for assessing the reliability of the contractors’ project management systems, giving priority to major projects and those projects with systems believed to be using incorrect methods to generate PARS data; revise DOE manual 413.3-1 to provide guidance that enhances the accurate reporting of total cost and project performance data into PARS, such as the reporting of life-of-project cost and schedule variances; expedite training for major project directors in earned value management; ensure that program office officials receive currently available project management training so that they can better identify the elements of a project and apply the project management concepts necessary for them to report performance data in PARS; incorporate forward-looking trend data into PARS reports so that senior managers can better identify negative trends and potentially take corrective action; explore options for ensuring that contractors provide cost and schedule performance data to PARS on a monthly basis, such as making monthly submissions a requirement in all applicable contracts; and explore options for providing senior DOE managers with more timely project performance data by, for example, electronically linking contractors’ project management systems to PARS. We provided DOE with a draft of this report for its review and comment. In written comments, DOE generally concurred with our recommendations but provided clarifying comments on four of the recommendations. (See app. III.)
First, concerning our recommendation that DOE develop a major projects chapter in its Acquisition Guide, DOE stated that the department has already developed an extensive body of material that constitutes a “systematic contracting approach” for the acquisition and management of departmental major projects, but added that the department will develop an overview and summary of this information in a major projects chapter in its Acquisition Guide. We believe this chapter will further enhance DOE’s guidance, particularly if the department provides criteria that address each of the four issues identified in our first recommendation. Second, concerning our recommendation on DOE’s comprehensive assessment of contract administration, DOE stated that the department did not stop conducting comprehensive assessments. In response, we have revised our recommendation to state that DOE should conduct these assessments at least every 3 years. Third, concerning our recommendation that DOE identify corrective actions for reducing overreliance on unvalidated contractor data, DOE stated that the department had already taken positive steps to reduce its overreliance on contractor data by, for example, reviewing and validating such data and project baselines. DOE added that the department would continue to identify any corrective actions necessary to reduce overreliance on contractors’ data in awarding fees. While we agree that validating project baselines is an important first step, we believe that DOE’s efforts to ensure that contractor performance data are reliable by certifying contractors’ project management systems is vital. Fourth, concerning our recommendation that DOE link PARS and contractors’ project management systems, DOE stated that our recommendation is too narrowly focused, particularly in light of DOE’s efforts to implement a departmentwide enterprise architecture solution. We agree, and we have revised our recommendation accordingly. 
In addition, DOE stated that it believes the draft report contained a number of inaccuracies and provided detailed comments. We have revised the report, where appropriate, in response to these comments. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3841. Key contributors to this report were Richard Cheston, Robert Baney, Nathan Anderson, Bernice Dawson, Cynthia Norris, Judy Pagano, and Doreen Feldman.

Appendix table columns: Cost incentive or cost constraint for individual project in contract? Schedule performance incentive for individual project in contract? Fee available for individual project?
Non-Nuclear Facility Decontamination and Decommissioning (OH-FN-0050)
Solid Waste Stabilization & Disposition (OH-FN-0013) Cost plus incentive

Hanford Reservation: River Protection
Hanford Tank Waste Treatment and Immobilization Plant (01-D-416)
Radioactive Liquid Tank Waste Stabilization & Disposition (ORP-0014)
Interim Tank Retrieval System (94-D-407)
Tank Farm Restoration and Safe Operations (97-D-402)

Hanford Reservation at Richland, Washington
Nuclear Facility Decontamination and Decommissioning—Fast Flux Test Facility Project (RL-0042)
Nuclear Facility Decontamination and Decommissioning—Remainder of Hanford (RL-0040)
Nuclear Facility Decontamination and Decommissioning—River Corridor Closure Project (RL-0041)
Nuclear Material Stabilization & Disposition – PFP (RL-0011)
Soil and Water Remediation – Vadose Zone (RL-0030) Cost plus award
Solid Waste Stabilization & Disposition—200 Area (RL-0013)

Idaho National Engineering and Environmental Laboratory
Advanced Mixed Waste Treatment Facility (97-PVT-2) Fixed price
Spent Nuclear Fuel Dry Storage (98-PVT-2)

East Tennessee Technology Park
Three-Building D&D and Recycle Project (OR-0040)
Facilities Capability Assurance Program (88-D-122-27 & 88-D-122-42)

Rocky Flats Facility at Denver, Colorado
Nuclear Facility D&D/North Side Facility Closures (RF-0040)
Nuclear Facility D&D/South Site Facility Closures (RF-0041)
Nuclear Material Stabilization & Disposition (RF-0011) Cost plus incentive
Solid Waste Stabilization & Disposition (RF-0013)

Microsystems and Engineering Science Application (01-D-108)
High-Level Waste Removal from Filled Waste Tanks (SR-0014C)
Although the September 2004 PARS report showed that this project would cost less than $400 million, we included it in our review because it was included in our 2002 review of DOE’s major projects. (GAO, Contract Reform: DOE Has Made Progress, but Actions Needed to Ensure Initiatives Have Improved Results, GAO-02-798 (Washington, D.C.: Sept. 13, 2002).)

This project was designated as a major project when DOE’s threshold was $100 million.

NNSA stated that each of these life extension projects involved multiple management and operating contractors (not 1 contract) in multiple locations, which is different from every other project that is listed in this appendix.

Our review focused primarily on 33 major projects that had passed, as of March 2004, the Department of Energy’s (DOE) Critical Decision 2 milestone—the point at which the department approves a project’s cost, schedule, and scope baselines on the basis of an approved conceptual design report and acquisition strategy. The projects we reviewed include 28 projects that cost more than $400 million each and 5 projects that our 2002 assessment defined as major projects because their total costs exceeded $100 million each. Our review did not include 46 major projects that, as of March 2004, had not passed the Critical Decision 2 milestone. Since March 2004, at least 6 major projects have passed the Critical Decision 2 milestone and now have approved baselines. The remaining major projects do not have approved baselines for measuring performance. To assess DOE’s use of performance incentives in contracts to effectively control cost and maintain schedules, we reviewed relevant requirements in the Federal Acquisition Regulation (FAR) and the DOE Acquisition Regulation, as well as DOE Order 413.3, DOE manual 413.3-1, and DOE’s Acquisition Guide, to obtain information on the factors that should be used in determining a contractor’s fee.
Through this effort, we identified whether the department provided guidance on the appropriate circumstances for using each contract type and the appropriate factors for determining a contractor’s fee. In particular, we examined requirements regarding contract provisions for award fees; cost, schedule, and performance incentives; and fee determination plans. We then compared government and departmental requirements with project-specific elements found in the contracts for each of the 33 major projects that have DOE-approved cost, schedule, and scope baselines to determine whether DOE has used appropriate (1) types of performance incentives, such as cost or schedule incentives, and (2) fee determination plans and fee payments. For instance, to assess whether DOE’s contracts used the appropriate incentives for each of three types of contracts, we compared the types of incentives that DOE’s contracts and relevant modifications used for each of the 33 major projects with the types of incentives that the FAR and the DOE Acquisition Regulation, as well as departmental orders and guidance, require. We then interviewed cognizant DOE officials to discuss reasons for the inconsistencies we found. In addition, we examined various contract-related documents associated with the 33 major projects we reviewed, such as the “Gold Chart” metrics that Environmental Management uses to measure its progress in DOE’s annual budget submission to the Congress. Specifically, we compared the Gold Chart’s performance metrics for each of Environmental Management’s 25 major projects with the performance measures in each project’s contract. Where differences were identified, we discussed the contents of the Gold Chart and the associated project’s contract with appropriate DOE contracting officials.
Furthermore, we interviewed officials in DOE’s Office of Contract Management and officials in the Office of Engineering and Construction Management to determine the extent to which DOE had reviewed, prior to award, the contracts for the 33 major projects to ensure that they included appropriate project management provisions. To assess the reliability of the data DOE uses to monitor and assess contractor performance, we reviewed the Office of Management and Budget’s (OMB) directives, DOE’s Reference Book for Contract Administrators, and other DOE documents and studies to identify relevant requirements and departmental guidance. We identified the roles and responsibilities of contract administration officials and examined the extent to which these officials adhered to their responsibilities. More specifically, we reviewed the department’s recent contract administration self-assessments and the frequency with which they were conducted. In so doing, we examined the department’s recommendations for improving contract administration and determined whether the recommendations were followed. If they were not followed, we discussed the reasons with cognizant officials in the Contract Administration Division. We also examined DOE’s order for acquisition career development, and other related DOE directives, to assess training requirements for DOE’s contracting officers and contracting officer representatives, particularly regarding training in earned value management principles. To determine the reliability of Project Assessment and Reporting System (PARS) data used by senior managers for project oversight, we assessed the accuracy, completeness, and timeliness of PARS data.
To assess the accuracy of the project performance data in PARS, we did the following:

Reviewed DOE Order 413.3, “Program and Project Management for the Acquisition of Capital Assets,” and its implementing guidance; OMB Circular A-11, part 7, “Planning, Budgeting, Acquisition, and Management of Capital Assets”; and various documents outlining the requirements in American National Standards Institute/Electronic Industries Association-748-1998, which defines the requirements for earned value management—the component of contractors’ project management systems critical for producing reliable project performance data.

Interviewed cognizant DOE officials in the Office of Engineering and Construction Management, the Office of Environmental Management, the Office of Science, and the National Nuclear Security Administration on the extent to which the performance data that DOE contractors’ project management systems produced for PARS met earned value management requirements. These officials included a DOE expert in earned value management, who is responsible for assessing the accuracy of the data that various projects’ systems produce. Where specific deficiencies in a contractor’s project management system were identified, we obtained relevant documents from the appropriate acting DOE project director and analyzed whether the contractor generated project performance data in accordance with the industry standard. We also interviewed officials in two other major contracting agencies—the Department of Defense and the National Aeronautics and Space Administration—about their experience in implementing earned value management requirements.

Compared data in monthly PARS reports provided to senior DOE managers from January through September 2004 with project-specific cost and schedule data obtained from earlier PARS reports, cognizant program offices, project status reports, Inspector General reports, and external reviews.
When we identified total cost or project performance data discrepancies between PARS and these other sources, we contacted relevant project officials to determine their cause.

Identified the extent to which contractor-generated data in PARS were sufficiently reviewed and verified by DOE by (1) identifying requirements in DOE Order 413.3 and its implementing guidance for the departmental review and verification of contractor project performance data and (2) interviewing DOE project management officials to determine whether the current breadth of review was adequate and what plans, if any, DOE had for increasing the rigor of its review and verification of contractor data.

To assess the completeness of PARS data, we determined whether the PARS database included major DOE activities—those costing more than $400 million or that DOE management had designated—identified in our prior reports, Inspector General reports, DOE press releases, and printouts from DOE’s Management Accounting and Reporting System. For projects that were not included in PARS, we contacted headquarters project management officials to determine if the projects met the criteria for PARS reporting. For projects that were included in PARS, we examined the completeness of reported data in various data fields by reviewing printouts from the PARS database and by reviewing the reports of the National Academies’ National Research Council and the Civil Engineering Research Foundation, which also examined the completeness of PARS data. In addition, we reviewed a 2004 report by the U.S. Army Corps of Engineers on a major project at Hanford, Washington. Moreover, we discussed options with DOE officials for reporting additional data that would improve PARS’ ability to enable senior DOE managers to identify potential cost or schedule challenges.

To assess the timeliness of PARS data, we reviewed PARS monthly reports to senior DOE managers and identified projects whose performance data were out of date.
For many of these projects, we talked to headquarters and project officials to determine the reasons for delay and explored options with them on how timeliness could be improved. We also interviewed numerous acting DOE project directors to learn how data from their project management systems were summarized and incorporated into the PARS database. In addition, we explored options with DOE headquarters and project officials for improving the timeliness of all data reported in PARS. Given our review of the documentation provided by DOE and our discussions with DOE officials, we have reservations about the reliability of PARS data. These issues are discussed in this report. We conducted our work from January 2004 through January 2005 in accordance with generally accepted government auditing standards, which included an assessment of data reliability and internal controls.

Nuclear Waste: Absence of Key Management Reforms on Hanford’s Cleanup Project Adds to Challenges of Achieving Cost and Schedule Goals. GAO-04-611. Washington, D.C.: June 9, 2004.

Nuclear Waste Cleanup: DOE Has Made Some Progress in Cleaning Up the Paducah Site, but Challenges Remain. GAO-04-457. Washington, D.C.: April 1, 2004.

Nuclear Weapons: Opportunities Exist to Improve the Budgeting, Cost Accounting, and Management Associated with the Stockpile Life Extension Program. GAO-03-583. Washington, D.C.: July 28, 2003.

Nuclear Waste: Challenges and Savings Opportunities in DOE’s High-Level Waste Cleanup Program. GAO-03-930T. Washington, D.C.: July 17, 2003.

Nuclear Waste: Challenges to Achieving Potential Savings in DOE’s High-Level Waste Cleanup Program. GAO-03-593. Washington, D.C.: June 17, 2003.

Department of Energy: Status of Contract and Project Management Reforms. GAO-03-570T. Washington, D.C.: March 20, 2003.

Major Management Challenges and Program Risks: Department of Energy. GAO-03-100. Washington, D.C.: January 1, 2003.

High-Risk Series: An Update. GAO-03-119.
Washington, D.C.: January 1, 2003.

Contract Reform: DOE Has Made Progress, but Actions Needed to Ensure Initiatives Have Improved Results. GAO-02-798. Washington, D.C.: September 13, 2002.

Nuclear Waste: Technical, Schedule, and Cost Uncertainties of the Yucca Mountain Repository Project. GAO-02-191. Washington, D.C.: December 21, 2001.

Department of Energy: Follow-Up Review of the National Ignition Facility. GAO-01-677R. Washington, D.C.: June 1, 2001.

Nuclear Cleanup: Progress Made at Rocky Flats, but Closure by 2006 Is Unlikely, and Costs May Increase. GAO-01-284. Washington, D.C.: February 28, 2001.

High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 1, 2001.

Major Management Challenges and Program Risks: Department of Energy. GAO-01-246. Washington, D.C.: January 1, 2001.

The Department of Energy (DOE) pays its contractors billions of dollars each year to implement its major projects--those costing more than $400 million each. Many major projects have experienced substantial cost and schedule overruns, largely because of contract management problems. GAO was asked to assess, for major departmental projects, (1) DOE's use of performance incentives to effectively control costs and maintain schedules, (2) the reliability of the data DOE uses to monitor and assess contractor performance, and (3) the reliability of the Project Assessment and Reporting System (PARS) data that senior managers use for project oversight. DOE could use performance incentives more effectively for controlling costs and schedules if it developed performance incentive guidance and assigned responsibility for reviewing a contract's project management provisions prior to award. DOE has awarded contracts for 15 of 33 major projects that use a schedule or other performance incentive without an associated cost incentive or constraint; thus a contractor could receive full fees by meeting all schedule baselines while substantially overrunning costs.
DOE has relied on unvalidated contractor data to monitor contractors' progress in executing major projects and to award fees for performance. In particular, DOE's self-assessment of contract administration in 2002 found that field personnel overly relied on contractors' accounting systems and contractor-collected data in assessing performance, without significant validation of those data. No subsequent self-assessment has been conducted to determine if this problem continues. Furthermore, DOE has not required that its contracting officers receive the training needed to assess the adequacy of contractors' project management systems that generate data used to monitor progress. Although development of PARS is a positive step, the reliability of the project performance data that PARS provides to senior DOE managers is limited by problems with accuracy, completeness, and timeliness. Regarding accuracy, DOE has not assessed the reliability of contractors' project management systems that feed data into PARS for 31 of 33 major projects, even though DOE believes that some systems are deficient. Regarding completeness, GAO identified 3 major projects that are not in PARS. As to timeliness, cost and schedule data for 6 major projects in the June 2004 PARS report were significantly out of date because DOE has not required contractors to submit timely performance data. These contract management problems limit DOE's ability to effectively manage its major projects and avoid further cost and schedule slippages.
The mission of IRS’s collection program, as set forth in the fiscal year 2015 collection program letter, is “to collect delinquent taxes and secure delinquent tax returns through the fair and equitable application of the tax laws, including the use of enforcement tools when appropriate, provide education to customers to enable future compliance, and thereby protect and promote public confidence in the American tax system.” IRS collects unpaid tax debts through a complex, three-phase process: (1) a notice phase, (2) a telephone phase (ACS), and (3) an in-person phase (Field collection). While these phases are not necessarily sequential (for example, a case could go directly from the notice phase to Field collection), with few exceptions every collection case is required to go through the notice phase. Of cases initiated in fiscal year 2014, more than two-thirds were resolved in the notice phase. If a case is not resolved during the notice phase, it is sent to the automated Inventory Delivery System (IDS), where the next step is determined. IDS will (1) identify and filter out uncollectible cases (i.e., remove them from active collection status through a process known as shelving), (2) categorize some cases as high risk, and (3) determine whether cases should be routed to either ACS or Field collection to potentially be worked. See figure 1. To make these determinations, IDS considers hundreds of factors about a case while carrying out two activities that facilitate closures, prioritization, and routing:

Modeling is a statistical process that analyzes the results of previously closed cases to predict likely case outcomes. In IDS, this process helps determine whether a case should be shelved when a model predicts it would not be collectible.

Risking determines whether a case is high risk and influences case routing and shelving decisions. For example, cases where a taxpayer owes a large amount of money would be considered high risk and would be a priority for selection.
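The IDS triage just described (model-driven shelving, risking, and rule-based routing) can be illustrated with a toy decision function. Every name, threshold, and rule below is hypothetical, chosen only to mirror the shape of the process, not IRS's actual logic:

```python
# Hypothetical sketch of IDS-style triage: a predictive model score can
# shelve a case, a large balance marks it high risk, and remaining cases
# fall through to ACS by default. Thresholds are invented for illustration.

def triage_case(predicted_collectibility: float,
                balance_due: float,
                already_in_acs: bool) -> str:
    SHELVE_CUTOFF = 0.05         # hypothetical model-score cutoff
    HIGH_RISK_BALANCE = 100_000  # hypothetical large-dollar cutoff

    if predicted_collectibility < SHELVE_CUTOFF:
        return "shelve"   # model predicts the case is not collectible
    if balance_due >= HIGH_RISK_BALANCE:
        return "field"    # high-risk case; in-person work (illustrative choice)
    if already_in_acs:
        return "acs"      # taxpayer already has an ACS case
    return "acs"          # default routing rule

print(triage_case(0.02, 5_000, False))    # shelve
print(triage_case(0.60, 250_000, False))  # field
```

In the real system, hundreds of factors and business rules feed these decisions; the sketch captures only the ordering of the steps: shelve, then risk, then route.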
The outputs from the modeling and risking activities are used in conjunction with hundreds of business rules to determine where to route cases for further collection actions. For cases routed to ACS and Field collection, the predictive modeling results from IDS are transmitted and used again to further prioritize cases. Cases are sent to ACS for several reasons, the two most common of which, based upon IRS data, are:

Taxpayer case already established in ACS: Taxpayer already has one or more delinquency issues that are being pursued in ACS.

Default routing rules: No other IDS rule for routing cases to Field collection or for shelving cases proves applicable, and therefore by default the module is sent to ACS.

Once a collection case is sent to ACS, it is either established as a new case or added to an existing case on a taxpayer. IRS pursues ACS collection cases by taking one or more actions, including reminding taxpayers of their tax delinquency through automated outgoing calls and letters, as well as placing liens on property or levying wages or assets. These actions may prompt taxpayers to call ACS to attempt to resolve their cases. If collection cases in ACS do not contain up-to-date contact information for the taxpayer, or information about sources that IRS could levy, IRS collection representatives will search for such information. If they find it, the case enters the pool of cases on which a potential action could be taken. ACS attempts to resolve two types of collection delinquencies: (1) balance due cases, in which a taxpayer has a tax liability to IRS, and (2) nonfiler cases, in which a taxpayer has an unfiled return. One case may include both types of delinquencies. Balance due cases can be resolved by the taxpayer paying the outstanding tax debt in full, or arranging with IRS to pay the full or partial outstanding tax debt over time (known as an installment agreement).
However, these cases could also be closed as currently not collectible if ACS collection representatives are unable to locate or contact the taxpayer, or if the taxpayer is facing economic hardship or unable to pay, among other reasons. Some cases closed as currently not collectible may reenter the collection inventory if IRS determines that in the future, the taxpayer will be able to pay some of the tax debt. Unfiled return cases may be resolved if the taxpayer files the delinquent return, or if the taxpayer is no longer liable for the unpaid taxes during the period in question, among other reasons. If the collection case is unresolved by ACS, it may move to Field collection to be pursued further, or it could be shelved. In November 2014, IRS began realigning collection operations across its Wage & Investment (W&I) and Small Business/Self-Employed (SB/SE) business operating divisions. According to IRS, the realignment will increase efficiency, reduce redundancies, and position IRS to improve identification of emerging compliance issues. Previously, ACS operations were split with W&I handling collections against individual taxpayers, and SB/SE handling individuals with business income and losses as well as all business entity taxpayers. As part of the realignment, IRS is consolidating all ACS call and support sites within SB/SE. At least through fiscal year 2015, ACS will operate separate phone numbers for W&I and SB/SE, and all sites will continue to handle the same type of taxpayers as they did prior to the realignment. IRS officials said that they plan to consolidate the management of its telephones by fiscal year 2016 to align with the new organizational structure. The ACS prioritization and selection process consists of three steps, as shown in figure 2. These steps are automated—that is, they do not involve manual intervention—and occur within seconds of each other. 
IRS managers responsible for ACS case prioritization and selection cannot change these steps without guidance from IRS collection program executives and input from IRS’s information technology staff. Overall, the ACS prioritization and selection process is set up so that high-priority cases are worked first. These include IRS program collection priorities and cases that have a greater potential to result in full payment or an installment agreement based on IRS’s predictive models discussed above. Documentation of the process is limited (an issue we discuss later in this report); therefore we based our description below largely on our reviews of the documentation that does exist, on multiple interviews with IRS officials, and on our observations during two visits to the ACS call site in Philadelphia. The first step in the ACS prioritization and selection process is to assign a case to one of three inventory groups: (1) special inventories, (2) model priority inventories, and (3) the general population inventory. During this step, ACS first assesses cases to determine whether they fall within one of ACS’s five special inventories, four of which align with IRS’s collection program priorities. ACS will look at the case characteristics, such as from which IRS division the case originated, to assign it to a special inventory. ACS works all but one of the special inventories at specific call sites where staff possess the expertise to work the cases. See table 1. Next, ACS assesses the remaining cases to determine whether they qualify for assignment to one of the business model priority inventories. These inventories contain cases that IRS predicts as having a probability of resulting in full payment or installment agreements, among other outcomes. To make this determination, ACS uses the predictive model scores generated by IDS. If the model score on the case is above a specified threshold, ACS will assign the case to the corresponding model priority inventory. 
ACS assigns any remaining cases not placed into one of the special inventories or model priority inventories to the general population inventory. Simultaneously with these actions, ACS assigns each case a risk category of high, medium, or low. ACS bases this assignment on the case’s characteristics, such as its age, the dollar value of the outstanding balance, and the type of return the taxpayer filed or failed to file, among others. According to IRS officials, the risk categories have remained largely unchanged since their inception in 2000. Many of the risk categories contain dollar thresholds, which have also remained unchanged; for example, if a taxpayer has a delinquent balance due that is within or above a specified amount, ACS will assign it to the corresponding risk category. In addition, ACS uses risk categories to determine how long to retain cases in ACS before sending them to the queue or shelving them. ACS sends high- and medium-risk cases to the queue after 26 weeks. Low-risk cases are not sent to the queue but are shelved after 104 weeks. IRS officials said that ACS retains low-risk cases for longer because ACS staff have to work through high- and medium-risk cases before they can get to low-risk cases. These officials added that if low-risk cases are not resolved in ACS and subsequently sent to the queue, it is unlikely that they would get worked by revenue officers in Field collection. After ACS assigns a case to an inventory and a risk category, the case receives a priority level, which determines the order in which ACS works the case within each inventory. ACS uses the assigned inventory, risk category, and the predictive model scores unique to each case to assign it a priority level. Figure 3 illustrates how ACS sets the priority level for business taxpayers. The ACS process for setting the priority level for individual taxpayers is similar except that it prioritizes high-income nonfiler and FERDI cases, among others, first. 
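The retention rules described above (sending high- and medium-risk cases to the queue after 26 weeks, shelving low-risk cases after 104 weeks) reduce to a small decision function. The function and return values below are ours, not IRS's:

```python
# Sketch of the ACS retention rule described in the text: high- and
# medium-risk cases go to the queue after 26 weeks; low-risk cases are
# shelved after 104 weeks. Names and return values are illustrative.

def retention_action(risk_category: str, weeks_in_acs: int) -> str:
    if risk_category in ("high", "medium"):
        return "send_to_queue" if weeks_in_acs >= 26 else "retain"
    if risk_category == "low":
        return "shelve" if weeks_in_acs >= 104 else "retain"
    raise ValueError(f"unknown risk category: {risk_category}")

print(retention_action("high", 30))  # send_to_queue
print(retention_action("low", 30))   # retain
```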
Following the IRS realignment, which is consolidating all ACS call sites within SB/SE, IRS officials said that their goal is to integrate the case prioritization and selection processes used for SB/SE and W&I into one approach during fiscal year 2016. IRS collection program officials said that they would like to continue to expand IRS’s use of the IDS model scores to prioritize cases within ACS. For example, ACS prioritizes cases into priority levels of 0 to 5, but does not prioritize the cases within each level. According to IRS collection program officials, using the model scores more robustly would allow IRS to better select among cases that have the same priority level or follow-up date. However, collection program officials have no plans to use the predictive model scores as the sole factor to prioritize cases as this would ignore the collection program priorities discussed above. In the third step, ACS assigns each case to a function unit. Function units are holding bins for cases while they wait for action by either ACS collection representatives or the system itself (which acts automatically). Generally, these actions include contacting taxpayers, taking enforcement actions, such as liens and levies, or investigating cases for contact information. After segregating cases by inventory and function unit, ACS works cases in a specific order, beginning with priority level, then by follow-up date, and finally by taxpayer identification number in ascending order. Although the ACS prioritization process is largely automated, IRS managers responsible for case prioritization and selection have some discretion in choosing the number and type of cases worked to ensure that ACS meets two key performance measures: (1) the number of balance due and nonfiler case closures and (2) the level of service, which measures the quality of collection representatives’ interactions with taxpayers who call into ACS. 
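The work order described above (priority level first, then oldest follow-up date, then taxpayer identification number in ascending order) maps directly onto a composite sort key. A sketch with invented case records, assuming lower priority numbers are worked first:

```python
# Illustrative ordering of ACS cases by (priority level, follow-up date, TIN).
# Case records and TIN values are invented for the example.
from datetime import date

cases = [
    {"tin": "222-00-1111", "priority": 2, "follow_up": date(2014, 3, 1)},
    {"tin": "111-00-2222", "priority": 0, "follow_up": date(2014, 5, 1)},
    {"tin": "333-00-0001", "priority": 0, "follow_up": date(2014, 2, 1)},
]

work_order = sorted(
    cases,
    key=lambda c: (c["priority"], c["follow_up"], c["tin"]),
)
print([c["tin"] for c in work_order])
# ['333-00-0001', '111-00-2222', '222-00-1111']
```

Python compares the key tuples element by element, so the follow-up date only breaks ties within a priority level, and the TIN only breaks ties within a follow-up date, which matches the ordering the text describes.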
To ensure ACS meets these measures, ACS managers consider: (1) how collection representatives’ time is spent; (2) how many notification and enforcement actions to initiate; and (3) how many and which cases to load for collection representatives to work. ACS managers first balance the time collection representatives spend researching cases with the time they spend answering phone calls from taxpayers. In fiscal years 2013 and 2014, collection representatives spent 77 percent of their time answering phone calls, up from 66 percent in fiscal year 2012, and thus spent less time researching cases, including those that may be of higher priority. Representatives spent more time answering phones primarily because in fiscal year 2014 there were 20 percent fewer collection representatives (in terms of full-time equivalents) in ACS than in fiscal year 2012. ACS managers also balance the number of notification and enforcement actions—such as outgoing calls, letters, liens, and levies—with the expected number of taxpayers who will call in response. ACS managers target an expected call volume that allows collection representatives to answer taxpayers’ calls in a timely manner. This helps ensure that ACS meets targets for level of service. Over the previous 3 fiscal years, ACS issued fewer notification and enforcement actions because of the declining number of collection representatives spending an increasing share of their time answering calls. For example, IRS issued 37 percent fewer levies and 31 percent fewer letters between fiscal year 2012 and 2014. Finally, ACS managers assess how many cases to upload for collection representatives to work. Managers track the number of cases and how long cases are residing in function units to ensure that older cases are worked by collection representatives in a timely manner. If the managers determine cases are residing in function units too long, they upload a block of cases from those function units to ACS’s inventory management tool. 
Each collection representative accesses the inventory management tool and works the next case available based on the priority level and oldest follow-up date. This process thus precludes collection representatives from selecting individual cases. In fiscal year 2014, almost half of the 3.5 million cases closed or transferred out of ACS were high-priority cases, such as high-income nonfiler and trust fund cases. See figure 4. Furthermore, three-quarters of cases closed in ACS in fiscal year 2014 were for taxpayers who had a balance due outstanding with IRS (discussed further below). See appendix III for data on nonfiler cases for taxpayers who failed to file tax returns. Table 2 shows the mix of closed cases by inventory type. About 9 percent were in one of ACS’s five special inventories, which align with IRS collection program priorities. About one-third of cases were from the model priority inventories, in which IRS predicts the case has a certain potential to result in full payment or an installment agreement, among others, and the remaining cases were in ACS’s general inventory. Broken out by type of taxpayer, about 60 percent of the balance due and nonfiler cases ACS closed in fiscal year 2014 were business taxpayers. These taxpayers included (1) business entities, such as corporations; (2) businesses that failed to file or fully remit their employment taxes; and (3) individuals with business income or losses. Of the 3.52 million cases closed in or transferred out of ACS in fiscal year 2014, about 1.76 million cases (50 percent) were resolved when the taxpayer paid the tax liability in full or established an installment agreement to pay the liability partially or in full, or when IRS secured the delinquent return. IRS collected almost $6.2 billion in delinquent revenue for the federal government from those cases closed in fiscal year 2014.
Another 1 million cases (29 percent) were transferred out of ACS to another location in the IRS collection program. Finally, about 373,000 cases (11 percent) were closed as currently not collectible or shelved, and the remaining 380,000 cases (11 percent) were closed for other reasons, such as being sent to IRS exam to potentially audit the taxpayer’s return. In fiscal year 2014, IRS generally performed better at closing high-priority business balance due cases than it did at closing these types of individual cases. Figure 5 shows the outcome of these types of closed cases. Of cases closed in ACS in fiscal year 2014, the median number of days high-priority individual and business balance due cases were open was 259 and 196 days, respectively. Of cases closed in fiscal year 2014, IRS closed 85 percent of business high-priority balance due cases within the first year they were open, whereas 61 percent of individual high-priority cases were closed within the first year. An effective internal control system can help federal agencies achieve their missions and objectives and improve accountability. As set forth in Standards for Internal Control in the Federal Government, also known as the Green Book, internal controls comprise the plans, methods, and procedures used to meet an entity’s mission, goals, and objectives, which support performance-based management. Internal controls help agency program managers achieve desired results and provide reasonable assurance that program objectives are being achieved through, among other things, effective and efficient use of agency resources. Internal control is not one event, but rather an ongoing series of actions and activities that occur throughout an entity’s operations. Two examples of internal control standards are the establishment of clear, consistent objectives and a commitment to documenting significant events.
Internal control standards can serve as tools to help IRS management ensure that ACS contributes to the collection program’s mission of collecting delinquent taxes and securing delinquent tax returns through the fair and equitable application of the tax laws. However, when we compared IRS’s processes to these standards, we found that they were deficient in some areas, thereby increasing the risk that ACS activities may not fully contribute to the collection program’s mission. According to internal control standards, having clearly documented and communicated program objectives is a precondition of any further internal control activity, such as risk assessment, and is key to helping entities meet their mission, objectives, and goals. IRS officials responsible for the collection program and ACS were unable to produce documentation regarding collection program or ACS objectives. While collection program executives, in an interview with us, described the objectives of ensuring adequate coverage of different types of collection cases and maximizing the revenue collected by IRS, neither of these concepts was clearly documented and communicated to IRS staff. We found elements of what could be developed into program objectives in various program documents—such as the collection program letter, which includes the collection mission, and collection policy statements. However, none of these documents were identified by IRS officials as establishing program objectives. IRS officials also pointed to a variety of established performance goals and measures used within the collection program as objectives. However, according to internal control standards and the GPRA Modernization Act of 2010 (GPRAMA), performance goals and measures are not objectives, but rather should be used to assess whether an entity is achieving its objectives, and are ultimately derived from those objectives.
The concept of fairness is of central importance to the collection program, as reflected in IRS’s and the collection program’s mission statement. However, the term “fairness” was not defined or operationalized in any ACS or collection program documents as it relates to case selection. IRS officials did share a number of viewpoints of how fairness could be defined. While these informal views on fairness may have meaning to some, they do not constitute an institutional definition of fairness, nor are they substitutes for documented objectives that are accessible and can be communicated to staff. One IRS official responsible for collection case selection processes offered a definition of fairness as “treating like taxpayers alike”—that is, that taxpayers with similar characteristics face an equal likelihood of being selected for collection. Another IRS official said ACS is fair because it is a “next case” system—that is, cases are automatically prioritized and selected for ACS staff to work, without IRS staff having any role in selecting them. According to this IRS official, fairness is “built into the system.” Another IRS official added that while IRS does not define and document fairness in the IRS collection program letter, IRS details the procedures and processes in the Internal Revenue Manual for how IRS staff are supposed to deal with taxpayers calling into ACS. He said that consistently following these procedures constitutes treating taxpayers fairly. The absence of clearly documented objectives and a clearly communicated definition of fairness present a number of challenges for IRS. First, without clearly formulated and communicated program objectives, IRS cannot know how well ACS contributes to the collection program mission and is not able to effectively assess the risks ACS may face or its overall effectiveness. 
Further, without documentation, a key concept like fairness may be open to multiple interpretations by ACS management and staff, as well as by the public. Ensuring that the multistep ACS prioritization and selection process is documented will allow IRS to communicate the concept of fairness consistently and reduce the risk that the case selection and prioritization process is perceived as unfair. Internal control standards state that management needs to identify and analyze relevant risks associated with achieving the objectives. In addition, management needs to decide how to manage those risks and what actions should be taken in response. IRS officials draw on a mix of daily, weekly, monthly, and ad hoc meetings as their means of identifying and managing operational risks, which could affect the functioning of ACS. An IRS official described one example of risk identification: the agency’s process identified that some levy notices contained incorrect tax liability information. This error presented a risk that ACS might systematically supply taxpayers or third parties with incorrect information. She also described the steps IRS took to address this risk. These included working with IRS information technology staff to move the affected cases out of function units that issue letters and levies, as well as issuing an alert to inform relevant ACS staff of the issue in case they received questions from taxpayers. ACS managers responsible for case prioritization and selection also have quality reviews and other processes to review ACS operations. For example, program reviews, among other things, ensure ACS collection representatives work cases according to IRM procedures. In fiscal year 2014, each of the seven program reviews completed at SB/SE call sites identified areas for improvement in case processing and outlined corrective actions each call site should take in response.
These reviews check compliance with current procedures but do not assess the ACS case prioritization and selection process, according to IRS officials. IRS managers, including those in ACS, are to complete annual assessments of the effectiveness of controls within their own areas of responsibility, which include identifying and reporting risks. Additionally, IRS is implementing an Enterprise Risk Management (ERM) program, which will consider risks more systematically across IRS. The SB/SE Commissioner chartered a risk committee for SB/SE, which provides input into the overall IRS ERM process. The collection program also has a risk council, which provides input to the SB/SE Risk Committee. ACS is represented on this council by the Director of Campus Collections, who oversees the various ACS call sites that are spread across the country. As of April 2015, SB/SE’s preliminary risk register listed 31 distinct risks, including some collection-related risks. IRS plans to periodically update the risk register and monitor the risks included on the register. IRS is also to develop and take actions in response to the risks listed on the register. In addition to these actions, ERM has procedures whereby ACS staff and management can elevate risks for management and executives to consider. The implementation of the ERM process in SB/SE is still in its initial stages, with the preliminary risk register created in April 2015 and the first collection risk council meeting held that same month. As a result, it is too early to determine the effectiveness of the ERM in identifying and managing risk. As noted above, ACS managers responsible for case prioritization and selection track performance measures, such as the number of balance due and nonfiler case closures as well as level of service.
IRS officials responsible for ACS review these measures in daily, weekly, and monthly meetings, which provide information on how ACS is functioning and on whether it is meeting the targets established for its key measures. Internal control standards require management to establish activities to review performance measures and indicators, as well as to compare actual performance to planned or expected results. ACS managers responsible for case prioritization and selection monitor and review a range of reports. These reports provide data on staffing, total ACS dollars collected, enforcement activities (e.g., liens and levies), and customer satisfaction, among other measures. Collection program executives review similar ACS measures, as well as information on enterprise collection priorities, some of which ACS pursues. IRS has established a management infrastructure for both assessing risk and monitoring performance. However, the lack of clearly documented collection program and ACS objectives undercuts its effectiveness. Without clearly documented program objectives, IRS cannot ensure that its risk assessment processes effectively identify and analyze relevant risks associated with achieving objectives. ERM’s training for managers reinforces the importance of understanding objectives at various levels across IRS and establishing a risk management process to minimize the effects of risk to the accomplishment of those objectives. Similarly, IRS cannot know the extent to which ACS performance measures align with or contribute to the collection program mission without deriving measures from established objectives. Internal control standards state that such controls need to be clearly documented and that documentation should be readily available for examination. However, IRS has incomplete documentation to describe ACS’s process for assigning a case to an inventory and risk category, priority level, and function unit. 
The information we used to outline the ACS case prioritization and selection process discussed above came mostly from IRS officials over a series of discussions, rather than from clear and comprehensive documentation. IRS was able to provide documentation on the case characteristics that apply to each risk category and the IRM section that explains ACS operations, including information about how cases are assigned to function units. However, this documentation does not provide an overview of the multistep ACS case prioritization and selection process. Using screen shots from their computer system, IRS officials were able to demonstrate how ACS assigns a case to an inventory and a priority level. However, the screen shots did not stand alone without explanation and input from IRS officials. IRS was able to provide documentation about how the process has changed over time, such as information technology requests to implement the use of the predictive model scores and alter the model priority inventories in ACS. IRS officials acknowledged that they have little formal documentation along these lines to comprehensively describe the ACS process. In lieu of documentation, IRS has to rely on the institutional knowledge of management and staff to be able to describe the process. For example, key IRS staff have been employed in ACS since at least 2000, when ACS began using the risk categories. The officials also acknowledged that the process needs to be written out to effect a smooth transition to future staff and management. Indeed, IRS officials told us that some ACS staff will soon be eligible for retirement, which highlights the importance of documenting the multistep ACS process. Without adequate documentation, it is also difficult to determine whether the ACS case prioritization and selection process effectively supports collection program and ACS missions and objectives.
Furthermore, as IRS realigns ACS collection operations within SB/SE, having baseline documentation of the process as structured would assist IRS in communicating the process to staff and in making decisions about how best to consolidate ACS. Finally, the absence of a fully documented prioritization process may make it difficult for IRS to defend against accusations that it is not following its collections procedures, since these procedures are not documented and cannot be communicated to parties inside and outside of IRS. According to internal control standards, separate evaluations of controls can be useful by focusing directly on the controls’ effectiveness at a specific time. The scope and frequency of separate evaluations should depend primarily on the assessment of risks and the effectiveness of ongoing monitoring procedures. ACS has no procedures or structure for regularly completing periodic evaluations of the ACS case prioritization and selection process, including its use of the special inventories, risk categories, predictive models, and priority levels. For example, according to IRS officials, the ACS risk categories have been relatively static since their creation in 2000, and dollar values within those risk categories have not been adjusted for inflation. While some changes have been made to the ACS case prioritization and selection process, these changes were based on ad hoc studies or recommended by management based on changes in priorities, rather than the result of regular and periodic evaluations of the ACS case prioritization process. For example, as a result of the IRS Collection Process Study completed in 2010, IRS decided to retain high-risk cases longer in ACS to ensure those cases get worked. IRS officials said that while they have no procedures to regularly or periodically evaluate ACS case prioritization, such evaluations could provide useful information for consolidating ACS operations in SB/SE, in light of the IRS realignment. 
In addition, according to IRS officials, the realignment is serving as a catalyst to review and enhance the ACS process in SB/SE, such as reviewing and enhancing the use of the risk categories and revisiting the amount of time cases are retained in ACS. IRS’s December 2014 Collection Workload Optimization Project (CWOP) produced a number of findings and recommendations for how ACS could better manage and prioritize cases. For example, CWOP recommended that ACS stop prioritizing cases by follow-up date because selecting cases by the oldest follow-up date may result in ACS selecting cases with less collection potential. In our discussions about the status of the CWOP recommendations, IRS officials said that while CWOP remains an ongoing effort, they do not have a plan or time frame in place for next steps and corrective actions in response to the report. IRS officials stated that they have been unable to take action with regard to certain CWOP recommendations due to scarce resources and competing priorities for IRS’s information technology services. Internal control standards note that managers are to (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies’ operations, (2) determine proper actions in response to findings and recommendations from audits and reviews, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. Without periodically reviewing and evaluating the ACS case prioritization and selection process and ensuring that findings from evaluations, such as CWOP, are addressed and corrective action is taken where necessary, ACS may be missing opportunities to better prioritize its workload and improve collection results. 
For example, CWOP recommends that ACS prioritize collection cases in each of its inventories by the model scores rather than by the priority level codes. According to IRS, this would help ACS select the cases with the greatest collection potential. Without such evaluations, IRS may also not be able to ensure that ACS case prioritization is working as intended and may be missing opportunities to more effectively align the ACS case prioritization process with IRS’s strategic objectives and with collection program and ACS objectives, once developed. In addition, if the prioritization and selection process is not periodically evaluated over time, it could lose its value and usefulness. For example, IRS does not know how the static dollar thresholds for the ACS risk categories affect the composition of cases assigned certain priorities and risks over time. Outdated dollar thresholds may no longer be serving their intended purpose of identifying high-risk or priority cases as they have remained static while taxpayer incomes have increased over time, potentially changing ACS’s composition of cases. ACS is one of IRS’s primary enforcement tools for compelling noncompliant taxpayers to file their tax returns and pay their taxes. ACS ensures millions of taxpayers do just that, which results in billions of dollars being collected annually for the federal government, thereby helping to address the tax gap and encouraging future voluntary compliance. ACS’s process for prioritizing and selecting cases helps to ensure that high-priority taxpayers, including federal employees and retirees, high-income nonfilers, and large corporations, as well as taxpayers who have a higher probability of paying their taxes in full, comply with the tax laws. But the absence of key management controls— objectives, documentation, and procedures to complete periodic evaluations—creates multiple challenges for IRS. 
Without clearly documented objectives, IRS cannot know if ACS is meeting its mission and the agency will not be able to manage risk or monitor performance as well as it otherwise could. The lack of clear and comprehensive documentation on ACS’s multistep case prioritization and selection process creates the risk that the process will not be communicated consistently to parties inside and outside of IRS. IRS has relied on institutional knowledge from experienced staff, some of whom are now retirement eligible. However, the risk of inconsistently communicating the ACS process increases as subsequent IRS employees work within ACS. By not periodically evaluating how the ACS process is structured (or acting on the findings of the ad hoc evaluation that was conducted), IRS is missing opportunities to enhance ACS’s effectiveness. Moreover, the absence of these controls could affect IRS’s ability to successfully consolidate W&I and SB/SE ACS call sites as part of the ongoing IRS realignment. Lastly, IRS risks the appearance that the ACS prioritization and selection process is unfair to taxpayers because IRS is unable to communicate key pieces of information, such as its definition of fairness, to the public. To help ensure the IRS collection program meets its mission and selects cases fairly, we recommend that the Commissioner of Internal Revenue take the following four actions related to ACS:

1. Establish, document, and implement objectives for the collection program and ACS, and define the key term of “fairness” as it applies to collection activities, which can be communicated to IRS staff.

2. Establish and implement clear guidance and documentation for the ACS case prioritization and selection process, including inventory, risk, and priority designations, as well as changes to those designations over time, and communicate them to appropriate IRS staff.

3. Establish, document, and implement procedures to complete periodic evaluations of the ACS case prioritization and selection process and structure. The evaluation should cover the composition of the risk categories, model thresholds, and dollar thresholds used to prioritize cases.

4. Establish, document, and implement a plan and time frame to ensure follow-up for ad hoc evaluations of the ACS case prioritization and selection process.

We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. The Deputy Commissioner for Services and Enforcement provided written comments dated August 18, 2015, which are reprinted in appendix V. IRS stated that it agrees with the importance of sound internal controls and is committed to their improvement, especially in the areas we recommended. To that end, IRS noted actions that it has taken to improve collection performance in fiscal years 2014 and 2015, including updates of analytical models and the realignment of the collection program to be entirely within SB/SE. IRS acknowledged that its documentation has not kept pace and affirmed its commitment to bringing the policies and procedures up to date. However, IRS did not believe that the level of current documentation has undercut the effectiveness of ACS. In response to our recommendation to establish, document, and implement objectives for the collection program and ACS, IRS said it will review its current objectives for both, which it identifies as the collection program priorities, to identify and implement any additional objectives. IRS also said it plans to define key terms such as “fairness” as it applies to collection activities in a data dictionary, which is communicated to IRS staff. As we noted above, ensuring clearly documented and communicated objectives exist will allow IRS to use the management infrastructure in ACS for both assessing risk and monitoring performance to its full potential.
In addition, communicating a key concept like fairness to IRS staff will help reduce the risk that it is open to multiple interpretations. In response to our recommendation on establishing and implementing clear documentation, IRS said that it will review and implement clear guidance and documentation that can be communicated to IRS staff. As we discussed, having adequate documentation for the ACS case prioritization and selection process will assist IRS in communicating the process to staff, and in making decisions about the best way to consolidate ACS under the recent realignment of collection operations. In response to our recommendation to periodically evaluate the ACS case prioritization and selection process, and follow up on prior ad hoc evaluations, IRS said it will review and, if needed, update its internal management documents and ensure follow-up for ad hoc evaluations. IRS also noted that any evaluation of the ACS case prioritization and selection process completed will be based on a risk assessment at that time. Given that components of the ACS case prioritization and selection process have been in place since at least 2000 without being evaluated, ACS may be missing opportunities to better prioritize its workload and improve collection results; periodic evaluations and follow-up on ad hoc evaluations would help address this. Lastly, IRS noted that our report did not identify any instances where the selection of a case was considered inappropriate or unfair. However, as described in our scope and methodology, we did not design our study to look for cases of inappropriate selection, but rather to assess the internal controls that help safeguard the fairness of the case selection process. By evaluating ACS’s internal control framework for selection, we were able to determine whether IRS had processes in place that help provide reasonable assurance of fair selection not just of cases selected in the past but also on an ongoing basis.
We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives were to (1) describe the Automated Collection System (ACS) process to prioritize and select collection cases and the results of that process for fiscal year 2014, and (2) determine how well the ACS case prioritization and selection process supports the collection program mission and objectives. To describe ACS’s process to prioritize and select collection cases, we obtained and reviewed, to the extent they were available, IRS documents on how cases are prioritized and selected once they are received within ACS. The documents reviewed include sections of the Internal Revenue Manual (IRM), program documents, IRS reports, and presentations prepared by IRS staff. To better understand background and context for ACS, we reviewed information on the Inventory Delivery System process, which routes cases to ACS, and interviewed IRS officials responsible for the IRS collection program. We also twice visited an ACS call site in Philadelphia. We interviewed IRS managers in the offices of Headquarters Collection and Campus Collection, including IRS managers responsible for managing case inventory in ACS, on how collection cases are received in and flow through the ACS process. 
We also interviewed IRS officials regarding the various factors that the officials take into consideration in deciding how to work certain cases to meet ACS performance measures and how IRS’s recent realignment will affect ACS’s process. In addition, we observed ACS staff working cases and taking telephone calls from taxpayers. To better understand the scale of operations for ACS and performance measures, we reviewed data from a number of prepared reports and other data provided by IRS covering fiscal years 2012 through 2014. The data we received included the number of notification and enforcement actions taken in ACS. We derived most of the data from IRS Collection Activity Reports (CAR). We previously used CAR data to report on the IRS notice phase process in 2009. At that time, we interviewed IRS officials with knowledge of CAR data about the steps taken to ensure data accuracy. We determined that the CAR data were sufficiently reliable for our purposes. To describe the results of the ACS case prioritization and selection process for fiscal year 2014 (the most recent full year available), we obtained data from ACS on collection taxpayer cases that had been closed in fiscal year 2014, along with partial-year data from fiscal year 2015. We analyzed and reported closed ACS collection cases by the type of taxpayer (individual or business), case priority, type of delinquency, type of closure, and the average and median number of days cases were open in ACS. For purposes of our analysis of taxpayer type, individual taxpayers reflect cases handled by ACS call sites in the Wage & Investment division. We defined business taxpayers as those cases that were prioritized and worked within the Small Business and Self Employed division.
These include the following types of taxpayers:

Individual taxpayers who report business income, such as (1) nonfarm sole proprietorships, which file Form 1040, Schedule C, are unincorporated, and are owned by a single individual whose net business income or loss is included in the owner’s individual adjusted gross income; (2) landlords, who file a Form 1040 and Schedule E, Part I, and are individuals who report rental real estate activity on Part I of Schedule E; or (3) farmers, who file a Form 1040 and Schedule F or Form 4835 and are individuals who report farm income or landowners who report farm rental income;

Businesses that failed to file or fully remit their employment taxes; or

Business entity taxpayers, such as corporations.

For the purposes of our analysis, we treated taxpayers who had both a balance due and nonfiler account on their case, known as a combination case, as a balance due case, consistent with how IRS reports ACS data. For the purposes of this review, we determined that the ACS data used in our analysis were reliable. Our data reliability assessment included reviewing relevant documentation, interviewing knowledgeable IRS officials, and reviewing the data to identify obvious errors or outliers. To assess how well the processes for case selection support collection program objectives and mission, we compared documentation for the processes identified above to selected standards in the Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999). We also reviewed IRS and collection program guidance in which objectives could potentially be stated or implied, including the IRM, the mission statements of various collection program subunits and policy statements on collections, annual collection program letters for fiscal years 2013 through 2015, and potentially collection program-related objectives in IRS Publication 3744, Internal Revenue Service Strategic Plan FY2014-2017.
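The case-level summary statistics this appendix describes (for example, the median number of days closed cases were open, broken out by taxpayer type) can be sketched as follows. The records and field names below are hypothetical illustrations, not the analysis code we used.

```python
from statistics import median

# Hypothetical closed-case records for illustration only.
closed_cases = [
    {"taxpayer_type": "individual", "days_open": 259},
    {"taxpayer_type": "individual", "days_open": 400},
    {"taxpayer_type": "business", "days_open": 196},
    {"taxpayer_type": "business", "days_open": 140},
]

def median_days_by_type(cases):
    """Group closed cases by taxpayer type and return the median days open."""
    by_type = {}
    for case in cases:
        by_type.setdefault(case["taxpayer_type"], []).append(case["days_open"])
    return {t: median(days) for t, days in by_type.items()}

print(median_days_by_type(closed_cases))
```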
We then assessed whether ACS’s procedures, IRM sections, IRS reports, and related internal controls conformed to the relevant standards for internal control in the federal government. To determine which internal control standards were most relevant, we used our Internal Control Management and Evaluation Tool, in conjunction with observations based on our preliminary audit work, to select the standards that most closely related to ACS activities. We then focused our assessment of ACS internal controls around our selected standards by interviewing IRS officials and reviewing available documentation. To determine IRS’s definition of fairness as it applies to collection activities, we reviewed the ACS procedures and process for case prioritization and selection. Furthermore, we surveyed relevant industry and institutional sources, and determined that there is no standard definition of fairness in the context of tax collection specifically—or even tax administration more generally—to which IRS could appeal in lieu of having its own internally generated definition of fairness within the collection program. To determine whether there are procedures in place to monitor, evaluate, and review the ACS prioritization process periodically, we reviewed the documentation mentioned above. We also interviewed relevant IRS officials concerning their understanding of the mission, objectives, and internal controls of the collection program and ACS, and about the extent to which procedures exist to monitor ACS case prioritization and selection. We conducted this performance audit from October 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Once a case arrives, ACS assigns it to an inventory and priority level, as well as to a function unit. Depending on the function unit, ACS may take a number of actions, such as searching for contact information and a levy source, contacting taxpayers, or issuing levies or liens as appropriate. See figure 6, which depicts the ACS process. Also, the ACS process for setting the priority level for individual taxpayers is similar to the process used for business taxpayers. The main difference is that, within the Wage & Investment (W&I) process, high-income nonfiler and Federal Employee/Retiree Delinquency Initiative cases, among others, are prioritized first, because, unlike the Small Business/Self Employed (SB/SE) call sites, the W&I call sites are not responsible for pursuing trust fund cases. The W&I prioritization process proceeds similarly to the SB/SE process thereafter. See figure 7. Finally, after segregating cases by inventory and function unit, ACS works cases in a specific order, beginning with priority level, then by follow-up date, and finally by taxpayer identification number in ascending order. See figure 8 for how ACS sorts cases by priority and inventory within a function unit. In November 2014, IRS realigned compliance operations across its W&I and SB/SE business operating divisions. Prior to the realignment, ACS operations were split, with W&I handling individual taxpayers and SB/SE handling business taxpayers. As part of the realignment, IRS consolidated all ACS collection operations within SB/SE under the authority of a single IRS collection director. Figure 9 shows the various ACS call and support sites, which receive taxpayer calls and research collection cases.
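The case-ordering rule described above (priority level first, then follow-up date, then taxpayer identification number in ascending order) amounts to a composite-key sort. A minimal sketch follows, using a hypothetical case record rather than IRS's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical case record; field names are illustrative, not IRS's schema.
@dataclass
class Case:
    tin: str          # taxpayer identification number
    priority: int     # 1 = highest priority
    follow_up: date   # next follow-up date

def work_order(cases):
    """Order cases as the report describes: by priority level,
    then follow-up date, then TIN ascending."""
    return sorted(cases, key=lambda c: (c.priority, c.follow_up, c.tin))

cases = [
    Case("300-00-0003", 2, date(2014, 1, 5)),
    Case("100-00-0001", 1, date(2014, 2, 1)),
    Case("200-00-0002", 1, date(2014, 1, 5)),
]
print([c.tin for c in work_order(cases)])
# → ['200-00-0002', '100-00-0001', '300-00-0003']
```

Tuple comparison makes the three-level ordering a one-line sort key: ties on priority fall through to follow-up date, and ties on both fall through to TIN.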
Through the end of fiscal year 2015, the eight former W&I call sites will continue to handle and answer phone calls from individual taxpayers, while the seven former SB/SE call sites will do the same for business taxpayers. Table 3 shows how notifications and enforcement actions have fallen significantly between fiscal years 2012 and 2014. This appendix shows the results of nonfiler cases closed in ACS in fiscal year 2014. This appendix shows the results of balance due and nonfiler cases closed in ACS from October 1, 2014, to February 28, 2015. In addition to the contact named above, MaryLynn Sergent, Assistant Director, Vaughn Baltzly, Jehan Chase, David Dornisch, Steven Flint, Robert Gebhart, Ted Hu, Robert Robinson, Alan Rozzi, John Sawyer, Albert Sim, and Jason Vassilicos contributed to this report.

IRS's ACS is one of the primary means for pursuing taxpayers who failed to fully pay their taxes or file their tax return in a timely manner. From fiscal years 2012 through 2014, ACS staff has declined 20 percent while the number of unresolved collection cases at year-end has increased 21 percent. Given these trends, IRS must make informed decisions about the collection cases it pursues to ensure the program is meeting its objectives and mission. GAO was asked to review the ACS process for prioritizing and selecting collection cases. This report (1) describes the ACS process to prioritize and select collection cases and the results of that process for fiscal year 2014, and (2) determines how well the ACS case prioritization and selection process supports the collection program mission and objectives. GAO reviewed IRS guidance, processes, and controls for prioritizing and selecting collection cases, reviewed ACS data, assessed whether IRS's controls followed Standards for Internal Control in the Federal Government, and interviewed IRS officials.
The Internal Revenue Service's (IRS) Automated Collection System (ACS) has a multistep, automated process to prioritize and select cases of unpaid taxes and unfiled tax returns to pursue. ACS assesses cases to determine the order in which to work them, based on IRS's collection program priorities, the likelihood the case will be resolved, and the type of tax and amount owed. ACS also reviews cases to determine what action to take based on whether a levy source or contact information is known for taxpayers. ACS then contacts taxpayers according to each case's assigned priority and may issue a levy or lien against the taxpayer. ACS managers balance cases worked to ensure ACS achieves its case closure and taxpayer service measures. These decisions include how many notification and enforcement actions to take and how many cases to assign to IRS staff so that cases are worked in a timely manner. About half of the cases closed in ACS in fiscal year 2014 were high priority, including such issues as employers not paying federal employment taxes. Of the 3.5 million cases closed or transferred out of ACS in fiscal year 2014, IRS collected almost $6.2 billion. IRS generally had more success in collecting from individual taxpayers than from business taxpayers. However, because IRS has not identified objectives for the collection program and ACS, it is difficult to assess the program's overall effectiveness. ACS has processes for managing risk and reviewing performance, but has not implemented other key internal controls. This increases the risk that the collection program's mission of fair and equitable application of the tax laws will not be achieved. GAO identified deficiencies in the following internal control areas. Collection program and ACS objectives, and the key term of fairness, are not defined: IRS officials responsible for the collection program and ACS were unable to produce documentation of collection program or ACS objectives.
Although fairness is specified in the collection mission statement, IRS has not defined or operationalized it in any ACS or collection program documents. In the absence of clearly documented objectives and a clearly communicated definition of fairness, IRS cannot know how well ACS contributes to the collection program mission or ensure that the case prioritization and selection process is fair. The lack of clearly articulated objectives undercuts the effectiveness of IRS efforts to assess risks and monitor ACS performance. ACS case prioritization and selection process is not documented: IRS has little formal documentation that describes the ACS prioritization and selection process. Without adequate documentation, it is difficult for IRS to determine whether the ACS case prioritization and selection process effectively supports the collection program mission. Effectiveness of ACS process is not periodically evaluated: IRS has no procedures for periodically evaluating the ACS case prioritization and selection process and has not acted on recommendations from a recent ad hoc study. Given that key components of the ACS process have remained relatively unchanged since its creation, IRS may be missing opportunities to better prioritize its workload, which could improve collection results. GAO recommends that IRS take four actions to help ensure the collection program meets its mission, such as establishing, documenting, and implementing objectives for the collection program and ACS, and establishing, documenting, and implementing procedures to complete periodic evaluations of the ACS case prioritization and selection process. In commenting on a draft of this report, IRS said it generally agreed with all of GAO's recommendations.
State trading enterprises (STE) have existed for some time and have been considered legitimate trading entities by the General Agreement on Tariffs and Trade (GATT) since 1947. The current Canadian Wheat Board (CWB) was formally established in 1935, after other cooperative-like organizations disbanded. It is one of the largest grain traders in the world, and the largest exporter of wheat and barley to the United States. STEs have been formed in various countries, at different times, and for various reasons. For example, the Australian Wheat Board (AWB) was created in 1939 to help Australian farmers manage difficulties in marketing wheat during wartime conditions, while Cyprus’ Carrot and Beetroot Marketing Board was established in 1966 because competition among producers was depressing the domestic and export prices of carrots and beetroot. Generally, a goal of export-oriented STEs is to maximize financial returns through the regulation of commodity sales from a particular country or region. The level of government involvement and overall size of the STEs’ operations vary widely; thus, it is difficult to generalize about STE operations or motivations on a global basis. We have already published a number of reports on state trading issues, including (1) a July 1995 report that provides a brief summary of trade remedy laws available to investigate and respond to activities of entities trading with the United States, including STEs; (2) an August 1995 report on the GATT/World Trade Organization (WTO) practices that apply to STEs and the effectiveness of those disciplines to date; and (3) a June 1996 report that focuses on the activities of three STEs, including the CWB, and their potential capabilities to distort trade in their respective commodity markets.
In Canada, prairie provincial wheat pools were formed in 1924 but went into temporary receivership after the stock market crash of 1929. Following the financial hardship faced by farmers during the Depression, the Canadian government passed the Canadian Wheat Board Act of 1935, establishing the CWB. The CWB was also given control of marketing oats and barley, although oats were removed from the CWB’s purview in 1989. The CWB is currently managed by three commissioners, who are appointed by the government of Canada. A producer advisory committee, composed of 11 farmer-elected representatives from the prairie provinces, provides the CWB with advice on operational matters. The CWB employs over 500 people and has annual revenues of over $4.4 billion. Although Canada produced only 5 percent of the world’s wheat and 10 percent of the world’s barley in 1996, it held a 20-percent share of the world’s wheat export market and about 20 percent of the world’s barley export market in that year (see figs. 1.1 and 1.2). In 1996, the United States ranked fourth in the world in both wheat and barley production, while Canada ranked fifth in wheat and third in barley production. The United States imports more red spring wheat, durum wheat, and barley from Canada than from any other country. These imports constitute a significant share of the U.S. market. In crop year 1996-97, about 14 percent of the durum wheat and 7 percent of the red spring wheat supply in the United States came from Canada. The United States imported this wheat in part because of problems with disease, adverse weather conditions, and a shortfall in domestic supply. Also, U.S. food use of durum has risen 125 percent over the past 2 decades; thus, as the demand for durum wheat has increased, so too have U.S. imports of this wheat from Canada.
The vast majority of durum wheat, red spring wheat, and barley arrives in the United States from Canada by rail direct from Thunder Bay, Ontario, and the Canadian western prairie provinces of Manitoba, Saskatchewan, Alberta, and the Peace River district of British Columbia. In 1997, 70 percent of these Canadian grains were shipped by rail, 18 percent by vessel, and 13 percent by truck. For some export sales, the CWB relies on “accredited exporters” (AE), who are national and multinational companies authorized to purchase grain from the CWB for resale to customers. Some of the AEs are subsidiaries of U.S.-based multinational firms; some of the transactions that the AEs facilitate involve selling grain to other subsidiaries of the same company. Although the majority of the CWB’s sales are made directly to an end user, CWB officials told us that AEs facilitate all wheat sales to buyers in the United States. With the rise in U.S. imports of Canadian wheat beginning in the mid-1980s, U.S. wheat farmers became increasingly concerned about what they perceived as Canadian wheat export subsidies and unfair barriers to U.S. wheat exports. U.S. wheat farmers thought that Canadian transportation subsidies gave Canadian wheat farmers an unfair advantage in foreign markets. Canadian wheat and barley producers have historically received transportation subsidies that reduced shipping costs. U.S. wheat farmers’ market access concerns centered on Canadian import permits, or license requirements, which essentially prevented U.S. farmers from selling their grain to Canada without a permit. In addition, the U.S. government was concerned that the CWB might be selling its grain in an unfair manner. Specific provisions of the U.S.-Canada Free Trade Agreement (CFTA), effective January 1, 1989, dealt with several of these aspects of U.S.-Canadian grain trade. 
For example, under the CFTA, Canada agreed to eliminate Canadian transportation subsidies for agricultural goods originating in Canada and shipped via West Coast ports for consumption in the United States. CFTA called for ending Canadian import permits for grain pending changes in the comparative level of U.S. and Canadian support for producers. CFTA also dealt with the pricing of agricultural products, including wheat, providing that neither the United States nor Canada could export agricultural goods to the other at a price below the acquisition price of the goods plus any storage, handling, and other costs. Differences in U.S. and Canadian interpretations of this provision eventually led the United States to invoke CFTA dispute settlement procedures in May 1992. The subsequent 1993 CFTA dispute panel decision called for an audit of CWB pricing. An audit was conducted and its findings were reported in December 1993 (see ch. 4 for a discussion of the CFTA dispute panel decision). The Canada-United States Joint Commission on Grains, mandated by a 1994 U.S.-Canadian memorandum of understanding (MOU), released its final report in October 1995. Composed of 10 nongovernment U.S. and Canadian officials with equal representation, it was formed to assist the two governments in reaching long-term solutions to existing problems in the grains sector. The report addressed policy coordination, cross-border trade, grain grading and regulatory issues, infrastructure, and domestic and export programs and institutions (see app. I for a chronology of U.S.-Canadian grain trade relations). While some areas of debate have been resolved, recent events have shown that difficulties remain in U.S.-Canadian relations regarding the grains trade. Neither CFTA nor the use of trade remedies has resolved U.S. producer concerns about U.S. access to the Canadian wheat market, CWB practices, and increasing Canadian wheat imports into the United States.
Despite CFTA’s gradual elimination of all duties between Canada and the United States by January 1, 1998, and the removal of Canadian import license requirements in 1995, some barriers continue to impede trade. Canada still requires an end-use certificate and subsidizes grain transportation through its ownership of railcars, and reciprocal access to grain handling and transportation systems in the two countries is yet to be achieved. In addition, the U.S. Trade Representative (USTR) remains concerned that the CWB may be using its monopoly to undercut U.S. wheat prices and that U.S. farmers continue to be hurt by increased Canadian wheat imports. Regarding market access, Canada removed its import license requirement in 1991 but still requires that U.S. wheat be accompanied by an end-use certificate to maintain Canada’s varietal controls and quality standards. The U.S. requirement that imports be accompanied by an end-use certificate is a direct response to Canada’s requirement and will remain in effect until Canada removes its end-use requirement. It also serves as a method to prevent imports from being used in U.S. foreign aid, export, and credit guarantee programs. The Canada-United States Joint Commission on Grains recommended that both countries remove these requirements. Another long-standing issue involves U.S. wheat exporters’ access to Canada’s primary grain elevator system. Canadian access to U.S. elevators, on the other hand, is relatively less impeded. In addition, Canada provides its wheat farmers with government railcars to transport their wheat. The Canada-United States Joint Commission on Grains recommended that both countries pursue the long-term goal of providing reciprocal access to each other’s grain infrastructure. In January 1998, the United States and Canada announced plans to implement a pilot program to facilitate U.S. wheat exports to Canada that would enable the United States to ship its grain directly to Canadian grain elevators. 
The United States is negotiating with Canada over Canada’s current requirement that U.S. grain be accompanied by a phytosanitary certificate—an assurance that the grain is disease free. The United States is also concerned about the costs of the pilot program and how it will be applied to imports. The United States continues to disagree with Canada’s interpretation of CFTA provisions defining the acquisition price of grains and the decision of the CFTA durum panel. In her May 1998 testimony before the Senate Agriculture Committee, the U.S. Trade Representative stated that there was a problem with CFTA in this regard and that the United States may try to revisit this issue in the upcoming 1999 WTO multilateral trade negotiations involving agriculture. In March 1998, the United States requested a new audit of the CWB’s grain pricing related to the acquisition price. Canada agreed to the new audit but disagreed with the United States on its terms. Canada wants to maintain the audit terms both countries agreed to after the 1993 CFTA dispute settlement panel decision. The United States wants to (1) deviate from the panel decision by applying a broader definition of “acquisition price”; (2) expand the audit to cover not only durum but also spring wheat and barley; and (3) include in the audit Canadian grain export prices to countries other than the United States. Wheat imports from Canada into the United States have risen since 1990, as shown in figure 1.3. Since the early 1990s, durum and red spring wheat imports have increased. Between 1990 and 1997, Canadian red spring wheat imports grew by more than 2,106 percent to 1,449,600 tons, while Canadian durum wheat imports have risen by 57 percent to 427,600 tons. In our 1996 report on STEs, we developed an economic framework to assess the capabilities of three STEs, given their relationships to domestic producers, governments, and foreign buyers. 
In this report, at the request of Senator Byron Dorgan, we look more closely at one of those three STEs—the CWB—and review a number of issues regarding its exports to the United States. We reviewed the following: (1) CWB operations, government assistance to the CWB and the Canadian farmer, and ongoing changes to the environment in which the CWB operates; (2) the availability of data to ascertain CWB pricing practices, and efforts to increase the amount of data available; and (3) the nature of trade remedies available to address the operations of STEs, and the frequency with which these remedies have been applied to STEs. In addition, we are providing information on the CWB’s role in commodities and futures markets, a summary of studies on the CWB’s effect on the Canadian farmer, and the applicability of U.S. antitrust laws to the CWB. To explore the operations of the CWB, the government assistance available to the CWB and Canadian farmers, and ongoing changes to the environment in which the CWB operates, we reviewed background documents on the CWB and the North American grain trade provided by officials in the Canadian embassy in Washington, D.C., the CWB, the USDA’s Foreign Agricultural Service (FAS), the Department of Commerce, and USTR. We also traveled to Canada to gather documents and interview officials from various agencies within the Canadian government who are involved with the Canadian grain trade, including Agriculture Canada, the Department of Foreign Affairs and International Trade, the Canadian Grain Commission, Finance Canada, the Canadian International Grains Institute, and Transport Canada. In addition, we interviewed officials from the CWB, representatives from the Canadian railroad industry, and private sector representatives in the grain trade. To learn about the CWB from the U.S. government perspective, we interviewed officials from FAS and USTR.
We also traveled to North Dakota to speak with state government officials, local farmers’ groups, private grain traders, and academics about the impact of Canadian grain imports on the U.S. grain industry. To learn about the availability of data on CWB pricing practices and determine the efforts to increase the amount of data available, we built upon the information we gathered by reviewing the data collected by the U.S. Customs Service, the U.S. Bureau of the Census, and USDA on Canadian wheat imports. Specifically, we obtained Customs’ detailed database on all wheat and barley shipments from Canada entered from 1992 through 1996. From Census, we obtained aggregate import data for 1992 through 1997. We also obtained information collected and compiled by USDA on the end use of wheat imported from Canada for the most recent 3 marketing years. We then reviewed all of the data to determine what they revealed about Canadian wheat imports. We requested a variety of data from the CWB, including transactional data, but were denied access. We also interviewed officials from Customs, Census, and USDA about their import data. Our discussions included what procedures they use for collecting and compiling the data and for ensuring quality control, how the data are used by government agencies and private industry, and what the strengths and limitations of the data are. We also reviewed the agencies’ written procedures and regulations governing the collection and compilation of the data as well as internal evaluations of their data programs. In addition, we relied on previous GAO evaluations of the systems and processes for measuring U.S. trade with other countries. To identify efforts to increase the amount of data available, we evaluated whether the WTO has made progress in increasing the amount of information available on the CWB and other STEs, as well as WTO members’ compliance with STE reporting requirements.
We reviewed the annual reports and minutes of formal meetings of the WTO Working Party on STEs, and WTO members’ STE reporting submissions for 1995-97, as well as USDA and USTR documents. We also interviewed WTO Secretariat officials, individual members of the Working Party on STEs, and USDA and USTR officials. In addition, we reviewed relevant documents, including the 1994 Understanding on the Interpretation of GATT article XVII, which deals with STEs. To identify the nature of trade remedies available to address the operations of STEs, we reviewed relevant U.S. statutes and documents published by the International Trade Commission (ITC), the Congressional Research Service, and GAO. We also reviewed the dispute settlement mechanisms within the CFTA, the 1994 North American Free Trade Agreement (NAFTA), and the WTO. To determine the frequency with which dispute settlement procedures have been used for matters involving the CWB and other STEs, we reviewed appropriate provisions of international trade agreements. We identified WTO member country STEs, the type of information available about STEs, and STEs’ compliance with the WTO’s article XVII by reviewing article XVII STE notifications submitted to the WTO Secretariat from 1995 to 1997. We also reviewed past GAO work. We requested that USTR identify all disputes under international agreements that involved an STE that had been notified to the WTO. To determine the frequency with which U.S. trade remedy laws have been applied to matters involving the CWB and other STEs, we reviewed those laws and spoke with officials at USTR, the ITC, the Department of Commerce, USDA, the Department of Justice, and the NAFTA Secretariat. We obtained a list of STEs that provided notifications to the WTO Secretariat. We then asked USTR, the ITC, and Commerce to search their trade remedy enforcement records for any actions involving those STEs, going back to 1980.
We asked the agencies to provide us with a description of the actions and their outcomes. All of the agencies responded to our request by identifying trade remedy actions that, according to their records, had involved STEs. The agencies stated, however, that it was difficult to determine conclusively whether the cases the agencies had identified represented the entire universe of such matters involving STEs. The agencies provided several reasons for this difficulty: (1) the voluminous amount of documentation on some types of cases coupled with an absence of electronic records to facilitate searches; (2) the possibility that an STE’s foreign name translation in a case file would differ from the translation on our STE list; (3) the fact that under U.S. law, STEs as institutions would not be the primary subject of a trade remedy action; and (4) the fact that the trade remedy action may not be country or foreign exporter specific; that is, it may involve imports of a particular product from all sources. We also examined Federal Register notices and reports issued by the agencies on their findings. To review CWB participation in U.S. futures and commodity markets/exchanges, we met with CWB officials and reviewed CWB documents concerning CWB objectives in participating in U.S. markets. We also discussed the participation of STEs in these markets with the U.S. Commodity Futures Trading Commission (CFTC) and officials representing the Minneapolis Grain Exchange (MGE), the Chicago Board of Trade (CBOT), and the Kansas City Board of Trade (KCBOT). To gather information on the CWB’s impact on the Canadian farmer, we reviewed Canadian and U.S. studies that measured the economic impact of the CWB and spoke with some of the authors of those studies. We also spoke with private grain traders and Canadian farm groups that represented both general farmers’ interests and specific commodity interests. To determine the applicability of U.S. 
antitrust law to the CWB, we interviewed officials at the Department of Justice’s Antitrust Division and the Federal Trade Commission and reviewed the Antitrust Enforcement Guidelines for International Operations, issued by those agencies in 1995. We also reviewed statutes and case law relevant to the extraterritorial applicability of U.S. antitrust law. We conducted our review from September 1997 to June 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Agriculture, the U.S. Trade Representative, the Secretary of Commerce, the Commissioner of Customs, the U.S. Attorney General, and the Chairman of the ITC. We received technical comments from all six agencies, and incorporated them into the report where appropriate. USDA, USTR, and Commerce found the report to be accurate, fair, and balanced. We also discussed the factual content of the report as it related to the CWB and the Canadian government with embassy representatives from Canada and with representatives from the CWB. Canada and the CWB provided technical comments, which were incorporated into the report where appropriate. As an STE, the CWB has certain marketing characteristics and government support. These include a monopoly over most sales of Canadian wheat and barley and pricing flexibility through guaranteed supply and delayed payments to farmers. It also enjoys government guarantees of its financial operations and favorable interest rates on loans. Through other programs, the Canadian government provides additional subsidies to wheat and barley producers. However, the CWB faces changes in its structure and operations due to recently completed legislative reforms. These reforms alter its corporate governance and its relationship with the government. Additionally, other changes in the Canadian grain marketing system are underway, including potential changes in rail regulation, U.S.
investment in the industry, and the decline of import STEs in other countries. The CWB has the sole authority to market, for export and for domestic human consumption, wheat and barley grown in the western prairie provinces of Manitoba, Saskatchewan, Alberta, and the Peace River district of British Columbia. The CWB controls all exports of wheat and barley products through an export licensing process and directly markets most of its exports, including those made to import STEs. The operating costs of the CWB are deducted from payments made to producers. It enjoys pricing flexibility due to its assured supply of grain and ability to price discriminate. This assurance of supply is not absolute, however, as producers are free to plant non-CWB crops. In addition, the CFTA established that CWB sales into the United States could not fall below the acquisition price. The CWB reports that its monopoly status as a “single-desk seller” of western Canadian wheat allows it to extract more money from the world market on behalf of farmers than would be the case without this government-mandated status. This status allows the CWB to capture premiums through price differentiation, a practice in which the CWB sells grain at differing prices into different markets and to different customers. CWB-contracted economic studies have concluded that the single-desk status of the CWB gives it market power in the world wheat and barley trade and increases farmer revenue through price discrimination. One study found that for 1981-94, the CWB on average increased wheat revenues by $14.56 per ton, or $289 million per year, when compared to a system of multiple sellers offering wheat in competition with each other. These premiums represent about 8 percent of CWB wheat revenues for those years. A second study found that the CWB increased barley producer revenues on average by $70.5 million per year (1986 to 1995) when compared to what they would have received in a system of multiple sellers.
These premiums represent about 15 percent of the CWB barley revenues for those years. In contrast, a study financed by the provincial government of Alberta concludes that the CWB lacks market power and finds that the Canadian grain system is more costly than the comparable U.S. system. The study concluded that Japan is the only market where a single-desk premium may exist, and that, based on Japan’s share of CWB sales, the single-desk seller premium is small. As a single-desk seller, the CWB has market power and can price discriminate, according to officials at six grain companies located in the United States and in Canada. “Price discrimination” is the practice of charging a higher price to some buyers and a lower price to others in order to maximize profits. CWB contracts include a provision that stipulates that the grain is for shipment to and consumption in a specific country. Grain companies reported that this stipulation prevents the AEs from competing against each other on price. One grain company characterized this as being good for Canadian and foreign producers but bad for consumers. As a single-desk seller, the CWB may also choose to sell quantities to a certain market that differ from what would be supplied by the private trade. Several grain industry officials representing grain companies, grain consumers, and industry organizations reported that they believe the CWB withholds grain sales to the United States, with some citing the sales’ political sensitivity. An implication of withheld sales is that exports to the United States would increase in the absence of the CWB. Representatives of wheat farmers in Canada and the United States, as well as Canadian government officials, also believe that grain exports to the United States would be greater if the CWB ceased to exist. As the sole buyer of most Canadian wheat, the CWB has pricing flexibility and can deal in long-term contracts. 
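The premium estimates cited above can be cross-checked with simple arithmetic. The premium figures come from the CWB-contracted studies; the implied revenue and tonnage bases below are our own back-of-the-envelope derivations, not numbers reported by the studies:

```python
# Cross-check of the CWB premium estimates. Premium figures are from the
# CWB-contracted studies; the implied revenue and tonnage bases are
# derived here for illustration only.

wheat_premium_total = 289e6      # dollars per year, 1981-94 average
wheat_premium_per_ton = 14.56    # dollars per ton
wheat_premium_share = 0.08       # ~8 percent of CWB wheat revenues

barley_premium_total = 70.5e6    # dollars per year, 1986-95 average
barley_premium_share = 0.15      # ~15 percent of CWB barley revenues

# A premium divided by its revenue share implies the average revenue base.
implied_wheat_revenue = wheat_premium_total / wheat_premium_share    # ~$3.6B
implied_barley_revenue = barley_premium_total / barley_premium_share # ~$0.47B

# The premium divided by the per-ton figure implies average annual tonnage.
implied_wheat_tons = wheat_premium_total / wheat_premium_per_ton     # ~19.8M tons

print(f"implied wheat revenues:  ${implied_wheat_revenue / 1e9:.2f} billion/yr")
print(f"implied barley revenues: ${implied_barley_revenue / 1e9:.2f} billion/yr")
print(f"implied wheat tonnage:   {implied_wheat_tons / 1e6:.1f} million tons/yr")
```

The implied figures are internally consistent with the studies' stated premiums and shares, but they are approximations, not audited totals.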
The CWB has an assured supply of grain that it does not compete for and, according to a Canadian government official, acquires the grain from farmers at about 70-75 percent of the expected final return. This provides the CWB with a large margin within which to set prices and absorb any risk from changes in market conditions. The Canadian government confirms that, within its mandate to maximize returns to producers, the CWB has a certain latitude in pricing grain to customers. In contrast, private grain companies compete to acquire farmer-held stocks of grain and then compete to market it to buyers. Their profits and operations are funded from the margin between their acquisition and selling prices, and they encounter considerable risk from changes in market conditions. CWB officials note that the CWB engages in price discrimination to benefit from differing market conditions around the world. Certain customers may be willing to pay more for CWB grain than other customers, so the CWB could charge them a higher price. In turn, the CWB may be able to lower its price to certain importing countries without affecting its sales to premium customers. We were not able to analyze the conditions under which this occurred, since we did not have access to CWB transactions, which the CWB considers commercially sensitive. CWB pricing flexibility into the U.S. market was restricted as part of the CFTA, which went into effect in 1989. According to the agreement, sales into the United States could not be for less than the acquisition price plus the cost of transportation and handling of the grain. The United States and Canada disagreed on what constituted the Canadian acquisition price. A CFTA dispute resolution panel determined that the acquisition price is the initial payment the CWB gives farmers when they deliver their grain (see ch. 4). The initial payment is set by the government of Canada in consultation with the CWB.
However, changes in how the government establishes the initial payment since the CFTA went into effect may have increased CWB flexibility in pricing grain headed into the United States. A USTR negotiator of the CFTA recalls that during the 1980s the initial payment was established at close to 90 percent of the expected final payment to producers. During the early 1990s, the CWB initial payments were set at 80 percent, while, according to an official of the government of Canada, the initial payment is now set at between 70 and 75 percent. We requested data on the expected final payment to producers from the government of Canada to confirm this trend, but the request was declined. According to a Canadian government official, the decisions on the amount of the initial payment are considered to be “Advice to Ministers” and thus confidential. A USTR official believes that the reduction in the initial payment gives Canada more latitude in lowering its prices to the U.S. market. This effectively lowers the pricing floor established by the CFTA. The government of Canada interprets these changes differently. According to government officials, while the events during the 1980s and early 1990s may well have influenced the subsequent judgments of those involved in making decisions concerning initial payment levels, it would be misleading to attribute any trend in government decision making in this area to the CFTA or to the relationship to acquisition prices. The Canadian government provides important financial assistance to the CWB. The government has covered CWB wheat and barley pool deficits on seven occasions over the course of its 63-year history. The government of Canada guarantees certain export credit sales of the CWB and compensates the CWB in case of losses or defaults. As a crown corporation, the CWB’s financing activities are guaranteed by the government. Thus, the CWB is able to show net profits on its financing activities.
The Canadian government offers direct financial support to the CWB under certain conditions. The Canadian government guarantees CWB initial payments and adjustments to initial payments as paid to farmers. In any year that sales revenue is insufficient to cover the initial payments, including any adjustments to the initial payment, the government pays the shortfall. Since the first CWB deficit in 1969, the government has provided monies in 6 other years for deficits in the wheat and barley pools. The total value of these transfers for losses in wheat and/or barley marketing operations is $1.3 billion. See table 2.1 for more details on these deficit payments. In addition to direct government support in cases of operational deficits, the government provides indirect support through guarantees of CWB borrowings. These guarantees allow the CWB to borrow in commercial markets at favorable interest rates. In parliamentary testimony, a CWB official estimated that in 1995, CWB borrowing costs were $30 million lower than if the CWB had borrowed at the rate faced by a large multinational grain company and $46 million lower when compared to the normal commercial business rate of borrowing. The effect of this difference varies over time. In order to update the estimated interest savings, we recomputed the savings based on data provided by the CWB on the interest difference between rates the CWB posts to investors on its commercial paper and market rates of commercial paper from highly rated, nongovernment-guaranteed issuers. As of December 1997, the annual interest savings on CWB borrowing were between $9.4 million and $14 million, substantially lower than the 1995 value. The CWB is in a position to profit from the interest rate differential between government borrowing costs and commercial rates on behalf of producers. 
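The mechanics of the borrowing-cost saving described above reduce to a simple product: the annual saving equals the outstanding principal times the spread between the commercial rate and the government-guaranteed rate. The principal and spread in the sketch below are hypothetical round numbers chosen only to land inside the reported $9.4 million to $14 million range; the actual CWB commercial paper spreads underlying our December 1997 estimate are not reproduced here:

```python
# Illustrative computation of the CWB's interest saving from
# government-guaranteed borrowing. The inputs below are hypothetical.

def annual_interest_saving(principal: float, rate_spread: float) -> float:
    """Interest saved per year from borrowing at a guaranteed rate
    instead of the market rate (rate_spread in decimal form)."""
    return principal * rate_spread

# Hypothetical: $6 billion outstanding at a 0.20-percentage-point spread.
saving = annual_interest_saving(6e9, 0.0020)
print(f"annual saving: ${saving / 1e6:.0f} million")
```

The actual saving moves with both the amount outstanding and the spread, which is why the estimate fell from roughly $30 million in 1995 to $9.4 million-$14 million by December 1997.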
In addition to enjoying reduced borrowing costs, the CWB can also earn interest on funds held following a grain sale but before making final payment to the producers. During the interim, the CWB can invest the funds at market rates and earn interest. The CWB does not distinguish in its public reporting between its earnings related to the indirect government support of below market rates of interest and its earnings from the reinvestment of sales revenues on behalf of producers. The latter earnings do not represent a benefit of the CWB since individual producers could also have invested the revenues at market rates if the CWB paid them at the time of sale. In 1997, the net interest from both of these sources was reported by the CWB to be about $61 million, nearly double the CWB’s $34 million in administration costs. In addition to the contributions of the government to the CWB’s operational borrowing, the government also provides guarantees for export credit sales. The government of Canada provides export credit guarantees for government buyers of CWB wheat through the Credit Grain Sales Program. As of March 1998, the CWB had outstanding loans to foreign countries of $4.7 billion. At the end of the 1997 marketing year, the CWB accounts receivable from foreign customers was $4.6 billion. Of those loans, only 3.6 percent were classified as current, with 83 percent rescheduled, 9.1 percent overdue, and 4.5 percent subject to a Paris Club rescheduling. In certain cases, the government of Canada negotiates debt relief agreements with nations as do other exporting countries under the Paris Club process. Where there has been a rescheduling, this occurred for reasons of national policy, including reasons related to humanitarian concerns. In cases where a country with outstanding CWB exposure under the Credit Grain Sales Program receives concessional (favorable) treatment, the government of Canada makes up the difference owed to the CWB by the debtor country. 
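The loan classification shares just cited can be converted into approximate dollar amounts against the $4.7 billion outstanding; note that the published shares sum to slightly more than 100 percent because of rounding in the source data:

```python
# Dollar breakdown of the CWB's outstanding foreign loans (March 1998),
# using the classification shares reported in the text.

outstanding = 4.7e9  # dollars

shares = {
    "current": 0.036,
    "rescheduled": 0.83,
    "overdue": 0.091,
    "Paris Club rescheduling": 0.045,
}

for status, share in shares.items():
    print(f"{status:24s} ${outstanding * share / 1e9:5.2f} billion")

print(f"share total: {sum(shares.values()):.1%}")  # 100.2% (rounding)
```

On these shares, roughly $3.9 billion of the portfolio had been rescheduled, against only about $0.17 billion classified as current.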
For the CWB, both the principal and the interest are guaranteed by the government. Over the last 6 years, the government has reimbursed the CWB $918 million for lost principal and interest under Paris Club debt relief. The Export Development Corporation, another Canadian crown corporation, provides export insurance and financing services for export sales; for example, providing insurance against the risk of nonpayment by a foreign bank in export transactions involving a letter of credit. The Export Development Corporation does not release information by commodity for reasons of commercial confidentiality; thus, we are unable to report on what share of its business involves CWB exports. Public reporting of government export finance subsidies is limited to the aggregate costs of the negotiated debt relief that is published in Canada’s Main Estimates. The structure of Canadian credit guarantees by commodity or by nation is not released by the government to the public or to GAO for reasons of commercial confidentiality. The government’s support of the CWB is supplemented by the fact that the CWB is not taxed on its activities. The CWB is exempt from tax on its income and capital because it is a crown corporation. The returns paid to the farmers are taxed as regular income. The Canadian government also provides other subsidies to the grain sector through income support policies, income and crop insurance, and provision of railroad hopper cars. While Canadian government support for wheat and barley is substantial, it has fallen significantly in the last several years. Many nations, including Canada and the United States, support their agricultural producers through direct and indirect assistance and subsidies. The government of Canada provides an annual estimate of this support for wheat and barley. 
The reported Canadian subsidies include costs associated with insurance programs for farmers; one-time payments that compensate agricultural landowners for the removal of the long-standing subsidy of railroad shipments of western agricultural products; other federal government expenditures on research and development, marketing, and promotion; and subsidies provided by provincial governments. In 1996, the combined subsidies for wheat and barley amounted to $922 million, or 19 percent of production valued at $4.8 billion. (See table 2.2 for the breakout by category.) This reflects a substantial reduction from 1990, when the combined subsidies for wheat and barley amounted to $3.2 billion, or 68 percent of production valued at $4.7 billion. Moreover, the payments to landowners for the end of Canada’s subsidy for the transport of western grain ended with the 1996 payments. Thus, according to a Canadian official, for 1997 Canada projects that the reported subsidies will be about half of the 1996 rate and will be 10 percent of the farmgate value of production. The subsidy data provided by Canada are, however, incomplete, as several government subsidies are not included. These consist of the previously discussed lower interest loans of the CWB and government reimbursements for losses on credit sales, as well as government support of the Canadian International Grains Institute. In addition to these excluded CWB subsidies, Canadian government data also exclude the value of government-provided hopper railcars that are supplied to transport prairie grains. The government of Canada acquired 13,120 hopper cars during the 1970s and early 1980s, with 12,780 now in service. According to a Transport Canada official, these cars are an indirect subsidy to western grain producers, because producers are not charged for their services. This subsidy applies to all prairie grains, including the CWB grains. 
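The subsidy-to-production ratios cited above can be verified directly from the reported totals:

```python
# Verifying the subsidy-to-production ratios for 1996 and 1990
# from the totals reported in the text.

subsidies_1996, production_1996 = 922e6, 4.8e9
subsidies_1990, production_1990 = 3.2e9, 4.7e9

print(f"1996: {subsidies_1996 / production_1996:.0%} of production value")
print(f"1990: {subsidies_1990 / production_1990:.0%} of production value")

# Canada's projection for 1997: about half the 1996 subsidy level,
# which works out to roughly 10 percent of the farmgate value.
projected_1997_ratio = (subsidies_1996 / 2) / production_1996
print(f"1997 (projected): {projected_1997_ratio:.0%}")
```

The computed ratios (19 percent, 68 percent, and about 10 percent) match the figures reported by the Canadian government.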
In addition to the federal government-owned hopper cars, the government leases another 1,982 hopper cars, and two provincial governments contribute another 1,973 cars. In total, government-provided cars constitute about two-thirds of the 25,000 grain cars in Canada. The government of Canada does not have an estimate of the subsidy value of the government-owned and -leased cars that it provides to the western grain industry. We estimate that the government grain car fleet, if procured through private sector leases, would cost between $61 million and $68 million per year. This subsidy benefits all western grain producers, including barley and wheat producers. The proportion of this subsidy accruing to wheat and barley producers is about 64 percent, so the subsidy to wheat and barley producers is between $39 million and $44 million. Changes to the Canadian grain system are ongoing, and several events have the potential to alter the U.S.-Canadian grain trading environment. The Canadian government has recently enacted legislation that alters the operational structure of the CWB. Also, recent changes in government subsidies for the transportation system and proposed further deregulation may have an impact on grain flows to the United States. Meanwhile, increasing foreign investment and consolidation of the grain distribution and handling system in Canada, as well as privatization of grain import functions by many of the CWB’s customers, are changing the CWB’s operating environment. A law to amend the Canadian Wheat Board Act of 1935 was passed by the Canadian parliament on June 11, 1998. This law, known as Bill C-4, provides for a number of changes to the CWB in the areas of corporate governance and operational flexibility (see table 2.3), although it is too early to speculate about the legislation’s effects on Canadian farmers and the Canadian grain system as a whole.
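The railcar figures in the preceding discussion can be reproduced with simple arithmetic, using only the counts and estimates stated in the text:

```python
# Government-provided hopper cars as a share of the Canadian grain fleet,
# and the wheat/barley portion of our lease-equivalent subsidy estimate.
# All inputs are figures reported in the text.

federal_owned = 12_780   # federally owned cars in service
federal_leased = 1_982   # federally leased cars
provincial = 1_973       # cars from two provincial governments
total_fleet = 25_000     # total grain cars in Canada

government_cars = federal_owned + federal_leased + provincial
share = government_cars / total_fleet
print(f"government-provided cars: {government_cars} ({share:.0%} of fleet)")

# Apportioning the $61M-$68M lease-cost estimate to wheat and barley.
lease_low, lease_high = 61e6, 68e6
wheat_barley_share = 0.64
print(f"wheat/barley subsidy: ${lease_low * wheat_barley_share / 1e6:.0f}M-"
      f"${lease_high * wheat_barley_share / 1e6:.0f}M per year")
```

The 16,735 government-provided cars come to about 67 percent of the fleet, consistent with the "two-thirds" figure, and 64 percent of the lease-cost range reproduces the $39 million to $44 million apportioned to wheat and barley producers.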
Bill C-4 was first put before the Canadian House of Commons on September 25, 1997, after more than 2 years of consultation with farmers and the Canadian grain industry. The bill builds on the principle of increasing direct producer input into the priorities and operations of the CWB while retaining the reporting mechanisms that allow the Canadian government to provide the CWB with financial guarantees and monopoly exporter status. A key provision in the new law replaces the CWB’s commissioner structure of management with a President and board of directors. Ten representatives on the 15-member board will be directly elected by producers; 5 board members, including the President, will be government appointees. Since the directors will not be elected until later this year, the new management structure of the CWB is unlikely to be in place in time to affect this year’s sales policy. Under Bill C-4, the board has numerous administrative powers, including the authority to designate its own chairperson; determine the salaries of the directors, chairperson, and President; and review the performance of the President. Furthermore, all directors will have full access to information about CWB operations, including audited financial statements; they will also be able to review the efficiency of the CWB with respect to grain sale prices, price premiums achieved, and operating costs. Bill C-4 also grants the CWB the ability to buy grain and reimburse producers for grain on more flexible terms. The CWB can now offer new payment options for farmers and enhance producers’ cash flows. For instance, the CWB will be able to close pool accounts before January 1 and thereby make final payments to producers before the beginning of the calendar year. These actions will be at the discretion of the new board. Canadian wheat and barley producers have historically received transportation subsidies that reduced shipping costs. 
The direct subsidies paid to the railroads under the 1983 Western Grain Transportation Act peaked at $925 million in 1986-87 and declined to $445 million in 1994-95, their last year. Since the CFTA took effect in 1989, the subsidy did not apply to shipments to the United States from Canadian west coast ports, but only to shipments traveling overseas and to the United States through Thunder Bay, Ontario. The cash subsidy was eliminated in 1995 due to internal budget constraints and Canada’s need to meet its obligations under the WTO agreements. With the removal of the subsidy in 1995, freight rates were capped until at least the year 2000. Owners of prairie land received a $1.2 billion payment, paid out in 1995-96, to compensate them for the removal of the subsidy. The CWB expects the end of the grain transportation subsidies to make the United States a more attractive export market. According to a CWB official, the shipping costs of moving grain to Canadian coastal ports more than doubled when the subsidy ended, while costs to ship to the United States were left unchanged. The observed impact of the removal of the subsidies has been obscured by concurrent factors that have influenced export volumes and destinations. The CWB official in charge of U.S. marketing explained that at the same time the end of the rail subsidies made the United States a more attractive market, the decline in U.S. usage of the USDA’s Export Enhancement Program was working in an offsetting fashion, making the United States a less attractive market. According to a Canadian transportation official, it is too soon after the changes in the rail subsidy to see the effect, but Transport Canada anticipates the changes will lead to greater shipments to the United States or increased Canadian grain processing.
During the same period in which the government rail subsidies ended, the CWB changed the way it computed the freight charges that it deducted from the payments it made to individual producers when they delivered their grain. This change in the pooling points used to compute freight rates raised the shipping costs of grain for producers in the eastern prairies, making the United States a more attractive market. Together, the subsidy elimination and the CWB’s changes in freight charges are estimated to have a significant impact on Canadian exports to the United States. The Producer Payment Panel, a Canadian government-appointed group representing industry, government, and academics, used an economic model of Canadian agriculture to estimate the impact. The panel estimated that the two changes would increase export shipments of wheat to the United States by 46 percent and barley by 44 percent. This estimate assumed that commodity flows are allowed to respond to market signals and do not face U.S. border restrictions or diversion by the CWB. The government of Canada gave notice in 1997 that by 2002, the fleet of government-owned rail hopper cars would be privatized. The privatization of these hoppers is expected to change the attractiveness of shipment to the U.S. market relative to shipments to Canadian ports in a way reminiscent of the impact of ending the direct transportation subsidies. Currently, railcars are provided to the industry without cost for rail movements east, north, or west, but charges are levied if the cars are used for shipments to the United States. According to a Canadian rail manager, applying the full cost of the railcars to rail shipments will make it relatively cheaper to move grain to the United States than to port positions once privatization takes place. Recently, the Canadian government began a review of the grain handling and transportation system, with completion scheduled by the end of 1998.
The grain review secretariat describes its scope as comprehensive, covering all handling and transportation actions between the farm bin and the loading of vessels for export. The objectives of the review are to ensure that the Canadian system meets the expectations of customers; maximizes system efficiency, competitiveness, and capacity utilization; provides cost-effectiveness; promotes necessary investment; and establishes roles, responsibilities, and accountability for each system participant. A Montana State University study reviewed changes underway in the Canadian grain handling and transportation system and concluded that any reduction in freight costs due to system improvements is unlikely to fully offset the large increase in shipping costs due to the end of the direct subsidies and the change in pooling costs. This suggests that changes in the transportation system will provide increased economic incentives for Canada to ship to the U.S. market. U.S. companies have been investing more heavily in the Canadian grain system in recent years, both in new infrastructure and in the commercial operations of the system. This shift in ownership reflects U.S. businesses’ interest in the Canadian grain system. For example, officials at ConAgra, Inc., told us that their company has invested in terminal operations in western Canada, and Archer-Daniels-Midland purchased a 43-percent share of a Canadian grain company, United Grain Growers. This change means the CWB interacts on more levels with U.S.-based companies within Canada. Historically, the CWB conducted much of its business on a state-to-state basis, especially with nonmarket economies, such as China; this trade involved working with other STEs. The decline of import-oriented STEs in other countries has changed the way the CWB does business with these countries, however. While some countries, such as China, still conduct business with the CWB through an STE, the majority of the CWB’s sales involve private companies.
According to CWB officials, sales to other STEs now comprise only 10 to 15 percent of the CWB’s entire business; at one time, this figure was as high as 35 percent. A Canadian farmers’ organization noted that private entities tend to prefer trading with other private entities as opposed to STEs, so the CWB increasingly uses its AEs to facilitate transactions with these private companies. As an STE, the CWB receives direct and indirect government support, and it has some flexibility in setting its export prices. Moreover, wheat and barley producers also enjoy other government subsidies. The CWB faces structural changes due to recently completed legislative reforms, and other changes in the Canadian grain marketing system are underway; however, it is unclear how these changes will affect the way that the CWB conducts its business. We received technical comments from USDA, USTR, ITC, the government of Canada, and the CWB on a draft of this chapter. We incorporated their suggestions where appropriate. The United States collects extensive information on imports of Canadian grain, including the value of each shipment entering the United States, but because this information lacks important details on such aspects of the grain as quality, it reveals little about the CWB’s prices. The CWB discloses only limited information about its prices for the wheat and barley that it sells to its trading partners. U.S. officials believe that the lack of transparency in the CWB’s prices may provide it with more flexibility than is found among private grain traders. The CWB states that it reveals as much about its prices as its competitors in the private sector do. U.S. government officials and U.S. farmers believe that nontransparent CWB prices make it difficult to assess whether the CWB’s practices are consistent with its international obligations under trade agreements.
Officials from USDA and Customs—agencies that gather import data—are discussing the possibility of collecting more details on Canadian grain prices; however, they acknowledge that much of the information necessary to determine pricing practices would be difficult for Customs to readily collect at the border. The United States is also working through the WTO to increase the amount of information STEs, such as the CWB, must report on pricing and other activities. Thus far, the United States’ and other countries’ efforts to expand STE reporting requirements on pricing have had limited success. For example, while the WTO has recently introduced a new format for STE reporting that requires more information on STE pricing practices, that format does not go as far as the United States would like in increasing the pricing transparency of STEs such as the CWB. The United States and other grain trading nations, and members of the U.S. grain industry, are concerned that without greater transparency, the CWB may be able to price its grain exports unfairly. The CWB makes public some aspects of its pricing methods, such as its use of prices on U.S. grain markets as the basis for sales into the United States. However, the CWB declines to reveal other important information, such as the actual contract prices for its sales to foreign grain buyers. The nontransparency makes it difficult for farmers to review CWB operations or to identify contract prices for individual sales. These contract prices could be useful in determining whether the CWB is engaging in pricing practices for which a trade remedy would be available, either through dispute settlement procedures under international agreements or through U.S. trade law. The CWB has recently made its contract records available to economists performing reviews of the CWB’s marketing performance under contracts with the CWB. 
The CWB and the government of Canada defend the lack of full price transparency by noting that the CWB behaves like other private sector grain companies that also do not reveal their sales prices. The CWB believes that it should not be held to higher pricing disclosure standards than its competitors in the private sector. The CWB reports that revealing transactional data violates its confidentiality agreement with its customers. Some aspects of CWB pricing are better known than others. The CWB allocates sales between Canada, the United States, the Caribbean and Latin America, Europe, the Middle East, the former Soviet Union, and the Asia-Pacific area based on expected returns, using different pricing strategies in different markets. According to CWB officials, prices for sales into the Canadian and U.S. markets are based on the trading values of grain on the MGE, adjusted for commercial freight. During hours when the MGE is closed, the price is based on the prior day’s closing price. The CWB sends its daily mill closing price to all mill customers, and it is published in several publications. According to the CWB, the objective of this strategy is to assure that Canadian domestic millers have grain prices that are consistent with what the wheat price would be in an open market. A Canadian grain company confirmed that sales to Canadian consumers are based on the MGE prices. The CWB’s domestic sales practices have created substantial domestic price transparency, although quantity information is only provided on an aggregate annual basis. The CWB established a program that provides daily price quotes based on the MGE’s futures and cash markets for Canadian farmers wishing to market directly to the United States. These prices are posted daily for CWB producers. Without access to CWB transactional data, we were not able to confirm that sales into the United States were at the published prices. Several U.S. 
companies we spoke with reported that the CWB appeared to use prices based on the MGE. CWB sales to other markets are less transparent, with the CWB reporting that sales reflect conditions of supply and demand in the various export markets. The CWB publishes daily export prices for its grains that are available at various port locations along the St. Lawrence Seaway and in-store at Vancouver, British Columbia. According to CWB officials, these “card” prices represent the prices paid by the top-paying customers for top grain and account for 10-12 percent of CWB sales volume. Of the 1.7 million to 1.9 million tons sold at “card” prices, about 1.4 million tons were sold to Japan. In other cases, where grain purchases are made through public tender, similar transparency is achieved. Remaining CWB sales are nontransparent. There are mixed views on CWB price transparency. Some U.S. government officials and U.S. farmers believe that nontransparent CWB prices make it difficult to assess whether the CWB’s practices are consistent with its international obligations under trade agreements. Representatives of the Western Canadian Wheat Growers Association reported that the lack of transparency hampers their ability to evaluate the performance of the CWB as their grain marketer. Moreover, critics of the CWB believe that it is erroneous to compare a government-sponsored monopoly with a private company. In parliamentary hearings, some farmers testified that they do not trust the CWB, which they believe lacks transparency and accountability, and advocate that the Auditor General of Canada be the auditor of the CWB. Officials at USDA emphasize that U.S. export prices are more readily available than those of Canada and other exporters. As an example, they cite an analysis of reported sales and export prices published by the International Grains Council that they prepared for the Canada-United States Joint Commission on Grains.
They found that for a study period during the early 1990s, there were over 1,000 entries for the United States, 220 for the European Union, 71 for Canada, 44 for Argentina, and 11 for Australia on reported sales of grain and products. Several grain industry experts believe that differences in price transparency between the U.S. and Canadian systems may give the CWB strategic advantages when compared to private grain traders. Studies prepared for the Canada-United States Joint Commission on Grains highlighted differences in price transparency between the grain systems of the two countries. According to one analysis, private export and domestic grain pricing in the United States is done in a transparent manner, with government-subsidized export sales also open to public view, while in Canada the prices of export sales by the CWB are closely held, though the price of grain in the domestic Canadian market is transparent. This difference in transparency between the U.S. and Canadian pricing practices places U.S. firms at a strategic disadvantage when competing with the CWB. Another Commission analysis concluded that the single-desk seller, with more knowledge of pricing behavior by U.S. firms, can win more bids and can expect to earn a higher profit. Compared with private firms, the CWB further benefits from its monopoly sourcing requirement. Because it does not compete in procuring grain, it has the ability to undertake longer contracts and has a larger margin between the price at which it acquires its product and the price it asks for its grain. This advantage in sourcing gives the CWB considerable flexibility and latitude in pricing in comparison with private firms. The CWB, however, disputes this and believes that it may have a competitive disadvantage in grain procurement, since private traders know the CWB acquisition cost and can use this information as a competitive advantage over the CWB. U.S.
concerns that the CWB has an unfair pricing advantage have led to efforts to increase the transparency of CWB pricing practices. USDA and Customs officials are discussing the possibility of collecting more detailed price information on Canadian wheat imports. The data that Customs currently collects on Canadian grain, as well as on all other imports, lack the transactional detail necessary to be useful in determining whether the CWB engages in pricing practices for which a trade remedy may be available. For example, this information could be important to the United States in determining whether the CWB’s exports to the United States are priced below the acquisition price, and thus, not consistent with Canada’s obligations under CFTA and NAFTA. However, given the limited availability of detailed contract information, USDA officials acknowledge that expanded data collection at the Canadian border would be of only limited benefit in revealing CWB pricing practices, because much of the information necessary to determine these practices cannot be readily collected by Customs. The United States is also working through the WTO to increase the amount of pricing information the CWB and other STEs are required to submit in notifications to the WTO on their activities. So far, the United States has had limited success in achieving this objective. The WTO has changed its STE reporting format to include more information on pricing, but the United States does not believe that the changes are sufficient to determine if the CWB and other STEs are engaging in improper pricing. In general, the information on the value of imported merchandise collected by Customs on the entry forms submitted by importers is used to calculate duties as well as to compile U.S. trade statistics (see app. IV for a detailed discussion of processes for collecting and compiling import data). However, most imports from Canada, including wheat and barley, are duty free under NAFTA. 
Therefore, the value information collected by Customs on Canadian wheat and barley shipments is used only for statistical purposes. Census aggregates this value information to show the total value of wheat and barley entering the United States from Canada. Further aggregations allow Census to determine the U.S. trade balance with Canada as well as the overall U.S. trade balance. While the value information collected by Customs is used for calculating duties and compiling trade data, there are several reasons why this information lacks the detail that would be useful to determine whether the CWB is engaging in pricing practices for which a remedy may be available, either through use of dispute settlement procedures or through U.S. law. The entered value data currently collected by Customs do not provide specific detail on all of the elements affecting the price of a shipment, for example, protein content and payment terms. According to the Department of Commerce, additional information regarding imports may be useful in assessing whether foreign exporters are engaging in unfair pricing practices. However, Commerce officials state that the lack of additional information should not prevent a domestic industry from seeking relief under U.S. trade laws since petitions filed under those laws, in most cases, need not provide specific price information. Commerce states that while pricing information is an important element of a dumping petition, such information has been compiled from a variety of alternative sources by the petitioning industry. Moreover, Customs’ aggregate data have often been used by domestic industries when filing an antidumping petition. In addition, the value information collected by Customs is of only limited use in determining the competitiveness of the price of Canadian grain in the U.S. market. In order to make an accurate determination of the price of Canadian grain, detailed information about the nature of the grain is required.
For example, wheat’s protein and moisture content as well as the amount of foreign material it contains are important pricing determinants in the wheat market. The price of durum wheat, for instance, can vary significantly depending on its protein level. In the 1996-97 marketing year, the CWB paid Canadian farmers $198 for a metric ton of a particular grade of durum wheat at a 14-percent protein level as compared to $188 for a metric ton of the same grade of wheat at a 12.5-percent protein level. Customs’ import entry form does not require this level of detail. The form only requires that wheat be classified according to the Harmonized Tariff Schedule (HTS), which provides separate classifications for varieties of wheat such as durum and red spring. The HTS also divides some wheat varieties, such as red spring, into broad grade categories. Customs’ entry information, therefore, can only be used to estimate the border price per metric ton of varieties of Canadian wheat, such as durum. Without the detailed quality information, Canadian wheat import prices are not comparable to U.S. market prices. Customs’ entry information also does not reflect important information about the contract between the exporter and importer, which is necessary for determining the competitiveness of the price of Canadian grain entering the U.S. market. For example, the value of a shipment of durum wheat entering the United States from Canada in October could be based on a contract that was signed in February. The contract price is usually based on the current price of durum on the Minneapolis grain market. The market price of durum wheat can vary considerably from month to month. Therefore, a direct comparison of Canadian durum import prices and market prices on the date of importation is often inappropriate. Moreover, Customs’ entry form does not require information on payment terms, such as credit.
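The aggregation problem described above can be illustrated with simple arithmetic. The per-ton prices at the two protein levels ($198 at 14 percent, $188 at 12.5 percent) come from the report; the shipment volumes below are hypothetical, chosen only to show how a single Customs-style unit value blends away the quality premium:

```python
# Illustrative sketch: why aggregate entry values mask quality-based price
# differences. Per-ton prices at two protein levels are from the report;
# shipment volumes are hypothetical.

# (metric tons, price per metric ton, protein %)
shipments = [
    (500, 198.0, 14.0),   # higher-protein durum
    (300, 188.0, 12.5),   # lower-protein durum, same grade
]

total_value = sum(tons * price for tons, price, _ in shipments)
total_tons = sum(tons for tons, _, _ in shipments)

# Customs/Census-style aggregation yields one blended unit value...
blended_unit_value = total_value / total_tons

# ...which falls between the protein-specific prices and identifies neither.
print(f"blended: ${blended_unit_value:.2f}/t vs. $198/t and $188/t actual")
```

The blended figure ($194.25 per ton for these assumed volumes) is what an analyst could back out of aggregate entry data, but it matches neither protein-specific market price, which is why such estimates are not comparable to U.S. market quotes.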
Customs and USDA recently began discussions on increasing the amount of information Customs collects on wheat shipments. USDA has asked Customs to consider the feasibility of collecting more detail on wheat protein levels on Customs’ entry forms. According to Customs, some invoices for wheat presented to Customs at entry contain quality and protein information. Customs can also obtain such information by sending a “request for information” (form CF28) to the importer, although the information is not part of Customs’ automated reporting system, and, thus, not publicly available. USDA wants the HTS classifications for wheat to be expanded to take into account variations in protein levels. Customs told us that the HTS classifies durum and some other types of wheat merely by the name of the wheat, with no consideration of various quality levels being imported. Currently, the HTS only allows for reporting varying grades for red spring wheat. USDA hopes that expanding the HTS classification for wheat would allow it to better estimate the price of Canadian wheat entering the U.S. market. However, USDA acknowledges that even if such an expansion of the HTS occurs, estimates of Canadian wheat prices entering the United States would still be limited. USDA notes that these estimates would still lack important wheat pricing information such as moisture and foreign substance content, as well as contract details such as the prevailing market price at the time of the contract. In addition, USDA does not believe it is feasible for Customs to collect such detailed information on the automated import entry form. Customs officials expect opposition to such changes from some importers and Customs brokers who would consider the additional information requirements to be burdensome. The issue of the potential trade-distorting practices and lack of transparency of STEs, such as the CWB, is being discussed multilaterally through the WTO. 
The WTO will soon implement a new questionnaire for collecting information on STE activities, including their pricing practices. In negotiations on the format of the new questionnaire, the United States argued strongly for adding questions that would help bring greater transparency to the pricing activities of STEs and therefore be of use in determining whether STEs such as the CWB are engaging in improper pricing. U.S. officials believe that the new questionnaire does not go far enough in increasing the pricing transparency of the CWB and other STEs. The United States plans to continue pursuing the issue of STE transparency in the WTO. The WTO agreement provided for the creation of a Working Party on STEs tasked with ensuring and, in the long run, improving the transparency of STE activities. The agreement also established a formal STE definition. The Working Party has allowed the WTO to better track the activities of STEs but has had mixed success in increasing their transparency. One of the tasks of the Working Party has been to revise the WTO’s questionnaire on STEs. After over 2 years of negotiation, in April 1998 the Working Party reached agreement on a revised questionnaire that will now be used as the basis for WTO members’ STE notifications (for more information on WTO members’ recent STE notifications, see app. V). While the new questionnaire requests more descriptive information about the functioning of WTO members’ STEs and additional quantitative information on STE import and export activities, it may not increase the transparency of the CWB. Obtaining detailed information on STE pricing activities has been a major U.S. objective for revising the questionnaire because U.S. officials think that such data may assist the United States in determining whether certain STEs use their special status to operate unfairly. To this end, the United States forwarded several proposals to the Working Party.
One proposal requested that members be asked to provide transaction-level pricing data on an ad hoc basis; that is, at the request of another member. Another proposal asked that members provide, on a quarterly basis, information on average prices for STE imports and exports, broken down by country of origin or destination. However, while some Working Party members supported these proposals, the United States faced significant opposition from others, including Canada. Opposing WTO members were concerned about providing commercially confidential information and about the possible administrative burden this would impose on countries. Ultimately, the Working Party could not agree that this more detailed pricing data should be provided; instead, the new questionnaire asks for information on STEs’ average annual prices for individual commodities. U.S. government and WTO officials believe that the new questionnaire represents an improvement over the 1960 questionnaire. According to USDA and USTR officials, the new version provides greater “organizational clarity” for WTO members to make their notifications. Specifically, the new questionnaire sets forth detailed guidelines and a structure for presenting descriptive and statistical information on members’ STE practices. In addition, the questionnaire now asks members to submit information on their STEs’ domestic pricing practices, something that was not previously required. The new questionnaire may be particularly useful in obtaining more information on the STE practices of prospective WTO members with market transition economies. In these countries, such as China or Russia, public information on their STEs is not always readily available.
According to a WTO Secretariat official, the new questionnaire asks for information on STE practices in a much clearer way; the more precise language in the new questionnaire will “discourage one-line answers” and may result in less variation in the level of detail members provide on their STEs. Other Working Party members we spoke with echoed this view. According to USDA officials, whether the new questionnaire is able to bring greater transparency specifically to the CWB’s activities will largely depend on how Canada responds to the questionnaire. USDA officials told us that if Canada acts in the spirit of the new questionnaire, Canada could use its new notification to provide new information on the CWB’s activities. However, one USDA official also told us that the new questionnaire may provide virtually no new information about CWB pricing. This official told us that the CWB has provided much more information and transparency on domestic prices in recent years; Canada now publishes prices daily in various publications, and these data are essentially the same kind of price data that are available in the United States for U.S. grain prices. Therefore, according to this official, the new questionnaire does not necessarily provide for new information on Canada’s domestic grain prices. In addition, this official could not identify any sections of the new questionnaire that would definitely constitute new information that is not in the public domain with respect to the CWB. Canadian officials did not respond to our request to identify areas where the new questionnaire will require them to provide additional information on the CWB. In addition to the questionnaire, the Working Party has been developing an illustrative list of state trading activities. This list will be used in conjunction with the new questionnaire to help WTO members identify what entities in their trading regimes should be reported in the notification questionnaire. 
Working Party members have agreed to continue work defining possible further information needed to enhance the transparency of STEs. According to one WTO member we spoke with, it may be appropriate for the Working Party to review the adequacy of the new questionnaire and the list after a few years, because countries may initially “report the bare minimum” in the first trial run of the questionnaire. However, the Working Party has met only twice since the questionnaire was approved and, according to the WTO Secretariat, has not begun discussing any future work program beyond continuing the review of notifications and finalizing the illustrative list. USDA and USTR officials told us they intend to pursue obtaining greater transparency of STEs through the Working Party, but they have not yet outlined their strategy for doing so. These officials stated that the United States is beginning to “think beyond transparency” about the need to develop disciplines on STEs in agriculture; they anticipate that STEs will be a significant focus in the WTO negotiations on agriculture set to begin at the end of 1999 or shortly thereafter. While the United States collects extensive Canadian grain import data, it has been unable to shed light on the CWB’s grain export prices. Although the existing grain import data are insufficient to determine CWB export prices, expanding the amount of data collected on Canadian wheat import shipments would be difficult and may not provide the desired pricing information. The United States has worked closely with the STE Working Party at the WTO but has had limited success in increasing STE pricing transparency in that forum, as well. In comments on our draft report, USDA emphasized that, in its view, the CWB, as the sole buyer of Canadian wheat for domestic human consumption and for export, is able to engage in trade-distorting actions.
USDA also said that we did not sufficiently emphasize the CWB’s pricing flexibility that comes from its practice of making initial payments to its farmers of only 70-75 percent of the expected value of their grain. We believe that point was sufficiently established in the draft. However, we did not attempt to quantify the impact of CWB activities on grain trade. The CWB did not agree with GAO’s assertion that the CWB has flexibility in pricing compared to private firms because it does not compete to procure grain. Rather, the CWB believes that it has a competitive disadvantage in obtaining grain because private grain traders know the CWB’s acquisition cost. GAO included the CWB’s statement in its discussion of this issue. The CWB and other STEs are allowable under the WTO Agreements and NAFTA. They operate throughout the world in various forms and for various purposes. Trade remedies have not been fashioned specifically to deal with imports from STEs. However, products imported into the United States that are manufactured, produced, or marketed by STEs are subject to the same laws regulating imports as any other product, including laws that restrict imports or provide remedies to U.S. industry competing with unfairly traded goods. We asked the U.S. government entities charged with enforcing international trade agreements and U.S. trade laws, including USTR, Commerce, and the ITC, to search their records from 1980 to the present to determine how trade laws have been applied to STEs. We found a total of 15 trade remedy actions involving an STE, the most recent taken in 1995. Some of the trade remedy actions resulted in increased duties, while others prompted the country in which the import STE was operating to remove import restrictions that limited U.S. exports. Trade remedies under U.S. law allow government and private parties to seek redress for disruptive, trade-distorting, or unfair trade practices. 
The United States has various trade remedy laws at its disposal to deal with trade issues such as dumping, actionable subsidies, or increased imports causing domestic injury. Dispute settlement provisions under international trade agreements, including the WTO and NAFTA, provide a means of seeking relief from measures or actions taken by other governments, which could include actions by STEs. The following is a brief summary of the relevant U.S. laws (for a more detailed description of each trade remedy, see app. VI). Title VII of the Tariff Act of 1930, as amended, provides the most common means of dealing with antidumping issues. Under title VII, private parties can petition the Department of Commerce and the ITC on behalf of a U.S. industry to determine whether a class or kind of merchandise is being sold in the United States at “dumped” prices and whether a U.S. industry is materially injured or threatened with material injury by reason of such dumped imports. If the agencies find that both dumping and injury or threat of injury exist, Commerce then calculates the amount of duties to be imposed on each importer to offset the price difference between the U.S. price and the normal value of the imported merchandise. Title VII of the Tariff Act of 1930, as amended, also provides for the imposition of countervailing (or equalizing) duties whenever a government or public entity provides certain subsidies for the manufacture, production, or export of articles subsequently imported into the United States and a U.S. industry is materially injured or threatened with material injury by reason of such subsidized imports. As in the case of antidumping law, petitions are filed with Commerce and the ITC, and countervailing duties are imposed if Commerce finds a subsidy and the ITC finds that a U.S. industry is materially injured or threatened with material injury by reason of such subsidized imports.
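The core title VII arithmetic, duties calibrated to offset the gap between normal value and the U.S. price, can be sketched in a few lines. This is a deliberately simplified illustration with hypothetical prices; actual Commerce methodology involves many adjustments (movement expenses, level of trade, averaging rules) not shown here:

```python
# Simplified sketch of an antidumping margin calculation under title VII.
# All dollar figures are hypothetical; real determinations involve many
# adjustments beyond this basic comparison.

def dumping_margin(normal_value: float, us_price: float) -> float:
    """Margin as a percent of the U.S. price; zero if there is no dumping."""
    return max(0.0, (normal_value - us_price) / us_price * 100)

# Hypothetical: home-market (normal) value of $250/ton, U.S. sale at $200/ton
margin = dumping_margin(250.0, 200.0)
print(f"duty of {margin:.1f}% ad valorem offsets the price difference")
```

With these assumed figures the margin works out to 25 percent: a duty of that size raises the $200 U.S. price back to the $250 normal value.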
Under section 332 of the Tariff Act of 1930, as amended, the ITC has broad authority to investigate matters pertaining to U.S. customs laws, foreign competition with domestic industry, and international trade relations. Most ITC investigations under section 332 are conducted at the request of USTR or the House Committee on Ways and Means or the Senate Committee on Finance. Section 22 of the Agricultural Adjustment Act of 1933, as amended, authorizes the President to impose fees or quotas on imported products that undermine any USDA domestic commodity support or stabilization program. Since 1995, such actions may be applied only against imports from non-WTO countries. Sections 201 to 204 of the Trade Act of 1974, as amended, authorize the ITC to conduct investigations concerning whether an article is being imported into the United States in such increased quantities as to be a substantial cause of serious injury, or the threat thereof, to the domestic industry producing a like or directly competitive article. If the ITC makes an affirmative determination, it recommends a remedy to the President, who makes the final decision as to whether to impose a remedy and, if so, in what form and amount. Remedies generally take the form of increased tariffs and import quotas. Sections 301-309 of the Trade Act of 1974, as amended, commonly referred to as “Section 301,” give the President broad discretion to enforce U.S. trade rights granted by trade agreements and to attempt to eliminate acts, policies, or practices of a foreign government that violate a trade agreement or are unjustifiable, discriminatory, or unreasonable and burdensome or restrict U.S. commerce. The CWB and other STEs have been involved in investigations under U.S. trade law, and their activities have been the subject of formal disputes under international trade agreements. As shown in table 4.1, the CWB has been involved in three different trade remedy actions since 1980.
The ITC conducted two investigations involving U.S. imports of Canadian grain, and USTR reported one CFTA dispute settlement action about CWB export pricing. As for other STEs, the Department of Commerce found that five STEs had been involved in antidumping or countervailing duty investigations or reviews. The ITC reported no actions involving STEs under sections 201 to 204. USTR identified three Section 301 investigations involving STEs, which led to three GATT dispute settlement procedures. USTR found one additional dispute settlement procedure involving an STE, not preceded by a Section 301 investigation. USTR found no WTO or NAFTA disputes involving STEs. All of the GATT dispute settlement cases involved restrictive practices of import STEs, whose actions impeded U.S. exporters’ access to foreign markets. The CWB was involved in three trade remedy actions, including two ITC investigations and one CFTA formal dispute. The ITC investigations both involved Canadian wheat imports into the United States—one looking at the competitiveness of the two markets, and the other examining the effect of those imports on U.S. farm sector support programs. The CFTA dispute involved the interpretation of provisions in the agreement on export prices of agricultural goods. The increase in imports of Canadian grain to the United States prompted two ITC investigations. In 1990, the ITC began a section 332 investigation on the conditions of competition between the U.S. and Canadian durum wheat industries. The ITC found that it was not apparent that prices paid by U.S. processors during 1986-89 for Canadian durum wheat were significantly different than prices paid for similar quality U.S. durum. 
At the President’s request, the ITC launched an investigation in January 1994 under section 22 of the Agricultural Adjustment Act to determine whether wheat, wheat flour, and semolina were being imported into the United States under such conditions and such quantities as to “render or tend to render ineffective, or materially interfere with, the price support, payment and production adjustment program conducted by. . .” USDA for wheat. Canada was the principal source of wheat imports into the United States; Canadian and U.S. production were the two most important sources of supply. The ITC completed its section 22 review in July 1994. The six commissioners rendered a split decision: three commissioners found that wheat was not being imported under such conditions and in such quantities as to materially interfere with USDA wheat programs; three commissioners found that wheat was being imported under such conditions and in such quantities as to materially interfere with USDA wheat programs. The commissioners had differing recommendations. Previous negotiations and the ITC investigation resulted in a 1-year MOU between the two countries (see app. I). In May 1992, the United States requested that a binational dispute panel under CFTA consider pricing policies for Canadian durum wheat exports. The United States believed that Canada was acting contrary to the CFTA requirement that neither country export agricultural goods to the other country at a price below the acquisition price. Among other things, the panel was asked to determine whether the acquisition price included solely the initial payments made to farmers by the CWB—the Canadian position—or all payments made to farmers with respect to a durum wheat crop (initial plus interim and final payments, if any)—the U.S. position. The United States was concerned that the Canadians’ narrower definition would allow Canada to undercut U.S. grain prices and still meet the terms of CFTA.
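The stakes of the definitional question can be made concrete with invented numbers. Under the Canadian position, the export-price floor is only the initial payment; under the U.S. position, it is the sum of all payments to farmers. The figures below are entirely hypothetical, chosen only to show how the same export price can clear one floor and fall below the other:

```python
# Hypothetical figures illustrating why the CFTA panel's definitional
# question mattered. All dollar amounts are invented for illustration.

initial_payment = 120.0        # CWB initial payment to farmers, per ton
interim_and_final = 50.0       # later payments per ton, if any

floor_canadian_view = initial_payment                    # initial payment only
floor_us_view = initial_payment + interim_and_final      # all payments

export_price = 130.0           # a hypothetical CWB export price per ton

# The same sale clears the Canadian-defined floor but not the U.S.-defined one.
print(export_price >= floor_canadian_view)   # permitted under Canada's reading
print(export_price >= floor_us_view)         # below floor under the U.S. reading
```

This gap between the two floors ($50 per ton in this sketch) is precisely the room the United States worried would let Canada undercut U.S. grain prices while still meeting CFTA's letter.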
The panel’s final report supported Canada’s definition of the term but stated that it was not possible or desirable for the panel to determine whether the CWB had violated the CFTA provision. However, the panel recommended that a bilateral working group be established for the general purpose of overseeing an audit of the CWB. An initial audit was conducted, which found that of the 105 contracts for durum wheat sales to the United States that the CWB signed and completed from January 1, 1989, to July 31, 1992, 3 contracts were not in compliance with the CFTA prohibition against selling below acquisition price. By volume, the 3 contracts represented 13,985 metric tons in durum wheat sales, or 1.34 percent of the 1.04 million metric tons in total sales for the 105 contracts. STEs have been involved in trade remedy actions under U.S. law. These actions involved both products exported to the United States through STEs and STEs’ alleged unfair trade practices restricting U.S. exporters’ access to foreign markets. Commerce reported that it had conducted one antidumping investigation and four countervailing duty investigations involving STEs (see table 4.2). The 1990-91 antidumping investigation was prompted by a petition from the Ad Hoc Committee for Fair Trade of the California Kiwifruit Commission. Commerce found that fresh kiwifruit was being dumped at less than fair value by a New Zealand STE, the New Zealand Kiwifruit Marketing Board, through which all New Zealand kiwifruit for export must pass, except for such exports to Australia. The ITC then found that the U.S. industry was injured or threatened with injury by the imports. As a result, the United States imposed a dumping duty of 98.6 percent on those imports, effective November 1991.
In two of the four countervailing duty investigations, Commerce concluded that the STEs involved, the New Zealand Meat Producers Board in one case and the Turkish Grain Board in the other, were providing actionable subsidies. Commerce subsequently required Customs to levy a countervailing duty on imports from these two STEs. In the two remaining countervailing duty investigations, the petitioners terminated one, and the ITC found no injury in the other; therefore, no countervailing duties were levied. A Commerce official reiterated that Commerce does not make a determination regarding STE status and that STEs are not accorded special treatment or recognition under U.S. antidumping and countervailing duty law. However, Commerce said that to the extent that an STE sells goods into the United States that are dumped or unfairly subsidized, U.S. antidumping and countervailing duty laws could provide a potential remedy. The official cautioned that it was difficult to determine conclusively which STEs had been the subject of antidumping or countervailing duty investigations or reviews, due to the length and complexity of the cases, the possible involvement of an STE in a case in an indirect or insubstantial way, and potential difficulties in translating foreign STE names. USTR initiated a Section 301 investigation in June 1990 related to alleged discriminatory distribution and pricing practices of the provincial liquor boards of Ontario, including an STE, the Ontario Liquor Control Board. The USTR investigation was prompted by petitions filed by two U.S. brewing companies. These practices included listing requirements, discriminatory mark-ups, and restrictions upon distribution. After negotiations between Canada and the United States failed to resolve the issue, USTR requested that a GATT dispute settlement panel examine Canadian practices.
In October 1991, the GATT panel reported that many of the Canadian practices were inconsistent with GATT prohibitions on quantitative restrictions and recommended they be removed. Because Canada did not discontinue the practices, in December of that year USTR determined that, consistent with the GATT panel finding, duties should be increased on beer and malt beverages from Canada. After resuming negotiations with Canada and again failing to reach agreement, the United States did increase duties by 50 percent ad valorem. Canada responded in kind by raising duties on beer imports from the two U.S. brewing companies that had originally submitted the Section 301 petitions. USTR initiated new negotiations with Canada in 1993, and the two countries ultimately signed an MOU in August of 1993. The MOU, among other things, increased U.S. brewers’ access to Canadian stores, reduced the Ontario Liquor Control Board’s fees for handling U.S. beer, and removed the duties imposed earlier. The U.S. Cigarette Export Association filed a Section 301 petition in 1989 alleging that the Royal Thai Government and its state-controlled import monopoly and STE, the Thailand Tobacco Monopoly, engaged in taxing and licensing practices that effectively prohibited the importation and sale of cigarettes into Thailand. USTR initiated a Section 301 investigation and, on February 5, 1990, requested that a GATT panel be formed to consider the issue. The GATT panel issued a report in September 1990 concluding that Thai cigarette import restrictions violated GATT prohibitions on quantitative restrictions. The panel recommended that Thailand bring its practices into conformity with its obligations under GATT. In October 1990, the Thai government said that it would remove its import restrictions and, in response, USTR terminated the Section 301 investigation.
The United States challenged quantitative import restrictions imposed by Korea’s STE, the Livestock Products Marketing Organization, arguing that they violated GATT prohibitions against quantitative restrictions. The United States also argued that the very existence of an import monopoly controlled by domestic producers constituted a prohibited import restriction. The GATT panel concluded that the existence of a producer-controlled monopoly was not a GATT violation but found that the Livestock Products Marketing Organization import restrictions did violate GATT prohibitions against import restrictions. In 1989, a GATT panel report was adopted recommending that the two countries consult and that Korea conform to GATT. Pursuant to the panel recommendation, the United States and Korea signed a bilateral agreement. Two subsequent agreements, one in 1993 and one under the Uruguay Round, were entered into force to achieve free market conditions for the importation and distribution of U.S. beef in Korea. The two countries meet quarterly to ensure full implementation of the beef agreement provisions. In 1988, a GATT dispute settlement panel report was adopted in another case. The United States alleged that a variety of quantitative restrictions maintained by Japan on 11 agricultural categories were inconsistent with Japan’s GATT obligations. Some of these restrictions were imposed by an STE, the Livestock Industry Promotion Corporation. The Livestock Industry Promotion Corporation is an import monopoly that regulates imports of beef and certain dairy products, including condensed skim milk, whole milk powder, skimmed milk powder, whey powder, and buttermilk powder, into Japan. Japan acknowledged that the Livestock Industry Promotion Corporation maintained import restrictions but argued that the GATT prohibition on quantitative restrictions did not apply to state-trading monopolies.
The dispute settlement panel concluded that GATT provisions, including those prohibiting quantitative restrictions, apply to all import restrictions, whether or not they are instituted through quotas or by STEs. The panel further ruled that while GATT permits measures such as those limiting private imports necessary to enforce the exclusive trading rights of import monopolies, it does not permit quantitative restrictions otherwise inconsistent with GATT obligations. On August 2, 1988, the United States and Japan signed an agreement to resolve the GATT dispute. Japan partially lifted its quotas and provided increased access as compensation. Japan eliminated quotas on 7 of the 11 product categories by April 1, 1990. While relatively few trade remedy actions have been taken involving STEs (15 since 1980), some of these actions have resulted in increased duties on imports found to be injurious to U.S. industry. In addition, the United States prevailed in all of the GATT dispute settlement cases involving STEs. In every case, the GATT dispute panel found that the import STE’s restrictions limiting U.S. exports had violated GATT rules. A wide range of trade remedies can be applied to STEs, including those seeking redress for dumping, actionable subsidies, and injurious surges in imports, and those seeking relief, through dispute settlement, from actions taken by other governments as well as by STEs. We received technical comments from USDA, USTR, the U.S. Department of Commerce, the U.S. Department of Justice, and the ITC on a draft of this chapter. We incorporated their suggestions, where appropriate. | Pursuant to a congressional request, GAO provided information on Canadian grain exports to the United States, focusing on the operations of the Canadian Wheat Board (CWB) and the trade remedies applicable to the activities of state trading enterprises (STE). 
GAO noted that: (1) the CWB is an STE with a monopoly on certain Canadian grain sales and receives Canadian government subsidies in a number of direct and indirect ways; (2) the Canadian government also provides other assistance to its wheat and barley farmers; (3) the CWB's operating environment is undergoing changes, some of which are expected to make the United States a more attractive market for Canadian grain; (4) at the same time, there is a greater presence of U.S. grain companies operating in Canada, and the CWB is dealing more frequently with private companies in the sale of Canadian grain; (5) little information on actual CWB contracts is publicly available; (6) although the U.S. Customs Service and the Department of Agriculture collect a great deal of information on imports of Canadian grain into the United States, these data cannot be used to ascertain CWB export prices; (7) the format that countries use to report on their STEs' activities to the World Trade Organization has recently been revised; (8) however, U.S. officials are concerned that it does not go far enough to increase the openness of the pricing practices of certain STEs, such as the CWB; (9) trade remedies to combat disruptive or trade-distorting imports under U.S. trade laws do not treat STEs any differently from other entities involved in international trade; (10) these U.S. trade laws can address trade issues such as dumping, actionable subsidies, and surges in imports; (11) in addition, STE activities may be subject to dispute settlement provisions under international trade agreements if the activities are inconsistent with an obligation agreed to by the government of the STE; and (12) relatively few trade remedy actions have been taken involving STEs. |
Challenges we identified with disaster resilience as long ago as 1980 have persisted and were reflected in our work on disaster mitigation in 2007, as well as in recent studies such as a 2012 National Academies National Research Council (NRC) study on disaster resilience. We testified in January 1998 that, for a number of reasons, state and local governments may be reluctant to invest in resilience-building efforts. For example, leaders may be concerned that hazard mitigation activities will detract from economic development goals and may perceive that mitigation is costly and involves solutions that are overly technical and complex. In our work on hazard mitigation issued in August 2007, we found that these issues persisted. We reported that hazard mitigation goals and local economic interests often conflict, and the resulting tension can often have a profound effect on mitigation efforts. For example, we reported that community goals such as building housing and promoting economic development may be higher priorities than formulating mitigation regulations that may include restrictive development regulations and more stringent building codes. In particular, local government officials we contacted as part of that work commented that developers often want to increase growth in hazard-prone areas (e.g., along the coast or in floodplains) to support economic development. These areas are often desirable for residences and businesses, and such development increases local tax revenues but is generally in conflict with mitigation goals. In 2012, the NRC issued a report on disaster resilience, noting that understanding, managing, and reducing disaster risks provide a foundation for building resilience to disasters. 
Risk management—both personal and collective—is important in the resilience context because the perceptions of and choices about risk shape how individuals, groups, and public- and private-sector organizations behave, how they respond during and after a disaster event, and how they plan for future disasters. However, the National Academies report described a variety of challenges that affect risk management. As with our 1998 and 2007 work, one of the key challenges the NRC reported for state and local governments was reluctance to limit economic development with resilience measures. We testified in January 1998 that individuals may also lack incentives to take resilience-building measures. We noted that increasing the awareness of the hazards associated with living in a certain area or previous experience with disasters do not necessarily persuade individuals to take preventive measures against future disasters. Residents of hazard-prone areas tend to treat the possibility of a disaster’s occurrence as sufficiently low to permit them to ignore the consequences. We have also reported that the availability of federal assistance may inhibit actions to mitigate disaster losses. As long ago as 1980, we reported that individuals may not act to protect themselves from the effects of severe weather if they believe the federal government will eventually help pay for their losses. The 1993 National Performance Review also found that the availability of post-disaster federal funds may reduce incentives for mitigation. Moreover, FEMA’s 1993 review of the National Earthquake Hazards Reduction Program concluded that at the state level there is “the expectation that federal disaster assistance will address the problem after the event.” 
Concerns about individuals’ ability to appropriately evaluate risk and take action to protect themselves continued in our August 2007 work, when we reported that individuals often have a misperception that natural hazard events will not occur in their community and are not interested in learning of the likelihood of an event occurring. Likewise, the 2012 NRC report on disaster resilience identified the key risk management challenge for homeowners and businesses in hazard-prone areas as the fact that they may be unaware of or may underestimate the hazards that they face. In January 1998, we described three sets of issues that complicate assessing the cost-effectiveness of actions to build resilience. First, we noted that, by definition, natural hazard mitigation reduces the loss of life and property below the levels that could be expected without mitigation, but it is impossible to measure what loss would have been incurred without mitigation. Second, the dispersion of mitigation funds and responsibilities across various agencies makes it difficult to determine the collective benefit of federal efforts. Finally, we noted that federal savings depend on the frequency of future disasters and the extent to which the federal government will bear the resulting losses, which is unknown. At the same time, we testified that a lack of comprehensive, reliable data to make decisions about cost-benefit tradeoffs may also inhibit local governments from deciding to invest in hazard mitigation activities. Moreover, in 2007 we reported that limited public awareness may also be a result of the complexity of the information that is needed for individuals to understand their hazard risks. We concluded that for local decision makers to develop mitigation strategies for their communities, they need appropriate and easily understandable information about the probability of natural hazards and that efforts to improve public awareness and education are long-term and require sustained effort. 
Similarly, in our February 2014 testimony on limiting fiscal exposure from and increasing resilience to climate change, we noted that local decision makers need expert assistance translating climate change information into something that is locally relevant. The 2012 NRC study identified understanding how to share scientific information with broad audiences as one of the key challenges for resilience researchers. The challenges we identified in prior work—competing priorities for state and local governments, imperfect individual risk decision making, and imprecise, incomplete, and complex information about both risk and benefits—are longstanding, difficult policy issues that are likely to persist. Indeed, the increasing number of federal disaster declarations and the growing role of the federal government in funding post-disaster relief and recovery efforts may serve to exacerbate some of the inherent challenges. We are encouraged that DHS finalized the National Mitigation Framework in 2013 to coordinate interagency and intergovernmental efforts and that the framework established a Mitigation Framework Leadership Group to coordinate mitigation efforts of relevant local, state, tribal, and federal organizations. The framework and the group create an avenue for interagency and intergovernmental leadership to pursue solutions to these difficult policy issues. As part of our ongoing work, we plan to evaluate the status of the Mitigation Framework Leadership Group and the actions taken to date to apply the National Mitigation Framework in the context of recovery from Hurricane Sandy. In ongoing work on federal resilience efforts in the aftermath of Hurricane Sandy, we identified three high-level actions that demonstrated an intensified federal focus on incorporating resilience-building into the recovery. In the wake of Hurricane Sandy, President Obama signed Executive Order 13632 on December 7, 2012. 
The executive order created the Hurricane Sandy Rebuilding Task Force, chaired by the HUD Secretary and consisting of more than 23 federal agencies and offices. Among other things, the executive order charged the task force to work with partners in the affected region to understand existing and future risks and vulnerabilities from extreme weather events; identify resources and authorities that strengthen community and regional resilience during recovery; and plan for the rebuilding of critical infrastructure in a manner that increases community and regional resilience. The order also charged the task force with helping to identify and remove obstacles to resilient rebuilding and promoting long-term sustainability of communities and ecosystems. In August 2013, the Sandy Rebuilding Task Force issued the Hurricane Sandy Rebuilding Strategy, which contained 69 recommendations to various federal agencies and their nonfederal partners aimed at improving recovery from both Hurricane Sandy and future disasters. Among these 69 recommendations are many that take into account the President’s charge to facilitate planning and actions to build resilience in the Sandy-affected region. Introducing the strategy, the task force chair acknowledged how critical it was that efforts to rebuild for the future make communities more resilient to emerging challenges such as rising sea levels, extreme heat, and more frequent and intense storms. The task force report notes that many of the recommendations have been adopted and describes actions underway to implement them as part of the Hurricane Sandy recovery effort. Key examples of long-term resilient rebuilding initiatives to address future risks from extreme weather events include the Rebuild by Design effort and the New York Rising Community Reconstruction Program. 
In June 2013, HUD and its partners launched the Rebuild by Design competition to challenge communities to develop solutions to address structural and environmental vulnerabilities exposed by Hurricane Sandy. Of the 148 applicants, HUD selected 10 to move forward. The selected teams then worked with local stakeholders to tailor their projects to the communities and hosted over 50 community workshops to educate the communities on their proposals and the theme of resilience. On April 3, 2014, the final proposals were exhibited and evaluated by an expert jury. Winning design solutions may be awarded disaster recovery grants from HUD and other public and private partners. Some resilience aspects of the designs include elevating streets and adding breakwater systems. The New York Rising Community Reconstruction Program is another mitigation program that provides over $650 million for additional rebuilding and revitalization planning and implementation assistance to Sandy-affected communities. As of May 2014, six regions of New York composed of 102 localities and 50 New York Rising communities created plans that assessed storm damage and current risk, identified community needs and opportunities, and developed recovery and resilience strategies. Each locality is eligible for $3 million to $25 million from HUD and other public and private partners. According to the State of New York, as of May 2014, multiple projects had been awarded funding. As part of our ongoing work on resilience-building as part of the Hurricane Sandy recovery, we are identifying recommendations from the task force report that particularly support resilient rebuilding and assessing the actions taken to date to implement them. We plan to issue a report on these issues later this year. In January 2013, Congress passed and the President signed the Disaster Relief Appropriations Act, 2013 (Sandy Supplemental), which appropriated about $50 billion in funding to support recovery. 
The Sandy Supplemental appropriated funds—primarily for programs and activities associated with recovery from Hurricane Sandy— to nineteen federal agencies. Among the nineteen agencies, four—DHS, HUD, the Department of Transportation (DOT), and the U.S. Army Corps of Engineers (USACE)—received amounts that represent over 92 percent of the total, with appropriations ranging from $5 billion to $15 billion. These four agencies administer five programs that play a key role in helping to promote resilience-building as part of recovery: (1) FEMA’s Hazard Mitigation Grant Program (HMGP), (2) FEMA’s Public Assistance Program (PA), (3) HUD’s Community Development Block Grant-Disaster Recovery (CDBG-DR) Program, (4) DOT’s Federal Transit Administration (FTA) Public Transportation Emergency Relief Program, and (5) USACE’s Flood Risk Management Program. See table 1 for a description of these programs and how they help to support resilience-building efforts. As part of our ongoing work we plan to focus on efforts within FEMA’s HMGP and PA and HUD’s CDBG-DR to facilitate and support community and regional resilience efforts as part of recovery from Hurricane Sandy. We are evaluating federal actions, gathering perspectives from key state officials, and studying at least one large-scale PA project that involves resilience-building activities. The Sandy Recovery Improvement Act of 2013 (SRIA) was enacted as part of the Sandy Supplemental. The law authorizes several significant changes to the way FEMA may deliver federal disaster assistance. FEMA is tracking its implementation of 17 provisions of the act, some of which are aimed at mitigating future damage. Specifically: Public Assistance Work Alternative Procedures. This section authorizes FEMA to implement alternative procedures for administration of the PA program with the aim of providing greater flexibility and less administrative burden by basing grants on fixed estimates. 
Among the provisions in this section of SRIA is one that would allow use of all or part of the excess grant funds awarded for the repair, restoration, and replacement of damaged facilities for cost-effective activities that mitigate the risk of future damage, hardship, or suffering from a major disaster. Changes to HMGP. SRIA authorized three key changes to HMGP. First, it authorizes FEMA to expedite implementation of the program. FEMA has issued guidance for streamlining the program and is planning actions to continue to refine the changes and measure their effectiveness. Second, SRIA allows FEMA to provide up to 25 percent of the estimated costs for eligible hazard mitigation measures to a state or tribal grantee before eligible costs are incurred. As part of the revised, streamlined HMGP guidance, FEMA has informed states of this provision. Third, SRIA allows FEMA to waive notice and comment rulemaking procedures for HMGP Administration by States and authorizes FEMA to carry out the program as a pilot. FEMA is currently carrying out a pilot program and issued a notice in the Federal Register in March 2014 seeking comments from the public to help inform the development of this new method of program delivery. To develop the program, FEMA is exploring the extent to which its determinations regarding cost-effectiveness, technical feasibility and engineering, and final eligibility and funding can be made at the state level. National Strategy to Reduce Costs on Future Disasters. SRIA required FEMA to make recommendations for the development of a national strategy to reduce costs on future disasters. 
In September 2013, FEMA issued the required report, recommending that the following elements be considered in the development of a national strategy: (1) engage in a whole community dialogue and build upon public-private partnerships, (2) enhance data-driven decisions, (3) align incentives promoting disaster cost reduction and resilience, (4) enable resilient recovery, and (5) support disaster risk reduction nationally. As we have previously reported, most responsibility and authority for resilience activities rests largely outside the federal government; therefore, nonfederal incentives are also a critical piece of the overall strategy to reduce future losses. The federal government, by providing incentives through programs like the five discussed earlier in this statement, can help to promote and facilitate mitigation before and after disasters. However, ultimately, nonfederal entities inside and outside the government make the decisions that lead (or do not lead) to resilience activities. Several examples of mitigation efforts at the state and local levels help illustrate the variety of ways that incentives help drive communities to be more resilient—with a range of activities from shoring up building codes to facilitating buyouts of repetitive loss properties. As part of our ongoing work, we are reviewing studies about efforts to build resilience to extreme weather events and climate change. For the purposes of this statement, we selected illustrative examples from those studies to describe a range of nonfederal efforts to incentivize mitigation. The 2012 NRC report discussed earlier in this statement included several examples of earthquake mitigation efforts in California. In California, zones of potential landslide, liquefaction, or fault rupture hazard have been mapped by the California Geological Survey as “special study zones” according to provisions in the California Alquist-Priolo Earthquake Fault Zoning Act of 1972. 
If a property is in one of these special study zones, the buyers must sign a form indicating that they have been made aware of this potential hazard and recognize that additional inspections and work may be required if they choose to modify the property in the future. The U.S. Resiliency Council, a nonprofit organization based in California, is working on creating building “report cards” to provide technically defensible metrics to evaluate and communicate the resilience of individual buildings. The initial focus is on seismic risk, and officials plan to extend their efforts to creating metrics for resilience to catastrophic wind and flood risk. Transparency and required disclosure of these individual building resilience ratings can benefit building users, owners, and lenders by increasing the value of well-designed or properly retrofitted properties. The Property Transfer Tax Program in Berkeley, California, has provided funds for seismically retrofitting a number of properties in the city. In 1992, voters approved an additional 0.5 percent transfer tax on top of the existing 1 percent tax on all real estate transactions, with the tax paid equally by buyer and seller. This portion of the transfer tax is available for voluntary seismic upgrades to residential property. Residential property owners have up to 1 year to complete the seismic retrofit (or lose the funds). Since many homes sell for $750,000 to $1 million or more in Berkeley, this amounts to $3,750 to $5,000 in “free funds” and can cover homeowner upgrades such as brick chimney bracing or anchoring water heaters. This incentive program has an 80 to 90 percent participation rate. Along with other measures, this program has led to more than 60 percent of the residences in Berkeley becoming more resistant to earthquakes. Similarly, the Columbia Center for Climate Change Law of Columbia Law School issued a report in 2013 that included examples of flood mitigation efforts in North Dakota and Iowa. 
In 1996, 83 percent of the homes in Grand Forks, North Dakota, were damaged when the Red River reached 54 feet and topped the city dikes. Using CDBG funding, the City of Grand Forks purchased 802 lots, moved salvageable homes, and destroyed the remainder to create a green space. The city also partnered with a private development company to finance the construction of 180 new homes in an underdeveloped area of Grand Forks to help relocate some of the people who had lost their homes in the flooding and subsequent buy-out program. In 1993, the Iowa River flooded and overtopped existing levees. The U.S. Army Corps of Engineers planned to rebuild and repair the levees, but a working group of state and federal agencies determined that the best solution would be to buy all the homes in the levee district so that it could be statutorily dissolved and the city would no longer have to support the infrastructure in the area. The buyout program developed a novel land-transfer system and engaged government agencies and nonprofit organizations to execute it. The nonprofit organization’s role was instrumental because landowners were hesitant to sell their property to the government, but were comfortable selling it to the nonprofit. The nonprofit used a formula to set the land price, which contributed to the success of the buyout because purchasers did not have to negotiate prices with each individual landowner and it removed the incentive for landowners to hold out for a better price. Chairman Begich, Ranking Member Paul, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Christopher Keisling, Assistant Director; and Katherine Davis, Dorian Dunbar, Melissa Duong, Kathryn Godfrey, Tracey King, Amanda Miller, and Linda Miller made contributions to this testimony. In addition, Martha Chow, Steve Cohen, Stanley Czerwinski, Roshni Davé, Peter Del Toro, Chris Forys, Daniel Garcia-Diaz, Alfredo Gomez, Michael Hix, Karen Jarzynka-Hernandez, Jill Naamane, Brenda Rabinowitz, Joe Thompson, Lisa Van Arsdale, Pat Ward, David Wise, and Steve Westley also made contributions based on published and related work. Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction’s Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012. Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007. High Risk Series: GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Disaster Assistance: Information on the Cost-Effectiveness of Hazard Mitigation Projects. GAO/T-RCED-99-106. Washington, D.C.: March 4, 1999. Disaster Assistance: Information on Federal Disaster Mitigation Efforts. GAO/T-RCED-98-67. Washington, D.C.: January 28, 1998. Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs. GAO/T-RCED-95-140. Washington, D.C.: March 16, 1995. Federal Disaster Assistance: What Should the Policy Be? PAD-80-39. Washington, D.C.: June 16, 1980. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Multiple factors including increased disaster declarations, climate change effects, and insufficient premiums under the National Flood Insurance Program increase federal fiscal exposure to severe weather events. Managing fiscal exposure from climate change and the National Flood Insurance Program are both on GAO's High Risk list. GAO has previously reported that building resilience to protect against future damage is one strategy to help limit fiscal exposure. However, in prior reports GAO also identified multiple challenges to doing so. Responsibility for actions that enhance resilience rests largely outside the federal government, so nonfederal entities also play a key role. This testimony discusses (1) resilience-building challenges GAO has previously identified; (2) federal efforts to facilitate resilience-building as part of Hurricane Sandy recovery; and (3) examples of nonfederal efforts to incentivize resilience building. This testimony is based on previous GAO reports issued from 1998 through 2014 related to hazard mitigation, climate change, flood insurance, and preliminary observations from GAO's ongoing work for this committee on federal resilience efforts related to the Sandy recovery. For the ongoing work, GAO reviewed documents such as the Hurricane Sandy Rebuilding Strategy and a 2012 National Academies Study on building resilience. GAO also interviewed officials from FEMA and the Department of Housing and Urban Development (HUD). GAO has identified various challenges to resilience building—actions to help prepare and plan for, absorb, recover from, and more successfully adapt to adverse events, including those caused by extreme weather. 
These include challenges for communities in balancing hazard mitigation investments with economic development goals, challenges for individuals in understanding and acting to limit their personal risk, and broad challenges with the clarity of information to inform risk decision making. GAO's work over more than 30 years demonstrates that these are longstanding policy issues, without easy solutions. The Department of Homeland Security's (DHS) May 2013 release of a National Mitigation Framework and establishment of a group to help coordinate interagency and intergovernmental mitigation efforts offer one avenue for leadership on these issues. In ongoing work on federal resilience efforts in the aftermath of Hurricane Sandy, GAO identified three high-level actions that demonstrated an intensified federal focus on incorporating resilience-building into the recovery. The President issued an executive order to coordinate the recovery effort and created a task force that issued 69 recommendations aimed at improving recovery from Sandy and future disasters—including recommendations designed to facilitate resilient rebuilding. Congress appropriated about $50 billion in supplemental funds for multiple recovery efforts, including at least five federal programs that help support resilience-building efforts. One of these, FEMA's Hazard Mitigation Grant Program (HMGP), is the only federal program designed specifically to promote mitigation against future losses in the wake of a disaster, while another, the Public Transportation Emergency Relief Program, made more than $4 billion available for transit resilience projects. The Sandy Recovery Improvement Act of 2013 provided additional responsibilities and authorities related to FEMA's mitigation and recovery efforts. In response, FEMA has undertaken efforts to make HMGP easier for states to use—for example by streamlining application procedures. 
The act also provided additional authorities for FEMA to fund hazard mitigation with other disaster relief funds and required FEMA to provide recommendations for a national strategy on reducing the cost of future disasters to Congress, which FEMA finalized in September 2013. For the purposes of this statement, GAO reviewed studies that discuss resilience building and climate change adaptation and identified examples of efforts at the state and local levels that illustrate a variety of nonfederal initiatives that may drive communities to build resilience. For example, a nonprofit group is creating report cards to assess the resilience of a building to earthquakes and plans to extend these efforts to wind and flood risk. In some localities, public-private partnerships have helped promote efforts to buy properties that were at risk from repeat losses. |
IAEA safeguards are a set of technical measures and activities by which IAEA seeks to verify that nuclear material subject to safeguards is not diverted to nuclear weapons or other proscribed purposes. To carry out its safeguards activities, inspectors and analysts in IAEA’s Safeguards Department collaborate to verify that the quantities of nuclear material that non-nuclear weapon states have formally declared to the agency are correct and complete. All NPT non-nuclear weapon states are required to have a comprehensive safeguards agreement (CSA) that covers all of their civilian nuclear activities and serves as the basis for the agency’s safeguards activities. Iran’s CSA entered into force in May 1974. Most countries with a CSA have also brought into force an Additional Protocol to their CSAs. IAEA developed the Additional Protocol to provide additional information about countries’ nuclear and nuclear-related activities as part of its response to the 1991 discovery of a clandestine nuclear weapons program in Iraq. The Additional Protocol, when ratified or otherwise brought into force by a country, requires that country to provide IAEA with a broader range of information on the country’s nuclear and nuclear-related activities. It also gives the agency’s inspectors access to an expanded range of declared activities and locations, including buildings at nuclear sites, as well as locations where undeclared activities may be suspected. Undeclared nuclear material and activities are those a state has not declared and placed under safeguards but is required to declare and place under safeguards pursuant to its CSA. In addition to its safeguards program, IAEA’s other programs include nuclear safety and security, nuclear energy, nuclear sciences, and technical cooperation. For example, IAEA’s technical cooperation program helps member states achieve their sustainable development priorities by furnishing them with relevant nuclear technologies and expertise. 
IAEA funds its programs primarily through (1) its regular budget, for which all member countries are assessed, and (2) voluntary extra-budgetary cash contributions from certain member countries and other donors to meet critical needs. In 2015, IAEA reported that its regular budget was $375.8 million, of which the nuclear verification (i.e., safeguards) program accounted for $144.2 million. IAEA has a Board of Governors that provides overall policy direction and oversight for the agency. A Secretariat, headed by the Director General, is responsible for implementing the policies and programs of the IAEA General Conference and the Board of Governors. The U.S. Department of State coordinates the United States’ financial and policy relationship with IAEA. Under the JCPOA, IAEA verification of Iran’s implementation of its nuclear-related commitments was a condition to the lifting of specified U.S., European Union, and United Nations nuclear-related sanctions on Iran. These sanctions were lifted on the JCPOA’s “Implementation Day” (January 16, 2016), when IAEA verified and reported that Iran had fully implemented its commitments defined in Annex V, paragraph 15, of the JCPOA. In addition, the JCPOA provides for a “Transition Day,” when the United States and European Union will take further steps to eliminate nuclear-related sanctions on Iran, either on October 18, 2023, or before if IAEA reaches what it calls a “broader conclusion.” A broader conclusion refers to the agency’s determination that for a given year, a country has demonstrated that all declared nuclear material within its borders remained in peaceful activities and that there are no indications of diversion of declared nuclear material or of undeclared nuclear activities. Iran’s nuclear-related commitments under the JCPOA include (1) limits on uranium enrichment and its stockpile of enriched uranium; (2) restrictions on centrifuge research, development, and manufacturing; and (3) conditions on uranium ore concentrate.
Iran also agreed not to engage in spent fuel reprocessing, uranium or plutonium metallurgy, or activities that could contribute to the design and development of a nuclear explosive device. The duration of certain commitments is from 8 years (for certain centrifuge restrictions) to 25 years (for monitoring of Iran’s uranium ore concentrate). Iran also agreed to fully implement the “Roadmap for Clarification of Past and Present Outstanding Issues” agreed to with IAEA. The Roadmap sets out a process for IAEA to address issues relating to the “possible military dimensions” (PMD) of Iran’s nuclear program. IAEA issued a report on the results of its PMD investigation in December 2015, and the Board of Governors subsequently issued a resolution closing its consideration of PMD. State officials noted that the Board, in its resolution, stated that it will be watching closely to verify that Iran fully implements its commitments under the JCPOA and will remain focused going forward on the full implementation of the JCPOA in order to ensure the exclusively peaceful nature of Iran’s nuclear program. According to officials in IAEA’s Office of Legal Affairs, the agency will draw on existing authorities to verify Iran’s implementation of these commitments. For example, using its safeguards authorities, including the CSA, IAEA will verify implementation of most of Iran’s nuclear-related commitments largely through a range of traditional safeguards approaches and techniques that it has used in the past, such as inspecting nuclear facilities and conducting nuclear material accountancy to verify quantities of nuclear material declared to the agency and any changes in the quantities over time. To verify non-diversion of nuclear material, for instance, IAEA inspectors count items (e.g., containers of uranium or plutonium), measure attributes of these items (e.g., isotopic composition), and compare their findings with records and declared amounts.
Other IAEA safeguards activities include environmental sampling, remote monitoring, analysis of commercial satellite imagery, and analysis of open source documents. Under the JCPOA, IAEA also conducts certain activities agreed to by Iran, such as monitoring of Iran’s uranium mines and mills, according to IAEA officials. Such activities include containment and surveillance measures, such as video cameras to detect movement of nuclear material or tampering with agency equipment, as well as seals that indicate whether the state has tampered with installed IAEA safeguards systems. Further, under the JCPOA, Iran agreed to provisionally apply, and seek ratification of, the Additional Protocol, which gives the agency’s inspectors access to an expanded range of declared activities and locations, including buildings at nuclear sites, and locations where undeclared activities may be suspected. Under the JCPOA, Iran also agreed to fully implement “Modified Code 3.1” of the subsidiary arrangement to its CSA. According to IAEA, the text of the Modified Code 3.1 in Iran’s subsidiary arrangement is based on model language under which a country is generally required to provide preliminary design information for new nuclear facilities “as soon as the decision to construct, or to authorize construction, of such a facility has been taken, whichever is earlier.” In addition, Iran made commitments under the JCPOA to cooperate with IAEA and facilitate its safeguards activities. For example, Iran agreed to make arrangements to allow for the long-term presence of IAEA inspectors by issuing long-term visas, among other things. Iran also agreed to permit the use of modern technologies such as online enrichment monitors to increase the efficiency of monitoring activities. The JCPOA includes a mechanism in which its participants commit to resolve an access request from the agency within 24 days after the request is made.
The JCPOA also describes a dispute resolution mechanism through which a participant in the agreement can bring a complaint if it believes that commitments are not being met; if that process fails to resolve the participant’s concerns, the participant may, in certain cases, cease performance of its own commitments. Iran has also agreed to import enumerated nuclear-related and nuclear-related dual-use materials and equipment exclusively through a new “procurement channel” established under the JCPOA. The JCPOA details the establishment of a Joint Commission composed of representatives of participants in the agreement, whose “procurement working group” will provide information to IAEA on these proposed imports. Under the JCPOA, IAEA may access the locations of intended use of such nuclear-related imports. IAEA officials told us that they expect the information provided through the procurement channel to support the agency’s efforts to detect undeclared activity. Our preliminary observations indicate that IAEA has estimated the financial, human, and technical resources necessary to verify Iran’s implementation of nuclear-related commitments in the JCPOA. IAEA has estimated that it needs approximately $10 million per year for 15 years in additional funding above its current safeguards budget to fund additional inspections, among other things. Of this amount, IAEA estimates that it will need about $3.3 million for costs associated with implementing the Additional Protocol, about $2.4 million for other inspector and direct staff costs, and about $4.4 million in other costs, such as travel, equipment, and support services beyond those associated with Additional Protocol implementation (see table 1).
IAEA officials said that, pursuant to the Statute, the agency intends to propose to the Board of Governors that the approximately $5.7 million for all Additional Protocol activities and inspector costs attributable to the JCPOA be funded through IAEA’s regular budget after 2016. These officials said that the remaining $4.4 million in estimated funding needs for the following 15 years will remain unfunded in the regular budget and will therefore be supported through extra-budgetary funding. Under its Statute, IAEA is to apportion the costs of implementing safeguards, which would include inspector salaries and the cost of implementing the Additional Protocol, through assessments on member countries. As previously noted, such assessments form IAEA’s regular budget. The Statute also states that any voluntary contributions may be used as the Board of Governors, with the approval of the General Conference, may determine. The JCPOA was not finalized in time for the agency to include these costs for 2016 in its assessments. Consequently, according to a 2015 IAEA report, all of IAEA’s JCPOA work through 2016 will be funded through extra-budgetary contributions. According to IAEA officials, how quickly the $5.7 million in JCPOA costs are incorporated into the regular budget depends on member state support. These officials told us that IAEA hopes to resolve the questions about funding the JCPOA through the regular budget by the June 2016 Board of Governors meeting. IAEA’s annual $10 million funding estimate includes approximately $7.5 million to cover estimated human resource costs associated with additional inspectors and support services under the JCPOA. IAEA officials told us that the agency plans to transfer 18 experienced inspectors and nearly twice that number of other staff to its Iran Task Force from other divisions within its Safeguards Department that cover countries and regions beyond Iran.
According to IAEA officials, the other Safeguards divisions would backfill the vacancies created by the transfer of inspectors to the Iran Task Force by hiring and training new inspectors. In addition, according to IAEA officials, existing safeguards technical resources are sufficient to implement IAEA’s activities under the JCPOA. Our preliminary observations indicate that IAEA may face potential challenges in monitoring and verifying Iran’s implementation of certain nuclear-related commitments in the JCPOA, according to current U.S. and IAEA officials as well as some former U.S. officials, several former IAEA officials, and many expert organizations we interviewed. These potential challenges include (1) the inherent challenge of detecting undeclared nuclear materials and activities, (2) potential access challenges to sites in Iran, and (3) safeguards resource management challenges. Our preliminary observations indicate that detection of undeclared nuclear materials and activities is an inherent challenge for IAEA, particularly with regard to activities that do not involve nuclear material, such as some weapons development activities and centrifuge manufacturing, according to current U.S. officials, a former U.S. official, several former IAEA officials, and several expert organizations we interviewed. According to U.S. government officials, as well as a former U.S. official, detection of undeclared material and activities in Iran and worldwide is IAEA’s greatest challenge. Iran has previously failed to declare activity to IAEA. For example, according to IAEA documents, prior to 2003, Iran failed to provide IAEA information on a number of nuclear-fuel-cycle-related activities and nuclear material. In addition, according to IAEA documents and officials, Iran failed to notify the agency before 2009 that it had constructed the Fordow enrichment facility, as required under Modified Code 3.1 of the subsidiary arrangement to Iran’s CSA.
To detect undeclared materials and activities, IAEA looks for indicators of such activities, including equipment, nuclear and non-nuclear material, infrastructure support, and traces in the environment, according to an IAEA document. However, some activities, such as certain weapons development activities, do not involve nuclear material, may not be visible through satellite imagery, and may not leave traces in the environment. According to a former U.S. government official, some former IAEA officials, and several expert organizations we interviewed, this creates a challenge for IAEA in detecting undeclared activity. Furthermore, according to one expert organization we interviewed, the Board of Governors’ vote to close its consideration of the PMD issue without a complete accounting of Iran’s past nuclear program could reduce the indicators at IAEA’s disposal to detect potential undeclared activity. However, Department of Energy (DOE) officials noted that under the JCPOA, IAEA will have the authorities of the Additional Protocol and the JCPOA’s enhanced transparency measures with which to investigate any indication of undeclared activities. In addition, IAEA officials told us that any uncertainties regarding the peaceful nature of Iran’s nuclear program that may arise during the course of the agency’s verification and monitoring under the JCPOA would have to be resolved for the agency to reach a broader conclusion that all nuclear material in Iran remains in peaceful activities. IAEA officials told us that the agency does not draw a broader conclusion lightly, for any state, and that it has traditionally taken 3 to 5 years for most member states. According to a former IAEA official as well as current IAEA and U.S. government officials we interviewed, IAEA has improved its capabilities in detecting undeclared activity. For example, according to U.S.
government officials and national laboratory representatives, IAEA has adapted its inspector training program to focus on potential indicators of undeclared activity, beyond the agency’s traditional safeguards focus on nuclear materials accountancy. IAEA also has analytical tools at its disposal, some of which IAEA officials demonstrated to us, to detect undeclared activity worldwide. Furthermore, IAEA receives member-state support in detecting undeclared activity. For example, member states provided some of the information that formed the basis of IAEA’s PMD investigation. State officials agreed that the detection of undeclared nuclear material and activities in Iran, as in all states, is a serious challenge for IAEA, but added that the JCPOA puts IAEA in a better position to detect such activities in Iran. The procurement channel established under the JCPOA may also serve as an additional source of indicators for IAEA on potential undeclared activities in Iran, according to current and two former U.S. government officials as well as representatives from two organizations we interviewed. IAEA officials told us that there is additional work to be done in informing exporting countries of their obligations and standardizing the data that the countries would report to IAEA so that the data are usable by the agency. Officials noted that ensuring that countries report the data as required is a particular challenge for countries that do not have a robust export control system. Our preliminary observations indicate that IAEA could face potential challenges in gaining access to Iranian sites, according to two former U.S. government officials, a former IAEA official, and one expert organization. IAEA’s safeguards activities in Iran, as in every state, depend on the cooperation of the member state, and those officials noted that Iran has a history of denying access to IAEA inspectors.
For example, IAEA requested access in February 2012 to the Iranian military complex at Parchin—where high-explosive experiments were believed to have been conducted—and Iran did not allow access until the fall of 2015 as part of IAEA’s PMD investigation. One expert organization we interviewed said that Iran’s limited cooperation during the PMD investigation may have set a precedent for limiting IAEA access going forward. However, IAEA officials told us that the closure of the PMD investigation would not preclude future IAEA access requests to the sites that were part of the investigation, should IAEA determine that such access is warranted. These officials added that IAEA’s PMD investigation was conducted without the Additional Protocol and that any future investigations into potential undeclared activity would be conducted under the expanded legal authority of the Additional Protocol. According to IAEA officials we interviewed, Iran’s agreement to provisionally apply the Additional Protocol will facilitate the agency’s access to sites in Iran. Specifically, they told us that under the Additional Protocol, the agency can access any part of a site that it is inspecting with 2 hours’ notice and any other site within 24 hours. DOE officials noted that the JCPOA’s provisions for the reinstatement of sanctions will encourage Iranian cooperation with, and access for, IAEA. Additionally, State officials noted that refusal by Iran to comply with the access provisions of the Additional Protocol or JCPOA could lead to the reinstatement of sanctions. If Iran were to deny access, IAEA officials said that they could report the state’s noncompliance to the Board of Governors, though there is no deadline in the CSA or Additional Protocol that compels a state to cooperate, and according to a former IAEA official, the Board of Governors cannot impose a deadline for the state’s cooperation.
However, as we noted earlier, the JCPOA includes a mechanism that limits the time for resolution of differences between the participants to 24 days for matters related to JCPOA implementation. According to some former U.S. government officials, the mechanism is an advantage for IAEA in that it imposes a time frame for Iran’s cooperation with access requests. However, a former IAEA official and one expert organization noted that the mechanism is untested, and that it is too soon to tell whether it will improve access. Our preliminary observations indicate that IAEA faces potential resource management challenges stemming from the monitoring and verification workload in Iran, including integrating the additional JCPOA-related funding needs that IAEA has identified into the agency’s regular budget and managing human resources within the safeguards program, which could affect IAEA’s safeguards efforts internationally. State and National Nuclear Security Administration (NNSA) officials told us that they are confident that IAEA would obtain any funding it would need in the form of extra-budgetary contributions from the United States and other member states to support its JCPOA activities. However, IAEA officials expressed concerns about the reliability of sustained extra-budgetary contributions for IAEA JCPOA activities due to possible donor fatigue in the long run, as IAEA will be conducting certain JCPOA verification activities for 10 or more years. IAEA and State officials, as well as a former IAEA official and one expert organization, also stated that funding the JCPOA from the IAEA regular budget would give the safeguards program a more stable and predictable funding base for its monitoring and verification activities. We have previously concluded that IAEA cannot necessarily assume that donors will continue to make extra-budgetary contributions at the same levels as in the past.
However, our preliminary observations indicate that IAEA may face challenges in incorporating some of its JCPOA activities under its regular budget, which requires support from the General Conference. IAEA officials, as well as a former IAEA official, two former U.S. government officials, and one expert organization we interviewed, stated that the proposal to move funding for monitoring and verification efforts under the JCPOA into IAEA’s regular safeguards budget could face resistance from some member states without corresponding budget increases for other IAEA programs, such as the Technical Cooperation program, which supports nuclear power development and other civilian nuclear applications. State officials noted that delay or failure to incorporate costs into the regular budget would increase the reliance of IAEA on extra-budgetary contributions, but would not prevent IAEA from carrying out JCPOA-related activities as long as those contributions are forthcoming. These officials added that they recognize that long-term reliance on extra-budgetary contributions risks donor fatigue, and that they will plan for providing support with a view toward filling any future funding gaps that arise. Our preliminary observations indicate that IAEA faces a potential human resource management challenge in its safeguards program as it implements actions to monitor and verify the JCPOA, which could affect its broader international safeguards mission. Specifically, our preliminary observations indicate that IAEA’s strategy of transferring inspectors to its Iran Task Force from other safeguards divisions may pose a challenge to IAEA and its safeguards work in other countries because of the extensive time taken to hire and train new inspectors for those divisions. According to current IAEA and U.S. government officials, as well as two former IAEA officials and two expert organizations, hiring and training qualified inspectors can take years.
A former IAEA official and current IAEA officials noted that inspector skills are highly specialized, typically requiring a combination of nuclear engineering knowledge and analytical abilities, which makes recruitment difficult. These officials also noted that IAEA’s hiring process is lengthy, requiring multiple interviews and examinations. Furthermore, current IAEA officials and two former IAEA officials, as well as one expert organization, noted that training new inspectors to be proficient in executing their safeguards responsibilities can be a time-consuming process. As a result, IAEA faces a potential challenge as it prioritizes the JCPOA in meeting the need for additional experienced inspectors to work on Iran-related safeguards, while ensuring that other safeguards efforts in other countries are not understaffed. IAEA officials have said that the agency’s work in Iran is its priority. However, a former IAEA official, as well as some former U.S. government officials and several expert organizations, told us that IAEA could mitigate human resources challenges in the short term through remote monitoring and the use of cost-free experts at its headquarters. We are not making any recommendations in this report. We provided the Departments of State and Energy and IAEA a draft of this report for their review and comment. State, DOE, and IAEA provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretaries of State and Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II.

This report provides our preliminary observations on (1) the Joint Comprehensive Plan of Action (JCPOA) commitments that the International Atomic Energy Agency (IAEA) has been asked to verify and its authorities to do so, (2) the resources IAEA has identified as necessary to verify the JCPOA, and (3) potential challenges and mitigating actions, if any, IAEA and others have identified with regard to verifying the JCPOA. We will issue a separate report with the final results of our work later this year. To identify the nuclear-related commitments in the JCPOA that IAEA has been asked to verify and IAEA’s authorities for verifying these commitments, we analyzed the JCPOA, in close coordination with IAEA and the Department of State. We also analyzed IAEA documentation concerning the safeguards legal framework, including the Statute of the IAEA, which authorizes the Agency to apply safeguards, at the request of parties, to any bilateral or multilateral arrangement; “The Structure and Content of Agreements Between the Agency and States Required in Connection with the Treaty on the Non-Proliferation of Nuclear Weapons” (information circular (INFCIRC)/153), which provides the basis for the comprehensive safeguards agreement that most countries have concluded with IAEA and that covers all of the countries’ civilian nuclear activities; Iran’s Comprehensive Safeguards Agreement (INFCIRC/214); the Model Additional Protocol (INFCIRC/540), which provides the basis for an Additional Protocol that most countries with a CSA have concluded with IAEA to provide additional information about countries’ nuclear and nuclear-related activities; and the November 2011 IAEA Safeguards Report, which details items concerning “possible military dimensions” of Iran’s nuclear program;
IAEA’s report on its investigation of the possible military dimensions; and the related Board of Governors’ resolution. We also analyzed the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and United Nations Security Council Resolution 2231, which requests IAEA to undertake the necessary verification and monitoring of Iran’s commitments. To examine the resources IAEA has identified as necessary to verify the JCPOA, we reviewed IAEA planning and budget documents, such as “The Agency’s Programme and Budget 2016–2017,” the Director General’s report titled “Verification and Monitoring in the Islamic Republic of Iran in light of United Nations Security Council Resolution 2231 (2015),” and pertinent Director General’s statements to the Board of Governors. In addition, to further understand IAEA authorities and resource needs, and to examine potential challenges and mitigating actions IAEA and others have identified with regard to verifying the JCPOA, we interviewed officials of IAEA, the Department of State, and the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA), as well as representatives of Oak Ridge National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, and Brookhaven National Laboratory. We also held classified interviews with officials in the Office of the Director of National Intelligence and representatives of Lawrence Livermore National Laboratory. The information from these interviews is not reflected in this report. We also interviewed 8 former IAEA officials; 10 former U.S. government and national laboratory officials; and representatives of 10 expert organizations, which are research institutions and nongovernmental organizations with knowledge in the areas of nuclear verification, monitoring, and safeguards. We selected these experts by first identifying organizations that had previously served as sources of IAEA subject matter experts for GAO.
To ensure a wide range of viewpoints, we supplemented our initial selection with individuals and organizations identified through a literature search and by recommendations from our initial set of expert organizations. We requested interviews from all the identified experts and suggested contacts and interviewed all who agreed to participate (two experts provided written responses in lieu of in-person interviews). We analyzed their responses and grouped them into overall themes related to different elements of the objective. When referring to these categories of interviewees throughout the report, we use “some” to refer to three members of a group, “several” to refer to four or five members of a group, and “many” to refer to more than five members of a group. Our preliminary observations are based on our ongoing work, which is being conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, William Hoehn (Assistant Director), Alisa Beyninson, Antoinette Capaccio, R. Scott Fletcher, Bridget Grimes, Joseph Kirschbaum, Grace Lui, Thomas Melito, Alison O’Neill, Sophia Payind, Timothy M. Persons, Steven Putansu, Vasiliki Theodoropoulos, and Pierre Toureille made key contributions to this report.

In July 2015, multilateral talks with Iran culminated in an agreement called the Joint Comprehensive Plan of Action (JCPOA), through which Iran committed to limits on its nuclear program in exchange for relief from sanctions put in place by the United States and other nations.
The International Atomic Energy Agency (IAEA), an independent international organization that administers safeguards designed to detect and deter the diversion of nuclear material for non-peaceful purposes, was requested to monitor and verify Iran's adherence to these limits. The U.S. Department of State coordinates the United States' financial and policy relationship with IAEA. GAO was asked to review the authorities and resources IAEA has to carry out its activities regarding the JCPOA. On the basis of preliminary results of ongoing work that GAO is conducting, this report provides observations on (1) the JCPOA commitments that IAEA has been asked to verify and its authorities to do so, (2) the resources IAEA has identified as necessary to verify the JCPOA, and (3) potential challenges and mitigating actions IAEA and others have identified with regard to verifying the JCPOA. GAO analyzed the JCPOA and key IAEA documents and interviewed current and former IAEA officials, U.S. government officials, national laboratory representatives, and experts from research institutions. GAO is not making recommendations at this time and expects to issue a final report on this work later this year. As outlined in the JCPOA, IAEA was asked to verify Iran's implementation of a range of nuclear-related commitments, and IAEA uses its authorities and conducts additional verification activities to do so, according to IAEA. Iran's commitments include limits on uranium enrichment levels and enriched uranium inventories. GAO's preliminary observations indicate that IAEA plans to verify Iran's implementation of these commitments through a range of activities conducted by its Safeguards Department, such as inspecting Iran's nuclear facilities and analyzing environmental samples. 
IAEA officials told GAO that, to verify Iran's implementation of its commitments under the JCPOA, the agency uses its authorities and conducts additional verification activities agreed to by Iran, such as monitoring Iran's uranium mines and mills. In addition, under the JCPOA, Iran agreed to provisionally apply the Additional Protocol, an agreement that will expand IAEA's access, including to locations where undeclared materials and activities (those that an IAEA member state is required to declare under its agreements with IAEA but has not) may be suspected. The JCPOA also includes a mechanism in which participants to the agreement commit to resolve an access request from the agency within 24 days after the request is made. GAO's preliminary observations indicate that IAEA has identified the resources necessary to verify the nuclear-related commitments in the JCPOA. IAEA has estimated that it needs approximately $10 million per year for 15 years in additional funding above its current safeguards budget for JCPOA verification. In addition, IAEA plans to transfer 18 experienced inspectors to its Iran Task Force from other safeguards divisions and to hire and train additional inspectors. According to IAEA officials, existing safeguards technical resources are sufficient to implement the JCPOA. According to IAEA documents, all of IAEA's JCPOA work through 2016 will be funded through extra-budgetary contributions. IAEA officials said that the agency intends to propose that of the $10 million, approximately $5.7 million for all Additional Protocol activities and inspector costs attributable to the JCPOA be funded through IAEA's regular budget after 2016. GAO's preliminary observations indicate that IAEA may face potential challenges in monitoring and verifying Iran's implementation of certain nuclear-related commitments in the JCPOA. According to current and former IAEA and U.S.
officials and experts, these potential challenges include (1) integrating JCPOA-related funding into its regular budget and managing human resources in the safeguards program, (2) gaining access to sites, which depends on Iran's cooperation and on the untested JCPOA mechanism for resolving access requests, and (3) detecting undeclared nuclear materials and activities, such as potential weapons development activities that may not involve nuclear material, which is an inherent challenge. According to knowledgeable current and former U.S. government officials, detection of undeclared material and activities in Iran and worldwide is IAEA's greatest challenge. According to IAEA documents, Iran has previously failed to declare activities to IAEA. However, according to a former IAEA official as well as current IAEA and U.S. government officials GAO interviewed, IAEA has improved its capabilities in detecting undeclared activity, such as by adapting its inspector training program.
The Federal Payment Reauthorization Act of 1994 requires that the mayor of the District of Columbia submit to Congress a statement of measurable and objective performance goals for the significant activities of the District government (i.e., the performance accountability plan). After the end of each fiscal year, the District is to report on its performance (i.e., the performance accountability report). The District's performance report is to include a statement of the actual level of performance achieved compared to each of the goals stated in the performance accountability plan for the year, the title of the District of Columbia management employee most directly responsible for the achievement of each goal and the title of the employee's immediate supervisor or superior, and a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders. The law also requires that GAO, in consultation with the director of the Office of Management and Budget, review and evaluate the District's performance accountability report and submit it to your committees not later than April 15. Our June 2001 report on the District's fiscal year 2000 performance accountability report included recommendations that the District (1) settle on a set of results-oriented goals that are more consistently reflected in its performance planning, reporting, and accountability efforts, (2) provide specific information in its performance reports for each goal that changed, including a description of how, when, and why the change occurred, and (3) adhere to the statutory requirement that all significant activities of the District government be addressed in subsequent performance accountability reports.
Our review determined that the District's fiscal year 2000 report was of limited usefulness because the District had introduced new plans, goals, and measures throughout the year, leaving its goals and measures in a state of flux, and because the report did not cover significant activities, such as the District's public schools, an activity that accounted for more than 15 percent of the District's budget. In response, the District concurred with our findings and acknowledged that additional work was needed to make the District's performance management system serve the needs of its citizens and Congress. The comments stated that the District planned, for example, to consolidate its goals and expand the coverage of its fiscal year 2001 report to more fully comply with its mandated reporting requirements. We examined the progress the District has made in developing its performance accountability report and identified areas where improvements are needed. Specifically, the objectives of this report were to examine (1) the extent to which the District's performance accountability report includes its significant activities, (2) how well the District reports progress toward a consistent set of goals and explains any changes in the goals, (3) the extent to which the report adheres to the statutory requirements, and (4) areas for future improvement. To meet these objectives, we reviewed and analyzed the information presented in the District's fiscal year 2001 performance accountability report and interviewed key District officials. To examine the extent to which the District's performance accountability report included significant activities, we compared the information in the 2001 performance accountability report with budget information on actual expenditures presented in the District's budget.
To determine how well the District reported progress toward a consistent set of goals, we compared the report’s goals with those contained in the District’s fiscal year 2002 Proposed Budget and Financial Plan which served as the District’s 2001 performance plan and then reviewed any changes. To determine the extent to which the report adhered to the statutory requirements, we analyzed the information contained in the District’s report in conjunction with the requirements contained in the Federal Payment Reauthorization Act of 1994. We also reviewed the performance contracts for the District’s cabinet-level officials. To identify areas for future improvement, we compared the fiscal year 2001 report with the District’s fiscal year 2000 and 1999 performance accountability reports to identify baseline and trend information. We based our analysis on the information developed from work addressing our other objectives, recommendations from our June 8, 2001, report commenting on the District’s fiscal year 2000 report, and our other recent work related to performance management issues. We conducted our work from December 2001 through April 2002 at the Office of the Mayor of the District of Columbia, Washington, D.C., in accordance with generally accepted government auditing standards. In accordance with requirements contained in P.L. 103-373, we consulted with a representative of the director of the Office of Management and Budget concerning our review. We did not verify the accuracy or reliability of the performance data included in the District’s report, including information on the court orders in effect for fiscal year 2001. We provided a draft of this report to the mayor of the District of Columbia for review and comment. The deputy mayor/city administrator provided oral and written comments that are summarized at the end of this report, along with our response. The written comments are reprinted in their entirety in appendix III. 
The fiscal year 2001 performance accountability report includes most of the District’s significant activities, providing performance information for 66 District agencies that represent 83 percent of the District’s total expenditures of $5.9 billion during that year. The District included 26 additional agencies in this year’s report, compared with 40 in its prior report for fiscal year 2000. Appendix I lists the 66 agencies included in the District’s 2001 performance accountability report, along with the 2001 actual expenditures for each of these agencies. However, the absence of goals and measures related to educational activities remains the most significant gap. The District reports that it is continuing its efforts to include performance information on its significant activities in its performance accountability reports. For example, the 2001 performance accountability report notes that the District of Columbia Public Schools (DCPS) did not include performance goals or measures because they were in the early stages of a long-term strategic planning process initiated by the newly installed school board. DCPS accounted for about 14 percent of the District’s fiscal year 2001 actual expenditures, and public charter schools, which also were not included, accounted for another 2 percent of the District’s 2001 expenditures. The 2001 report states that in lieu of a formal performance accountability report for DCPS, the District included a copy of the Superintendent’s testimony before the Subcommittee on the District of Columbia, Committee on Government Reform, U.S. House of Representatives. The District acknowledged that the inclusion of this information does not fully comply with the statutory requirement and set forth a plan to include DCPS performance goals and measures in the fiscal year 2003 proposed budget and financial plan that will serve as the basis for the DCPS performance accountability report for fiscal year 2002. 
The 2001 report lists another 10 agencies that were not included, primarily, according to the report, because they did not publish performance goals and measures in the fiscal year 2002 proposed budget. These 10 agencies accounted for about $330 million in fiscal year 2001 actual expenditures, or about 6 percent of the District's total fiscal year 2001 actual expenditures. These agencies included the Child and Family Services Agency, which was under receivership until June 15, 2001 (with fiscal year 2001 actual expenditures of $189 million), and public charter schools (with fiscal year 2001 expenditures of $137 million). Although it may not be appropriate to include performance information for some agencies, the performance accountability report should provide a rationale for excluding them. For example, Advisory Neighborhood Commissions, according to the deputy mayor, have a wide range of agendas that cannot be captured in a single set of meaningful measures. Table 3 lists these 10 agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, the District also did not specifically include other areas constituting 11 percent of the District's fiscal year 2001 actual expenditures. In view of the District's interest in tying resources to results, the District could further improve its performance accountability reports by linking these budget activities, as appropriate, to the agencies that are responsible for these expenditures or by providing a rationale for their exclusion. For example, the Department of Employment Services administers the unemployment and disability funds (with fiscal year 2001 expenditures totaling about $32 million). Similarly, the Office of the Corporation Counsel administers the settlement and judgments fund, which was set up to settle claims and lawsuits and pay judgments in tort cases entered against the District (with fiscal year 2001 expenditures of about $26 million).
Table 4 contains a list of these budget activities and fiscal year 2001 actual expenditures. The goals in the fiscal year 2001 performance accountability report were consistent with the goals in the District’s 2001 performance plan. Using a consistent set of goals enhanced the understandability of the report by demonstrating how performance measured throughout the year contributed toward achieving the District’s goals. The District also used clear criteria for rating performance on a five-point scale and reported that these ratings were included in the performance evaluations of cabinet agency directors who had performance contracts with the mayor. In addition, according to a District official, the District will be able to provide information on any future changes made to its performance goals through its new performance management database. The District has made substantial progress in improving its performance planning and reporting efforts by focusing on measuring progress toward achieving a consistent set of goals. In our June 2001 review of the District’s 2000 performance accountability report, we had raised concerns that the District’s performance management process was in flux, with goals changing continually throughout the year. Further, the District did not discuss the reasons for these changes. This year, the goals were consistent and the District provided some information about upcoming changes that could be anticipated in fiscal year 2002 goals. In addition, according to the 2001 report, the District has developed a performance measures database to allow it to document changes to individual goals and measures that are proposed in the agencies’ fiscal year 2003 budget submissions. One of the District’s enhancements to its 2001 performance accountability report was reporting on a five-point performance rating scale, as compared to the three-point performance rating scale it used in its fiscal year 2000 report. 
The five-point scale was designed to be consistent with the rating scale used in the District's Performance Management Program, under which management supervisory service, excepted service, and selected career service personnel develop individual performance plans against which they are evaluated at the end of the year. The five ratings are: (1) below expectations, (2) needs improvement, (3) meets expectations, (4) exceeds expectations, and (5) significantly exceeds expectations. According to the fiscal year 2001 performance accountability report, this scale was used to evaluate the performance of cabinet agency directors who held performance contracts with the mayor. The report stated that 60 percent of each director's performance rating was based on the agency-specific goals included in the agency's performance accountability report, with the other 40 percent based on operational support requirements such as responsiveness to customers, risk management, and local business contracting. Our work has found that performance agreements can become an increasingly vital part of overall efforts to improve programmatic performance and better achieve results. We found that the use of results-oriented performance agreements strengthened alignment of results-oriented goals with daily operations, fostered collaboration across organizational boundaries, enhanced opportunities to discuss and routinely use performance information to make program improvements, provided a results-oriented basis for individual accountability, and maintained continuity of program goals during leadership transitions. The District's fiscal year 2001 performance accountability report reflected improvement in adhering to the statutory requirements in the Federal Payment Reauthorization Act. The District's 2001 report was timely and included information on the level of performance achieved for most goals listed.
It included the title of the District management employee most directly responsible for the achievement of each of the goals and the title of that employee's immediate supervisor, as required by the statute. We also found that the names and titles on the performance contracts of the cabinet-level officials we reviewed matched the names listed in the performance report as the immediate supervisor for all of the goals. Although the report contains information on certain court orders, it could be improved by providing clearer and more complete information on the steps the District government has taken during the reporting year to comply with those orders and by including updated information on the court orders applicable to the District as required by the act. The District identified the level of performance achieved for most of the goals in its 2001 report. The report contains a total of 214 performance goals associated with the 66 agencies covered. Of these 214 performance goals, 201 (or 94 percent) included information on whether or not the goal was achieved; the remaining 13 did not report the level of performance. As shown in table 1, the 13 goals that did not include the level of performance were associated with eight agencies. For example, the District's State Education Office did not provide this information for four of its seven goals because the reports and information needed to achieve the goals had not been completed. Although the District's 2001 performance accountability report included some information on certain court orders imposed upon the District and the status of its compliance with those orders, the act calls for a statement of the status of any court orders applicable to the District of Columbia government during the year and the steps taken by the government to comply with such orders.
The 2001 report contains information on the same 12 court orders involving civil actions against the District reported on for fiscal years 1999 and 2000. Among these 12 orders are 2 orders that the fiscal year 2001 report lists as no longer in effect in 2001. One of these court orders involved a receivership that terminated in May 2000. The other involved a maximum-security facility that closed at the end of January 2001. The 2001 report does not disclose whether or not any new court orders were imposed on the District during fiscal year 2001. The summaries that the District provides on the status of these court orders could be more informative if they contained clearer and more complete information on the steps taken by the District government to comply with the court orders. For example, according to the District’s 2001 report, the case Nikita Petties v. DC relates to DCPS transportation services to special education students and the timely payment of tuition and related services to schools and providers. The report’s summary on the status of this case states: “The School system has resumed most of the transportation responsibilities previously performed by a private contractor. A transportation Administrator with broad powers had been appointed to coordinate compliance with Court orders. He has completed his appointment and this position has been abolished.” This summary does not provide a clear picture of what steps the school system is taking to comply with the requirements resulting from this court order. The act, however, calls for the District to report on the steps taken by the government to comply with such orders. The District recognized in its 2001 performance and accountability report that its performance management system is a work-in-progress and stated that there are several fronts on which improvements can be made. 
In the spirit of building on the progress that the District has made in improving its performance accountability reports over the last 2 years, there are three key areas where we believe that improvements in future performance accountability reports are needed. First, the District needs to be more inclusive in reporting on court orders to more fully comply with the act’s requirements. Second, as part of the District’s emphasis on expanding its performance-based budgeting approach, the District needs to validate and verify the performance data it relies on to measure performance and assess progress, present this information in its performance accountability reports, and describe its strategies to address any known data limitations. Finally, the District needs to continue its efforts to include goals and measures for its major activities, and it should include related expenditure information to provide a more complete picture of the resources targeted toward achieving an agency’s goals and therefore help to enhance transparency and accountability. Since this is the third year that the District has had to develop performance and accountability reports, the District has had sufficient time to determine how best to present information on the status of any court orders that are applicable to the District of Columbia during the fiscal year and the steps taken to comply with those orders. However, the District has continued to report on the same 12 court orders for fiscal years 1999, 2000, and 2001. By limiting its presentation to the same 12 court orders, the District’s current report does not provide assurance that the information in its performance accountability report reflects court orders applicable during the fiscal year. 
Court orders have an important effect on the District’s performance, as reflected by the chief financial officer’s statement that the District’s “unforeseen expenses are often driven by new legislative imperatives, court-ordered mandates, and suits and settlements.” As another indication of their importance, 1 of the 11 general clauses in performance contracts with agency directors addresses the directors’ responsiveness to court orders. To make future reports more useful, the District should include information on the status of court orders it has not previously reported on as well as those applicable during the fiscal year, including those that may have been vacated during the fiscal year and the steps taken to comply with them. The District should establish objective criteria for determining the types of court orders for which it will provide specific compliance information for future performance accountability reports, and it should consider ways to provide summary information related to any other court orders. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with a court order. If the District government has not acted to comply with a court order it should include an explanation as to why no action was taken. The District’s 2001 report contains a statement that “Following the publication of the FY 1999 Performance Accountability Report, GAO and the District’s Office of Corporation Counsel agreed upon a list of 12 qualifying orders that should be included in the District’s future Performance Accountability Reports.” We did not intend to limit future reporting to only the 12 court orders first reported by the District for fiscal year 1999. We agreed on the list of 12 court orders because, at that time, the District had difficulty identifying all the court orders as required by statute. 
However, we believe that the District now has had time to develop criteria and a system for ensuring that updated and accurate information on the status of applicable court orders can be presented in its future performance accountability reports. Therefore, we are recommending that the mayor ensure that such steps are taken. The District has identified data collection standards as one of the areas it is working to improve. As with federal agencies, one of the biggest challenges the District faces is developing performance reports with reliable information to assess whether goals are being met or how performance can be improved. Data must be verified and validated to ensure the performance measures used are complete, accurate, consistent, and of sufficient quality to document performance and support decision making. Data verification and validation are key steps in assessing whether the measures are timely, reliable, and adequately represent actual performance. The District’s performance and accountability reports should include information obtained from verification and validation efforts and should discuss strategies to address known data limitations. As reported in our June 2001 report on the District’s fiscal year 2000 performance accountability report, the District had planned to issue performance review guidelines by the end of the summer of 2001. These guidelines were to be issued in response to an Inspector General’s finding that the agencies did not maintain records and other supporting documentation for the accomplishments they reported regarding the fiscal year 2000 performance contracts. The District included information in its fiscal year 2003 budget instructions regarding performance measures emphasizing the importance of high quality data. Although not required for agencies’ budget submissions, the guidance called for every agency to maintain, at a minimum, documentation on how it calculated each measure and the data source for each measure. 
In its 2001 performance accountability report, the District said it plans to address the development of data collection standards. The District plans to begin developing manuals to document how the data for each performance measure are collected, how the measure is calculated, and who is responsible for collecting, analyzing, and reporting the data. A further step the District can consider is ensuring that these data are independently verified and validated. A District official acknowledged that validating and verifying performance information is something the District would deal with in the future. Credible performance information is essential for accurately assessing agencies' progress toward the achievement of their goals and pinpointing specific solutions to performance shortfalls. Agencies also need reliable information during their planning efforts to set realistic goals. Decision makers must have reliable and timely performance and financial information to ensure adequate accountability, manage for results, and make timely and well-informed judgments. Data limitations should also be documented and disclosed. Without reliable information on costs, for example, decision makers cannot effectively control and reduce costs, assess performance, and evaluate programs. Toward that end, the District must ensure that its new financial management system is effectively implemented to produce crucial financial information, such as the cost of services at the program level, on a timely and reliable basis. Although the District has made progress in presenting program performance goals and measures, the 2001 report did not contain goals and measures for all of its major activities and it did not include information on other areas that accounted for 11 percent of its annual expenditures.
The District could enhance the transparency and accountability of its reports by continuing its efforts to ensure that agencies establish goals and measures that they will use to track performance during the year and by taking steps to ensure that agencies responsible for other budget activities (as shown in table 4) include these areas in their performance reports. The District did not include, for example, goals and measures for DCPS, although it did provide a copy of a testimony and stated that this was included, at least in part, to address concerns we had raised in our June 2001 report that the District’s fiscal year 2000 performance accountability report did not cover DCPS. The District also did not include another 10 agencies in its 2001 performance accountability report and indicated that it is taking steps to include relevant goals and measures for some of these agencies in the next year’s report. In addition to including goals and measures for the District’s significant activities, the District should consider including related expenditure information to help ensure transparency and accountability. We found, for example, that the Department of Employment Services administers the unemployment and disability funds but this information was not linked in the District’s 2001 performance accountability report. By linking expenditures to agencies that are responsible for them, the District can further improve its future performance accountability reports by providing a more complete picture of performance. The District, like several federal agencies, has found that it needed to change its performance goals—in some cases substantially—as it learned and gained experience during the early years of its performance measurement efforts. The District has continued to make progress in implementing a more results-oriented approach to management and accountability and issuing a timely and more complete performance accountability report. 
As we have seen with federal agencies, cultural transformations do not come quickly or easily, and improvements in the District’s performance management system are still underway. Despite the important progress that has been made, opportunities exist for the District to strengthen its efforts as it moves forward. In order to more fully comply with the Federal Payment Reauthorization Act of 1994, which requires the District to provide a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders, the mayor should ensure that the District establish objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with these court orders. If the District government has not acted to comply with the court orders it should include an explanation as to why no action was taken. In addition, the District should provide summary information related to other applicable court orders in its performance accountability reports. The Mayor of the District of Columbia should also ensure that future performance accountability reports include information on the extent to which its performance measures and data have been verified and validated and discuss strategies to address known data limitations, and include goals and performance measures for the District’s significant activities and link related expenditure information to help ensure transparency and accountability. On April 2, 2002, we provided a draft of our report to the mayor of the District of Columbia for his review. In response to our request, the deputy mayor/city administrator met with us on April 4 to discuss the draft and provided us with written comments on April 8. 
His written comments appear in appendix III. Overall, the deputy mayor stated that he agreed with the findings of the report and concurred with the report’s recommendations. He stated that clear and meaningful performance reports are essential to communicate the extent to which the District has or has not met its goals and commitments to make those improvements. Further, he stated that the findings and recommendations in this report were consistent with the District government’s intent of further improving its public reporting. The deputy mayor stated that the District would adopt our recommendation to develop objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. Our recommendation also stated that the District should more fully comply with the statute by reporting information on the steps taken by the District government to comply with these orders. The deputy mayor said that they would provide such additional information although he stated that the statute does not specifically require that this information be provided. However, the Federal Payment Reauthorization Act of 1994 (P.L. 103-373) section 456(b)(C) requires that the District’s performance accountability report contain “a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders.” We encourage the District government to comply with this requirement and concur with its comment that providing this information would make the report more informative and useful to Congress and the general public. The deputy mayor also concurred with our recommendation that the District’s future performance reports include information on the extent to which its performance data have been validated and verified. 
The deputy mayor said that seven District agencies participating in the District's performance-based budgeting pilot would be developing data collection manuals this summer. We encourage the District to proceed with this effort as well as to develop and report on strategies for addressing limitations in its data collection efforts. We have suggested in prior reports that when federal agencies have low-quality or unavailable performance data, they should discuss how they plan to deal with such limitations in their performance plans and reports. Assessments of data quality do not lead to improved data for accountability and program management unless steps are taken to respond to the data limitations that are identified. In addition, alerting decision makers and stakeholders to significant data limitations allows them to judge the data's credibility for their intended use and to use the data in appropriate ways. Regarding the independent verification of performance data, the deputy mayor stated that the District's ability to secure independent verification of more than selected goals and measures is limited by the resources available to the District's Office of the Inspector General (OIG). He said that the OIG conducted spot-check audits of selected scorecard goals in the fiscal year 2000 performance accountability report and, although these limited audits allowed the District to determine the validity of only those particular measures, the effort provided valuable observations and suggestions on how District agencies could improve their data collection practices. He also said that his office has discussed initiating additional spot-check audits of selected goals and measures with the OIG during fiscal year 2002. We agree that such spot checks would be useful. The knowledge that the OIG will be spot-checking some performance data during each fiscal year provides a good incentive to develop and use accurate, high-quality data.
In our prior work, we have encouraged federal agencies to use a variety of strategies to verify and validate their performance information, depending upon the unique characteristics of their programs, stakeholder concerns, performance measures, and data resources. In addition to relying on inspector general assessments of data systems and performance measures, the District can use feedback from data users and external stakeholders to help ensure that measures are valid for their intended use. Other approaches can include taking steps to comply with quality standards established by professional organizations and/or using technical or peer review panels to ensure that performance data meet quality specifications. The District can also test the accuracy of its performance data by comparing it with other sources of similar data, such as data obtained from external studies, prior research, and program evaluations. The deputy mayor said that the District would be making efforts to include additional agencies and budget activities in future performance reports. We encourage the District to proceed with these efforts. Of the 10 agencies that were not included in the fiscal year 2001 performance report, the District has already included 3 agencies (the Office of Asian and Pacific Islander Affairs, the Child and Family Services Agency, and the Office of Veteran Affairs) in its fiscal year 2002 performance plan issued in March 2002. In addition, the deputy mayor stated that three additional agencies (the Office of the Secretary, the Housing Finance Agency, and the National Capital Revitalization Corporation) would be included in the District’s consensus budget to be submitted to the Council of the District of Columbia in June 2002. 
With regard to the budget activities that were not included in the District’s fiscal year 2001 performance report, the deputy mayor agreed that it would be appropriate to develop performance measures for six funds, such as settlements and judgments and administration of the disability compensation fund. The deputy mayor acknowledged that establishing performance measures for administering an additional six funds, such as the Public Benefit Corporation, would have been appropriate, but noted that those funds no longer exist. The deputy mayor said that the District of Columbia Retirement Board manages two funds that had relevant performance measures in the District’s 2001 report. We noted, however, that these two retirement funds were not specifically identified in the 2001 performance accountability report. We are sending copies of this report to the Honorable Anthony A. Williams, Mayor of the District of Columbia. We will make copies available to others upon request. Key contributors to this report were Katherine Cunningham, Steven Lozano, Sylvia Shanks, and Susan Ragland. Please contact me or Ms. Ragland at (202) 512-6806 if you have any questions on the material in this report. The District’s fiscal year 2001 performance accountability report included 66 agencies accounting for 83 percent of the District’s operating budget for fiscal year 2001. Table 2 lists these agencies and their fiscal year 2001 actual expenditures. The District’s fiscal year 2001 performance accountability report did not include 10 District agencies, primarily because they did not publish performance goals in the District’s 2001 performance plan. Table 3 lists these agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, we identified several budget activities—accounting for 11 percent of the District’s total fiscal year 2001 actual expenditures—that were not included in the fiscal year 2001 performance accountability report.
Table 4 lists these activities and related fiscal year 2001 actual expenditures.
Congress passed the Omnibus Trade and Competitiveness Act of 1988 (the 1988 Trade Act) to achieve macroeconomic and exchange rate policies consistent with a sustainable current account balance. The law increases the executive branch’s accountability for assessing the impact of international economic and exchange rate policies on the economy. Among congressional concerns at the time was that the exchange rates of other countries placed competitive pressures on U.S. producers. The 1988 Trade Act directs the Secretary of the Treasury to analyze the exchange rate policies of foreign countries for the purpose of considering whether any are manipulating their currencies to gain an unfair trade advantage and to report on international economic policies, including exchange rates. To find that a country is manipulating the rate of exchange between its currency and the U.S. dollar within the meaning of the Trade Act, Treasury must determine that the country is manipulating the exchange rate for the purpose of gaining an unfair trade advantage or preventing effective balance of payments adjustments, and has a material global current account surplus and a significant bilateral trade surplus with the United States. If Treasury finds that a country is manipulating its currency as defined by the Trade Act, the act requires Treasury to initiate negotiations with that country to ensure a foreign currency exchange rate adjustment that eliminates the unfair trade advantage. Treasury’s international policy and exchange rate reports must meet eight reporting requirements, including an analysis of currency market developments, an assessment of the impact of the exchange rate of the dollar on three broad aspects of the U.S. economy, and an analysis of capital flows. (See app. II for the exact language of the law.) China and Japan follow different policies for determining their currency values.
China has, since 1994, when it unified its dual exchange rate system, pegged the value of its currency, the renminbi, to the U.S. dollar. Chinese authorities maintain this peg by standing ready to buy and sell renminbi in exchange for other currencies within a narrow band around the fixed rate. When there is an excess supply of foreign exchange at this rate, such as from surpluses in trade or net private capital flows, China’s purchases of that excess lead to an increase in its foreign reserves. China maintains controls on capital flows that to some extent limit the volume of transactions in the foreign exchange market, although these controls have not prevented substantial recent capital inflows. In contrast, the Japanese yen is on an independent float, which means that its value relative to other currencies is determined by demand and supply in the currency market. In the past, Japan has carried out significant interventions in the foreign exchange market through the sale of yen in exchange for U.S. dollars, which has put downward pressure on the value of the yen relative to the U.S. dollar. Nevertheless, from January 2002 through January 2005, the yen’s value relative to the dollar increased 22 percent, from 132 yen per U.S. dollar to 103 yen per U.S. dollar. Japan has not intervened in the foreign exchange market since March 2004. Although the Chinese and Japanese governments have carried out certain economic policies and practices related to their currencies’ values that have raised concerns among observers, Treasury has found in recent reports that neither country meets all the legal criteria for currency manipulation. Treasury’s overall approach to determining the presence of currency manipulation under the terms of the Trade Act includes screening countries and economies using a range of indicators to identify some for closer examination, applying legally mandated criteria, and considering multiple aspects of economic conditions and activities. 
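Percentage changes in exchange rates depend on which currency’s price is being measured. The short sketch below, using the yen-per-dollar figures cited above (132 in January 2002, 103 in January 2005), illustrates the two quoting conventions; the function name is ours, for illustration only.

```python
# Sketch: percentage-change conventions for exchange rate quotes,
# using the yen/dollar figures cited in the text.

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

yen_per_dollar_start = 132.0  # January 2002
yen_per_dollar_end = 103.0    # January 2005

# Change in the dollar's price measured in yen (a fall = weaker dollar).
dollar_change = pct_change(yen_per_dollar_start, yen_per_dollar_end)

# Change in the yen's price measured in dollars (the reciprocal quote).
yen_change = pct_change(1 / yen_per_dollar_start, 1 / yen_per_dollar_end)

print(f"Dollar change in yen terms: {dollar_change:.1f}%")  # about -22%
print(f"Yen change in dollar terms: {yen_change:+.1f}%")    # about +28%
```

The 22 percent figure in the text corresponds to the first convention, the decline in the yen-per-dollar quote; the reciprocal convention yields a somewhat larger appreciation figure for the yen.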
Although Treasury has cited Taiwan, Korea, and China for currency manipulation in the past, it has found no such instances since 1994. Treasury’s Office of International Affairs begins its analysis of currency manipulation by soliciting input from country desk officials responsible for monitoring economic activity. Treasury officials stated that they use analyses and information obtained throughout the year as the basis for determining whether a country is manipulating its currency. Treasury officials responsible for the currency manipulation analysis compile available information on exchange rates and other economic conditions. Treasury also collects information from external sources, such as private sector experts, and meets regularly with the IMF on broad international economic policy issues. Treasury officials use the collected data to identify those economies deserving closer examination. In addition to including bilateral trade surplus and global current account surplus information in this initial consideration, they also take into account other factors, such as changes in currency value, capital flow conditions, and country size. (Fig. 1 presents the ranking of economies with the largest bilateral trade surpluses with the United States, and fig. 2 presents the ranking of those same economies according to their current account balance as a percentage of gross domestic product.) Treasury does not usually scrutinize economies with large, obviously explainable, trade balances, such as major oil-exporting nations, for currency manipulation. On the other hand, Treasury reviews some economies regardless of economic indicators. For instance, Treasury consistently reviews the activities of major U.S. trading partners, such as Japan, the European Union, and Canada. It also monitors the three economies that it previously found to be manipulating their currencies— Taiwan, Korea, and China. 
Treasury selectively includes other nations in currency manipulation assessments when it determines that economic conditions merit. Treasury officials stated that they make a positive determination on currency manipulation only when all the conditions specified in the Trade Act are satisfied. According to these officials, to reach a positive finding of currency manipulation under the Trade Act, Treasury must find that the economies have a material global current account surplus and a significant bilateral trade surplus with the United States, and they are manipulating their currency with the intent of gaining trade advantage. Treasury has significant flexibility in determining whether countries meet these criteria. Treasury officials told us they do not have operational definitions of a “material” global current account surplus or a “significant” bilateral trade surplus. Treasury officials stated that they do not limit their analysis to the use of the material global current account surplus and significant bilateral trade surplus criteria listed in the Trade Act, but rather consider multiple aspects of the economy. Treasury officials also stated that they do not use a definitive checklist to make their determinations. 
Treasury officials told us that the country-specific economic and international trade factors they consider include the following:
- restrictions and regulations governing the use and retention of foreign exchange and international financial flows;
- movement of exchange rates, authorities’ intervention in foreign exchange markets, and the effectiveness of that intervention;
- accumulation of foreign exchange reserves;
- institutional development related to banking and financial sectors;
- macroeconomic indicators, including gross domestic product (GDP) growth rates, inflation, and unemployment rates;
- savings/investment balances and underlying factors;
- foreign investment and international portfolio investment flow patterns;
- trade regime barriers; and
- external shock factors such as financial crises, oil price hikes, or natural disasters.
The 1988 Trade Act does not require Treasury to determine if a currency is undervalued while performing its currency manipulation assessments. Although Treasury has in the past included observations on whether currencies were undervalued, it no longer does so. While Treasury officials told us they do not make an official determination on undervaluation, in its March 2005 report to Congress (discussed below), Treasury included measures of undervaluation among the indicators it considers in its manipulation analysis. Upon completion of the currency manipulation assessments, managers within the Office of International Affairs prepare recommendations for the approval of the Under Secretary for International Affairs. In the case of a positive finding of currency manipulation, Treasury initiates negotiations with officials of the economy in question, as called for by the Trade Act. Treasury generally summarizes the results of the currency manipulation assessments in its semiannual report to Congress, but does not explain how it weighs the multiple economic factors it analyzes when making its currency manipulation determinations.
Over time, Treasury reports have included varying lists of factors the department considers in conducting its currency manipulation analysis. Congressional concern over Treasury’s currency manipulation assessments led to a mandate in the fiscal year 2005 Consolidated Appropriations Act requiring Treasury to report on how the statutory requirements of the 1988 Trade Act could be clarified administratively to enable currency manipulation to be better understood by the American people and by Congress. Treasury issued its report on March 11, 2005. In this report, Treasury provided a high-level discussion of factors it considers when conducting its currency manipulation assessments, including measures of undervaluation, capital controls, and trade balances, and also described difficulties related to rendering manipulation assessments. Treasury did not—and was not required to—provide information on a country-specific basis about recent currency manipulation assessments. Since 1994, Treasury has not cited any economies for manipulating their currency as defined by the Trade Act. Treasury officials stated they have closely monitored recent economic behavior in China and Japan, due in part to the rapid accumulation of foreign currency reserves in those countries. Although Treasury has not cited China recently, it has engaged in discussions encouraging China to move to a more flexible exchange rate regime. Treasury did not find that Japan was manipulating its currency in 2003 and 2004. Treasury officials told us that they viewed Japan’s interventions as a part of macroeconomic policy aimed at combating deflation in Japan, and they expressed skepticism about the efficacy of intervention to affect the yen’s value. Since the enactment of the 1988 Trade Act, Treasury has identified three economies—Taiwan, Korea, and China—that manipulated their currencies under the Trade Act’s terms. Treasury first cited Taiwan and Korea in 1988 and China in 1992. Taiwan was cited again in 1992. 
Each citation lasted for at least two 6-month reporting periods for Taiwan and Korea, while China’s lasted for five reporting periods. Treasury reported evidence that the criteria for currency manipulation under the Trade Act had been met in most of these cases. At the time of their citations, Taiwan, Korea, and, on three occasions, China had relatively large bilateral trade surpluses with the United States and relatively large global current account surpluses. However, China, on two later occasions in the mid 1990s, had either a substantially declining current account surplus or a current account deficit when cited by Treasury for currency manipulation. The three economies also had other economic characteristics that Treasury considered when it determined they were manipulating their respective currencies. For instance, all three economies had also been rapidly accumulating foreign exchange reserves. In addition, for both Taiwan and Korea, Treasury found excessive restrictions on foreign exchange markets and capital controls and evidence of heavy direct intervention in foreign exchange markets by the authorities of Taiwan and Korea. In China’s case, Treasury was concerned by Chinese efforts in 1991 and 1992 to frustrate effective balance of payments adjustments through the use of a dual exchange rate system. Treasury cited continued devaluations of the official exchange rate and excessive controls on the market rates. (See app. III for more details on Treasury’s previous findings of manipulation for these three economies.) As required by the Trade Act, Treasury entered into negotiations with Taiwan, Korea, and China, and all three made substantial reforms to their foreign exchange regimes. In addition, their currencies appreciated and external trade balances declined significantly until they reached the point at which the three were removed from the list of currency manipulators. 
Treasury continues to monitor the policies and practices of these economies for evidence of currency manipulation. In recent reports Treasury has not found that either China or Japan meets the statutory criteria for currency manipulation. Since 2001 both countries have had periods of increasing current account surpluses and also periods of rapid accumulation of foreign exchange reserves. With respect to China, while Treasury did not report data on China’s global current account surplus for the second half of 2003 or the first half of 2004, Treasury officials stated that the surplus had not reached a material level. In April 2004, Treasury reported that China’s overall trade surplus had been 2.6 percent of GDP in the second half of 2003. In December 2004, Treasury reported that for the first half of 2004 China had an overall trade deficit of 1 percent of its GDP. In the same report, Treasury stated that while Chinese foreign exchange reserves had risen sharply, the accumulation was due in large part to steady foreign direct investment inflows and a sharp increase in other capital inflows. (See app. IV for more details on China’s external account development in recent years.) Treasury officials also stated that they do not think China’s current restrictions in foreign exchange markets and other administrative controls on trade are comparable to conditions in the early 1990s. At that time, important factors in Treasury’s determinations were China’s pervasive direct controls on external trade activities and a dual exchange rate regime with massive restrictions and controls. Since then, China has removed restrictions on the convertibility of the renminbi for trade transactions and substantially liberalized its trade regime, including implementing a variety of reforms related to its accession to the World Trade Organization in 2001. 
Since 1994, China has followed a policy of maintaining its currency peg to the dollar regardless of economic conditions, according to Treasury officials. For example, during the Asian financial crisis of the late 1990s, China kept the renminbi’s value steady rather than depreciating it to stay competitive with the cheaper currencies of other Asian exporting economies. While this helped maintain the stability of its own economy and the region, it was not consistent with a policy of keeping a cheap currency for trade advantage, according to Treasury officials. Despite the absence of a positive determination on currency manipulation, Treasury has stated that China should move from its long-term fixed exchange rate and has engaged in discussions with China to advocate a shift to market-based exchange rate flexibility. The Chinese government has indicated its willingness to move to a flexible exchange rate regime after undertaking a series of preparative steps but has established no specific timetable to complete them. To date, China has taken some steps to reduce barriers to capital outflows, liberalize interest rates, remove investment restrictions, and strengthen its financial infrastructure. Treasury has provided technical assistance to help China develop market mechanisms needed for the transition to a flexible regime, including central bank supervision of currency risk and regulation of foreign exchange derivative markets. With respect to Japan, Treasury officials stated that the country’s ongoing current account surplus reflects a long-term imbalance between savings and investment. In the last three exchange rate reports covering 2003 and 2004, Treasury noted that Japan justified its currency market interventions as a response to market overshooting, or excess volatility, and that such activity did not target particular exchange rate values. 
Treasury officials stated that Japan’s interventions were part of a macroeconomic policy aimed at combating domestic deflationary pressures. In addition, Treasury officials expressed general skepticism about the efficacy of intervention. Japan has not intervened to prevent the appreciation of the yen since March 2004. Treasury has generally complied with the reporting requirements mandated by the 1988 Trade Act (see table 1), although its discussion of U.S. economic impacts has become less specific over time. Treasury exchange rate reports have consistently included information responding to four requirements: (1) analysis of currency market developments, (2) evaluations of underlying conditions in the United States and other economies, (3) descriptions of currency market interventions, and (4) analysis of capital flows. Treasury can respond to a fifth reporting requirement, recommendations for changes necessary to attain a sustainable current account balance, at its discretion. A sixth requirement, reporting outcomes of negotiations, is only relevant when Treasury makes a finding for currency manipulation under section 3004 of the act, and Treasury has complied with this requirement when applicable. Treasury did not include updates for the seventh requirement—U.S.–IMF consultations—in six reports from 2001 to 2004. According to Treasury officials, by this time summaries and complete reports of IMF consultations with the United States had become publicly available on the Internet, and reporting on these consultations was unnecessary. The December 2004 report included an Internet link to IMF consultation information. Treasury has over time changed its approach for complying with its remaining requirement—an assessment of the impact of the exchange rate on the U.S. economy. According to Treasury officials and our analysis of the exchange rate reports, Treasury’s view of the role of exchange rates on the U.S. 
balance of payments and the economy in general has changed since 1988. Treasury’s reports generally discussed at least some elements of the impact-reporting requirement from the late 1980s through the 1990s. From 1988 into the early 1990s, Treasury’s reports generally discussed exchange rate effects on U.S. external balances and economic growth. From 1994 through 1999 and into 2000, Treasury reports generally advocated a “strong dollar” policy. Reports in 1994 through 1997 discussed specific U.S. benefits of such a policy, such as lower inflation and higher investment and economic growth. Treasury’s impact-related analysis after the 1990s cited the importance of broader macroeconomic and structural factors behind global trade imbalances. Treasury viewed exchange rates as one of several interacting economic variables needing attention to address global imbalances. For example, in the October 2003 and April 2004 reports, Treasury reported that the current account deficit represented the gap between savings and investment, and that its sustainability depended on the attractiveness of U.S. capital markets to foreign investors. Its analysis also emphasized the importance for U.S. economic interests of strong growth among U.S. trading partners. Treasury’s most recent report, in December 2004, did identify exchange rate flexibility for certain Asian economies as an area of policy the administration is following to reduce global imbalances. Given its broad approach to impact-related analysis, Treasury’s semiannual reports do not contain discrete examinations of the effect on the U.S. economy of changes in the dollar’s value. Thus, Treasury’s reports do not specifically address the impact of the dollar on aspects of economic activity listed in the 1988 Trade Act, including production, employment, and global industrial competition.
Treasury states that it does consider the impact of the exchange rate on these variables and that its broader approach meets the intent of the impact-reporting requirements set forth in the 1988 Trade Act. Many experts maintain that China’s currency is significantly undervalued, while some believe that undervaluation is not substantial or that calculating reliable estimates is not possible. Even among experts who believe that China’s currency is undervalued, there is no consensus on how and when China should move to a more flexible exchange rate regime and whether capital account liberalization, including, for example, lifting restrictions on outward flows of Chinese capital, should be part of that process. Most of the estimates we reviewed indicated that China’s currency is undervalued to some extent, with some experts suggesting substantial undervaluation and others slight misalignment. While there is no consensus methodology for determining whether a country’s currency is undervalued, experts have applied a number of commonly used approaches to the case of China. (See app. V for details of the various methodologies and their limitations.) These approaches generally involve determining an equilibrium exchange rate, broadly defined as the exchange rate that is consistent with a country’s economic fundamentals when the country is operating at full employment and in a free market. As table 2 illustrates, estimates of renminbi undervaluation range from none to over 50 percent. Some of these estimates are rough calculations based on “rule-of-thumb” assumptions, while others are based on formal models. In addition, some of these estimates may be most appropriately categorized as measures of near-term undervaluation or short-term pressure indicators. Moreover, the margins of error for these estimates are generally unknown.
The significant variation in estimates of renminbi undervaluation can be attributed in part to different methodological approaches, but similar methodologies can also yield different results. The absolute version of the Purchasing Power Parity (PPP) methodology, which determines the exchange rate at which identical goods would trade at the same price in both countries, produces estimates that generally show the renminbi is considerably undervalued. The External Balance approach is based on calculating an exchange rate that would result in a country achieving a sustainable balance in its external accounts, such as its current account balance or its trade balance. In the studies we reviewed, this approach generally produced estimates of currency undervaluation for China from 4 to 25 percent, with one estimate of 40 percent. Moreover, there are often significant differences in estimates even when similar methodologies are used. For example, experts who use the Behavioral Equilibrium Exchange Rate (BEER) approach, which uses econometric relationships between exchange rates and other economic variables to estimate an equilibrium exchange rate, have found renminbi undervaluation ranging from 11 to 47 percent. Some experts doubt that equilibrium exchange rates can be estimated and thus believe that whether a currency is under- or overvalued cannot be reliably determined. Treasury officials and some other experts we spoke with stated that estimating equilibrium exchange rates is especially challenging for developing economies with rapidly changing economic structures, such as China. According to Treasury, the determination of under- or overvaluation requires analysis of key economic variables, the measures for which are subject to considerable uncertainty in China. Moreover, determining an equilibrium exchange rate is especially difficult for China because China restricts the outflow of funds from the country. (See app. IV for a discussion of China’s capital controls.)
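As a rough illustration of the absolute PPP approach described above, the sketch below derives an implied PPP rate and undervaluation estimate from a hypothetical price basket. The basket prices are assumptions chosen only for illustration; the roughly 8.28 renminbi-per-dollar figure reflects the peg in effect at the time, but this is not a calculation from any of the studies reviewed.

```python
# Sketch of the absolute-PPP undervaluation calculation described above.
# Basket prices are hypothetical, for illustration only.

def ppp_undervaluation(price_home, price_us, actual_rate):
    """
    price_home:  price of an identical basket in local currency (renminbi)
    price_us:    price of the same basket in U.S. dollars
    actual_rate: actual exchange rate, local currency per dollar
    Returns percent undervaluation of the local currency.
    """
    ppp_rate = price_home / price_us  # rate that would equalize prices
    return (actual_rate - ppp_rate) / actual_rate * 100

# Hypothetical basket: 400 renminbi in China versus 100 dollars in the
# United States, against the pegged rate of about 8.28 renminbi/dollar.
print(f"{ppp_undervaluation(400, 100, 8.28):.0f}% undervalued")
```

With these illustrative inputs the implied undervaluation is on the order of 50 percent, which is consistent with the upper end of the range of estimates the text describes for the absolute PPP approach.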
Some observers and analysts view China’s growing foreign exchange reserves as evidence that the renminbi is undervalued. China’s foreign exchange reserves increased by $399 billion—185 percent—from the end of 2001 to the end of 2004. These observers maintain that the reserves, which partly reflect China’s surpluses in global trade and foreign direct investment (FDI), are evidence that the value of the renminbi is too low relative to the demand for renminbi-denominated goods, services, and other investments; as a result, China must purchase large amounts of dollars to keep the renminbi’s value from increasing beyond its U.S. dollar peg. Using reserve accumulations as evidence of a mismatch between the current value of the renminbi and its long-run equilibrium value has limitations, however, according to several analysts. China’s foreign reserve accumulation has several components: the current account balance, FDI net inflows, non-FDI net inflows (which include portfolio investment such as stocks and other investments), and undocumented capital—referred to as errors and omissions. China’s current account surpluses and FDI inflows were the primary components of the $117 billion increase in its reserves in 2003, together accounting for about 80 percent. Net non-FDI inflows and errors and omissions accounted for about 20 percent of the reserve increase. (See further details in app. IV.) Treasury has urged China to move to a market-based flexible exchange rate and take steps to remove restrictions on capital flows. There is debate regarding steps and timing on both issues. With respect to whether and when China should change its exchange rate policy, there are varying views even among experts who believe the currency is undervalued. Some experts have recommended that China immediately revalue the renminbi, either relative to the U.S. dollar or to a broader group of currencies.
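The reserve-accumulation decomposition described above can be sketched as follows. The component values are illustrative only, chosen to be consistent with the $117 billion total and the rough 80/20 split cited in the text; they are not actual balance of payments data.

```python
# Sketch of the reserve-accumulation decomposition described above.
# Component values (in $ billions) are illustrative, chosen only to
# match the rough 80/20 split cited in the text for China's 2003
# reserve increase; they are not actual balance of payments figures.

components = {
    "current account balance": 46.0,  # illustrative
    "net FDI inflows": 47.0,          # illustrative
    "net non-FDI inflows": 17.0,      # illustrative
    "errors and omissions": 7.0,      # illustrative
}

total = sum(components.values())  # 117.0, matching the cited increase
for name, value in components.items():
    print(f"{name}: {value / total:.0%} of the ${total:.0f} billion increase")

primary = components["current account balance"] + components["net FDI inflows"]
print(f"current account + FDI share: {primary / total:.0%}")  # about 80%
```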
Others have suggested that China should move to a more flexible system—with a freely floating exchange rate being the most flexible. Analysts have identified potential advantages of such policy changes for China and also for other countries. Analysts have also identified a number of challenges for China. For example, some experts have cautioned that there could be economic costs to China if the monetary authorities revalue the currency and guess wrong about how large the revaluation should be. They have stated that a small revaluation could encourage further speculative capital flows into the country in anticipation of a further revaluation, which would increase reserves. Some have also expressed concern that a large appreciation in the renminbi’s value could unnecessarily slow down the Chinese economy and worsen labor conditions in the country, which has high unemployment in certain regions. There are also varying views on changes in China’s policies regarding restrictions on capital flows. China currently restricts outward flows of Chinese capital for foreign direct investment and purchases of securities abroad, although it eased some restrictions in 2004. (See app. IV for additional information on these restrictions.) A number of advocates of greater exchange rate flexibility maintain that China is not ready for significant capital account liberalization and that the government should maintain some capital controls after moving to a more flexible exchange rate. One reason cited is that liberalization would expose China’s financial sector to risk if, for example, banks in China that are not financially strong experienced erosion of their deposit base from investors switching funds offshore. Several policy options advocated for China’s currency involve a gradual or multistep process, which proponents maintain could minimize the potential for adverse effects of revaluation. One expert, for example, has advocated a two-stage currency reform process for China. 
The first stage would entail pegging the renminbi to a group of currencies, including the dollar, rather than pegging to the dollar alone; a 15 to 25 percent revaluation; and setting a 5 to 7 percent band for renminbi fluctuation against the new currency basket. The second stage, which would occur only after adequate strengthening of China’s banking system, would be a significant liberalization of capital outflows and adoption of a managed float. A revaluation of the renminbi could have implications for various aspects of the U.S. economy—with both costs and benefits—although the impacts are hard to predict. First, a higher-valued renminbi would make Chinese exports to the United States more expensive and U.S. exports to China cheaper—with the extent depending on several factors—which could increase U.S. production and employment in certain sectors. Some groups could be negatively affected by a higher-valued renminbi, including U.S. producers who use imports from China in their own production and would face higher prices and costs of production. Consumers in the United States could also face higher prices. Finally, an upward revaluation of the renminbi could also affect flows of capital to the United States from China, which have in recent years accounted for a significant source of financing of the U.S. trade deficit. Although a revaluation of the renminbi relative to the dollar would tend to make U.S. exports to China cheaper and U.S. imports from China more expensive, just how much more expensive China’s imports would become—and the impact on the U.S. trade deficit, production, and employment—would ultimately depend on several factors. Some key factors include the following:

- How much of the exchange rate appreciation is “passed-through” to higher prices for U.S. purchasers. Experience with other nations generally shows that pass-through is less than complete, particularly in the short term, because contracts for exports to the United States may be written in dollars. Longer term, the extent of pass-through depends on factors such as the extent to which Chinese exports to the United States are made up of inputs from other countries (since these would become cheaper with a stronger renminbi), and the extent to which Chinese exporters reduce their costs or profit margins.

- The extent of the U.S. market response to the higher prices. In some markets, U.S. purchasers may continue to buy nearly the same volume of Chinese imports at the higher prices, while in others U.S. purchasers may decide to sharply reduce their purchases. The less responsive the overall U.S. demand is to price changes of Chinese imports, the less changes in the renminbi-dollar exchange rate will affect the U.S. trade balance, production, or employment. The same is true on the other side of the market; if Chinese demand for U.S. exports is unresponsive to the lower prices of U.S. goods, Chinese buyers will not buy much more in the short run even if prices of U.S. exports have fallen.

- The extent to which products now being manufactured in China would be produced in other countries rather than in the United States. It is probable that goods from other countries with low labor costs would replace a portion of Chinese exports to the United States if the renminbi were to increase in value, thus reducing the impact on the U.S. economy. Specifically, some experts believe that decreased imports from China would be largely replaced by slightly higher-priced imports from other low-income countries such as Sri Lanka, Vietnam, Bangladesh, and Pakistan, among others, instead of being manufactured in the United States.

- Whether other countries follow China and adjust their policies. Some analysts contend that the renminbi’s peg to the dollar induces other East Asian countries to intervene in currency markets to keep their currencies weak against the dollar so that they can remain competitive with China. Some believe that a revaluation by China might encourage other countries to change their exchange rate policies as well. This would magnify the impact of a revaluation on the United States.

- The time period necessary for these adjustments to take place. While a currency appreciation has some immediate effects, the impacts on trade statistics, production decisions, and employment generally take longer. In the short term, the U.S. trade deficit may increase as it takes more dollars to buy the same amount of Chinese products. As the higher prices are factored into new purchasing decisions, the appreciation would lead to effects on U.S. production and employment that could occur over a period of months or years.

(See app. VI for an additional discussion of these and other factors affecting the extent of revaluation impacts.) Changes in the value of a currency like the renminbi could affect the U.S. economy in a variety of ways, and assessing the effects is complex. For example, an increase in the renminbi’s value could affect the mix of jobs in certain sectors, benefiting those sectors that compete directly with foreign products. However, in terms of employment, many experts believe that a rise in the value of the renminbi relative to the dollar would be unlikely to have much, if any, effect on aggregate employment in the United States. This is because the overall level of U.S. jobs is generally viewed as being largely determined by factors such as the domestic labor supply and broader macroeconomic factors such as U.S. monetary policy. In addition, an increase in the value of the renminbi could have other types of impacts that affect the economy more broadly, such as influencing the prices of goods and interest rates.
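The interaction of pass-through and demand response described above can be illustrated with a stylized calculation. This is a minimal sketch: the 50 percent pass-through rate and the demand elasticity of 1.0 are illustrative assumptions for this sketch, not estimates drawn from this report.

```python
def import_price_change(revaluation, pass_through):
    """Fractional rise in U.S. dollar prices of Chinese imports, given a
    renminbi revaluation and the share of the appreciation passed through
    to U.S. purchasers."""
    return revaluation * pass_through

def import_volume_change(price_change, demand_elasticity):
    """Approximate fractional change in import volume, using a (positive)
    price elasticity of U.S. demand for imports."""
    return -demand_elasticity * price_change

# Illustrative assumptions: a 20 percent revaluation, 50 percent
# pass-through, and unit elasticity of U.S. demand for Chinese imports.
dp = import_price_change(0.20, 0.50)   # import prices rise about 10 percent
dq = import_volume_change(dp, 1.0)     # import volume falls about 10 percent
```

With an elasticity below 1, the dollar value of imports would rise even as volumes fell, which is consistent with the short-term widening of the trade deficit noted above.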
Examples of groups that would be expected to benefit from an upward revaluation of the renminbi include:

- U.S. firms and workers exporting to China—U.S. exports would become cheaper for Chinese consumers.
- U.S. firms and workers producing goods that compete with Chinese imports—Chinese imports would become more expensive for U.S. consumers.
- Low-wage countries other than China—Their exports could displace Chinese exports to the United States.
- U.S. investors in China—The value of assets in China would increase.

Examples of groups that would be expected to experience some losses from an upward revaluation of the renminbi include:

- U.S. consumers—Imports from China would cost more.
- Certain U.S. producers—Firms that import Chinese components in the production of final goods would pay more for those components.
- Borrowers in U.S. capital markets—A possible decrease in capital flows from China could increase pressure on U.S. interest rates.
- Multinational firms in China—The cost of production in dollars would increase and possibly raise the prices of final goods shipped to the United States.

Discussions of a revaluation of the renminbi have tended to focus on the outcome for workers in the U.S. manufacturing sector because U.S. employment in this sector has shrunk considerably in recent years and is believed to be sensitive to international trade. Predicting the manufacturing sector production and employment effects of a change in the renminbi’s value is complex and is related to changes in trade flows. Therefore, some analysts have used estimates of changes in the U.S. trade deficit to estimate potential manufacturing production and employment effects, at least over the short run, although such linkages involve further uncertainties. The following exercise illustrates how possible impacts of a renminbi revaluation on the U.S. trade deficit could vary under different assumptions.
The estimates use as a starting point an assumption for the relationship between the overall exchange rate of the dollar and the U.S. trade deficit from the IMF’s April 2004 World Economic Outlook and then illustrate the impact of additional assumptions regarding exchange rate pass-through, import displacement, and follow-on exchange rate adjustments (see table 3). These assumptions are not analytically precise, and other researchers have used different assumptions. As shown in the table, with a hypothetical upward revaluation of 20 percent, the estimates for trade deficit reduction due to a revaluation of the renminbi under these assumptions range from $3.3 billion to $13.3 billion, depending on pass-through, the displacement effect, and follow-on exchange rate adjustments. Estimates outside the range provided here could be obtained using different assumptions. These estimates could change further by accounting directly for other factors such as the sensitivity of U.S. demand to price changes of Chinese imports. Some analyses have drawn conclusions about the impact of exchange rate changes on U.S. manufacturing jobs by using assumptions in addition to those employed above. For example, one analysis used the assumption that a $1 billion increase in the U.S. trade deficit would lead to a decline in U.S. manufacturing jobs of about 15,000. Applying such a value to estimates of a 20 percent renminbi revaluation, under the assumptions shown in scenario 3, would lead to estimates of manufacturing sector job impacts of about 49,800 jobs. Under scenario 4, with the additional assumption of follow-on exchange rate adjustments if the renminbi were revalued, the manufacturing sector job impact estimate would be 199,000. These analyses have limitations. Researchers have observed that trade affects the demand for manufacturing labor in complex ways, particularly with respect to imported goods and components.
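The job-impact arithmetic described above is simple enough to reproduce. The sketch below uses the 15,000-jobs-per-$1-billion assumption cited above together with the rounded $3.3 billion to $13.3 billion range of deficit-reduction estimates; the published figures of about 49,800 and 199,000 jobs were evidently computed from unrounded deficit estimates, so the results here differ slightly.

```python
JOBS_PER_BILLION = 15_000  # assumed manufacturing jobs per $1 billion of trade deficit

def manufacturing_job_impact(deficit_reduction_billions,
                             jobs_per_billion=JOBS_PER_BILLION):
    """Rule-of-thumb manufacturing job impact implied by a reduction in
    the U.S. trade deficit (deficit reduction in billions of dollars)."""
    return deficit_reduction_billions * jobs_per_billion

low = manufacturing_job_impact(3.3)    # about 49,500 jobs
high = manufacturing_job_impact(13.3)  # about 199,500 jobs
```

The calculation makes clear that the job estimates scale linearly with both the assumed deficit reduction and the assumed jobs-per-billion ratio, which is one reason the scenario estimates span such a wide range.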
Moreover, as noted above, the long-run level of employment in the economy is generally viewed as being determined by demographic and broader macroeconomic factors such as monetary policy. Thus, to the extent there are manufacturing sector job impacts of a renminbi revaluation, they may be offset by job losses in other sectors of the economy. Capital flows must also be considered in an assessment of the implications of a renminbi revaluation. The U.S. bilateral trade deficit with China—and its maintenance of a fixed exchange rate to the dollar—has been accompanied by an inflow of funds into U.S. capital markets from China. This has occurred during a period of an overall rise in inflows of foreign capital accompanying increasing U.S. trade and current account deficits. To the extent that a revaluation of the renminbi would lead to a decrease in the U.S. global current account deficit, it would also be associated with lower capital inflows. Such capital inflows—U.S. borrowing from foreign sources—can benefit the United States by lowering interest rates and stimulating investment and consumption. However, U.S. interest payments on this foreign-held debt are sent abroad. In addition, some analysts believe that U.S. dependence on inflows of foreign capital carries risk because of the potential for foreign investors to decide to hold or purchase less U.S. debt. The potential for, and consequences of, a widespread withdrawal of investment funds from U.S. markets has recently been debated. While some analysts believe that the effects of a foreign withdrawal from U.S. financial markets—or a reduction in foreign purchases of U.S. debt—would have limited effects over the long run, some acknowledge that short-run disruptions, such as the loss of value of assets and higher interest rates, could be significant. According to Treasury data, about 44 percent of the total value of outstanding U.S. Treasury securities held by the public is held by foreigners. 
At the end of 2004, China held 4.2 percent of the total holdings of outstanding U.S. Treasury securities, which is about 10 percent of the foreign-held total (see fig. 3). By far the largest holder of U.S. Treasury securities is Japan, which holds 16.6 percent. The United Kingdom, with 3.0 percent, is third behind China. As figure 4 illustrates, China was one of the largest purchasers of U.S. Treasury securities from 2001 to 2004—$95.4 billion, compared to $367.4 and $168.1 billion for Japan and the United Kingdom, respectively. Like other foreign central banks, China’s central bank has chosen to purchase large quantities of U.S. Treasury securities with renminbi in part because it can buy and sell them quickly with minimal market impact. Figure 4 also shows that, in recent years, China has been a strong purchaser of other types of U.S. securities, especially agency bonds, according to data from the Treasury International Capital (TIC) reporting system. Between 2001 and 2004 China purchased on net about $243.5 billion in total U.S. securities, behind the United Kingdom and Japan. (See app. VII for more data on net purchases of U.S. Treasury securities by China and other countries.) While we make no recommendations in this report, we believe that our analysis provides important insights into the debate over exchange rates and U.S. government assessments of currency manipulation. The debate involves several issues that are related, but distinct. The first is currency manipulation. Assessing currency manipulation under the terms of U.S. law is complex and involves both country-specific and broader international economic factors. A second issue is undervaluation of currencies. Countries with undervalued currencies are presumed to obtain trade benefits from the undervaluation and therefore are often assumed to be manipulating their currencies to maintain these benefits.
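The relationship between the two percentages cited above can be checked directly: if foreigners hold about 44 percent of outstanding Treasury securities and China holds 4.2 percent of the total, China’s share of the foreign-held portion is roughly 10 percent. A minimal sketch of the arithmetic:

```python
def share_of_foreign_held(country_pct_of_total, foreign_pct_of_total):
    """A holder's share of foreign-held securities, given its share of all
    outstanding securities and the overall foreign-held share (both in
    percent of the total outstanding)."""
    return country_pct_of_total / foreign_pct_of_total * 100

china_share = share_of_foreign_held(4.2, 44.0)  # about 9.5 percent
```

By the same arithmetic, Japan’s 16.6 percent of the total corresponds to well over a third of the foreign-held portion.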
Many experts tend to focus on undervaluation—which Treasury is not required to determine. A third issue is the policy response that is expected from nations that are the focus of the debate. For example, experts who believe that China’s currency is undervalued have varying views about what action China should take, including whether certain policy options entail risks to China’s economy. In this report, we have tried to keep these issues distinct, because we believe it aids in clarifying the debate. The level of concern over exchange rate issues—especially with respect to China—is not surprising given the continuing growth of the U.S. trade deficit, the rapid growth of China’s exports to the United States, and the recent depreciation of the dollar against several major currencies. In addition, as trade agreements reduce many of the industry-specific barriers to world trade, there has been a shift in attention toward the macroeconomic aspects of trade, which include exchange rates as well as national savings and investment rates. News that China’s trade and current account surpluses were higher than expected in 2004 increases the need for good information on factors affecting international trade and financial flows, especially with respect to China, and the implications of these flows for the United States. Congress recently required Treasury to provide information on aspects of its reporting under the 1988 Trade Act, to facilitate better understanding by the American people and Congress. Treasury’s March 2005 report in response to this mandate provided a high-level discussion of key factors Treasury considers in its currency manipulation assessments and shed light on the complexities of the assessments but did not provide—and was not required to provide—country-specific information about Treasury’s recent assessments. Since then, Members of Congress have continued to propose legislation to address China currency issues.
We believe that the analysis in this report provides a basis for further discussion of currency manipulation concerns. We provided a draft report to the Department of the Treasury. Treasury provided written comments, which are reprinted in appendix VIII. Treasury stated that the report is generally thoughtful and hopes that it will contribute to increased understanding of the complex issues covered in its exchange rate reports. Treasury also emphasized several aspects of its exchange rate assessments and its reports. For example, with respect to reporting on U.S. economic impacts, Treasury stated that when conducting its analysis it does consider how the exchange rate of the dollar affects areas such as the sustainability of the current account deficit, production, and employment. Treasury stated that it believes it is often more helpful to look at underlying developments that affect exchange rates and other macroeconomic conditions rather than to achieve a false sense of precision by isolating the exchange rate in the analysis. Treasury also provided technical comments, which we incorporated in the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of its issuance. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Treasury, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-4128 or at [email protected]. Other GAO contacts and staff acknowledgments are listed in appendix IX. 
The Chairs of the Senate Committee on Small Business and Entrepreneurship and the House Committee on Small Business asked us to review the Department of the Treasury’s efforts to fulfill its legal obligations under the 1988 Trade Act and related issues. We examined (1) the process Treasury uses to conduct its assessments of currency manipulation and the results of recent assessments, particularly with respect to China and Japan; (2) the extent to which Treasury has met the 1988 Trade Act reporting requirements; (3) experts’ views on whether or by how much China’s currency is undervalued; and (4) the implications of a revalued Chinese currency for the United States. To determine the process Treasury uses to conduct its currency manipulation assessments and the results of recent assessments, particularly with respect to China and Japan, we reviewed the legal provisions of the 1988 Trade Act requiring Treasury to analyze foreign currency manipulation, and the act’s legislative history. We also interviewed responsible Treasury officials to better understand the assessment process. In addition, we reviewed Treasury exchange rate report findings on whether other countries are manipulating their currencies. Specifically, we examined the conditions cited in the Treasury reports that led to determinations of currency manipulation for Taiwan, Korea, and China from 1988 to 1994. We also examined the changes in the economies’ conditions that led to removals of citations or, in some cases, subsequent citations for these economies; and we interviewed Treasury officials to understand Treasury’s reasoning behind its findings for China and Japan. We interviewed IMF officials to obtain information on Treasury’s consultative process with IMF. To gain a broader perspective on the economic conditions of China and Japan, we examined recent domestic and international economic data and information on those two countries’ current exchange rate regimes and practices.
To determine the extent of Treasury’s compliance with reporting requirements, we reviewed all of Treasury’s exchange rate reports since 1988. We analyzed the reports and categorized our assessment of Treasury’s compliance for each of the eight reporting requirements. In addition, we interviewed Treasury officials to discuss Treasury’s recent efforts to address the requirement to assess the impact of the exchange rates on the U.S. economy. Finally, for verification, we compared statements of Treasury officials with the exchange rate reports. To obtain experts’ views on whether or by how much China’s currency is undervalued, and on the implications of its value for the United States, we identified studies and views of economists with expertise in the area that had been cited in congressional testimony and in other prominent policy forums, reviewed those and related studies, and interviewed a selection of experts spanning the spectrum of opinions on Chinese currency valuation. GAO economists reviewed these research papers and testimonies solely to describe the analyses and differences among them. We include the results of these studies to show that estimates of undervaluation for China vary widely and that analysis of the impact on the U.S. economy is complex; their inclusion does not imply that we deem them definitive. To describe and analyze country economic data and indicators used by many of these experts, we used data from the International Monetary Fund’s (IMF) World Economic Outlook and other sources, including the Bureau of Labor Statistics and the Federal Reserve Board. We also obtained foreign exchange reserve data from Global Insight and data on Japanese interventions for the 2000 to 2004 period from Japan’s Ministry of Finance. We used U.S. trade statistics compiled by the Department of Commerce’s statistical agencies to analyze the composition and trends in the U.S. merchandise trade deficit.
We note that there are significant differences between U.S.–China bilateral trade data reported by the United States and that reported by China. We did not conduct an evaluation of these differences, which others have attributed to general differences in how imports and exports are valued, how the United States and China record imports and exports shipped through Hong Kong, and the quality of Chinese statistics. The reliability of Chinese statistics may also impact IMF’s statistics because much of the data used by IMF is self-reported by member countries. We determined that these data are sufficiently reliable for our purposes of presenting and analyzing trends in trade patterns and basic economic trends for China. In addition, to describe a range of views on how China might move to an alternative exchange rate value or regime, we identified several representative policy suggestions from the studies we reviewed and the experts we consulted regarding assessments of whether China’s currency is undervalued. To describe the implications of a revalued Chinese currency for the United States, we identified and reviewed studies that had been cited in congressional testimony and other policy forums, and by research institutions including the IMF. We discussed these studies with several experts spanning a range of views. To illustrate how estimates of the effects of exchange rates on U.S. manufacturing jobs depend on key assumptions, we identified assumptions from studies we reviewed and made illustrative calculations using different assumptions. These assumptions are not analytically precise, and we did not present particular estimates as being superior to others. Alternative combinations of assumptions or alternative assumptions can yield impact estimates outside the ranges presented in our analysis. 
The hypothetical percentages of undervaluation and assumptions are for illustrative purposes; the illustration does not imply that GAO has taken a position on the value of China’s currency or its actual impact on the U.S. economy. We also obtained data on hourly compensation costs from the Bureau of Labor Statistics to provide background for our discussion of the role of labor costs in international competitiveness. We determined that the data are sufficiently reliable for the purpose of illustrating substantial variations in labor costs across countries. However, the data are partially estimated and thus the statistics should not be considered precise measures of comparative costs and are subject to revision. For some foreign economies, the estimates are based on less than one year of data. There may also be variations in the definitions, scope, coverage, and methods used in compiling the data and in its presentation. These include the treatment of the financing of social security and the systems of taxes or subsidies. In addition, we calculated the portion of U.S. Treasury bills and corporate equities held by the two countries using the U.S. Treasury International Capital Reporting System (TIC) and the Federal Reserve Board’s Flow of Funds data to present information on China and Japan’s weight in U.S. capital markets. We used these data because they constitute the only data available for these transactions, but we note in presenting the information that because of the way the data are collected there is a bias toward overcounting flows to countries that are major financial centers and toward undercounting flows to other countries. As a result, excessive foreign holdings may be attributed to some countries that are major custodial centers, such as the United Kingdom, Switzerland, Belgium, and Luxembourg. 
Moreover, because the Bureau of Economic Analysis adjusts the TIC data somewhat before it reaches the Federal Reserve Board and because of timing issues, the data on total foreign holdings from the two sources have slight but insignificant differences. We determined that the data are sufficiently reliable for our purpose of illustrating whether China and Japan are major holders or purchasers of U.S. securities. We note, however, that as a result of the limitations identified, GAO calculations of the percentage of U.S. securities held by Japan and China based on the primary TIC and Federal Reserve data should be viewed as approximations. In addition to the bias detailed above, the raw transactions data (net purchases) documented in figure 5 and the associated tables in appendix VI may contain errors due to the manner in which repurchase and securities lending transactions are recorded within the TIC system. Because these transactions are known to be substantial, producers of the data note that this could produce significantly inaccurate data. Moreover, because these data include commissions and taxes associated with each transaction, the result is a slight overestimation of net purchases. These data are also revised periodically. The TIC system is the official source of these data; it is widely used by outside experts, and the limitations are not particular to any one country. Therefore, we determined that the data were sufficiently reliable for a comparison of net purchases of U.S. securities by China with other major purchasers and for generally assessing the role of China in U.S. financial markets. However, the data must be interpreted with caution because recent transaction data may have overstated net foreign purchases of U.S. securities, especially debt instruments. To verify the reliability of most data sources, we performed several checks to test the data’s accuracy or we reviewed limitations, wherever possible.
We reviewed agency or company documents related to their quality control efforts and conferred with GAO’s statistical expert for relevant data. For several sources, we tracked secondary data to the source data and reviewed other experts’ uses and judgments of that data. For several sources, we compared the raw data, or the descriptive statistics computed using the data, with equivalent statistics from other sources. We determined that the data sources we used were sufficiently reliable for the purposes of this audit. Although in many cases there were limitations, they are generally minor in the context of this report. We were unable to conduct a review of the Japanese Ministry of Finance intervention data. However, given that the Ministry of Finance is the primary and official source of these data and they are widely used by outside experts and policymakers, including the Federal Reserve Bank of New York, we have included some of the data in this report for illustrative purposes. We conducted our work from September 2003 through February 2005 in accordance with generally accepted government auditing standards.

Omnibus Trade and Competitiveness Act of 1988 (Pub. L. No. 100-418, §§ 3004(b) and 3005)

Sec. 3004. International Negotiations on Exchange Rate and Economic Policies.

(b) Bilateral Negotiations—The Secretary of the Treasury shall analyze on an annual basis the exchange rate policies of foreign countries, in consultation with the International Monetary Fund, and consider whether countries manipulate the rate of exchange between their currency and the United States dollar for purposes of preventing effective balance of payments adjustments or gaining unfair competitive advantage in international trade.
If the Secretary considers that such manipulation is occurring with respect to countries that (1) have material global current account surpluses; and (2) have significant bilateral trade surpluses with the United States, the Secretary of the Treasury shall take action to initiate negotiations with such foreign countries on an expedited basis, in the International Monetary Fund or bilaterally, for the purpose of ensuring that such countries regularly and promptly adjust the rate of exchange between their currencies and the United States dollar to permit effective balance of payments adjustments and to eliminate the unfair advantage. The Secretary shall not be required to initiate negotiations in cases where such negotiations would have a serious detrimental impact on vital national economic and security interests; in such cases, the Secretary shall inform the chairman and the ranking minority member of the Committee on Banking, Housing, and Urban Affairs of the Senate and of the Committee on Banking, Finance and Urban Affairs of the House of Representatives of his determination.

Sec. 3005. Reporting Requirements.

(a) Reports Required—In furtherance of the purpose of this title, the Secretary, after consultation with the Chairman of the Board, shall submit to the Committee on Banking, Finance and Urban Affairs of the House of Representatives and the Committee on Banking, Housing, and Urban Affairs of the Senate, on or before October 15 each year, a written report on international economic policy, including exchange rate policy. The Secretary shall provide a written update of developments six months after the initial report. In addition, the Secretary shall appear, if requested, before both committees to provide testimony on these reports.
(b) Contents of Report—Each report submitted under subsection (a) shall contain
(1) an analysis of currency market developments and the relationship between the United States dollar and the currencies of our major trade competitors;
(2) an evaluation of the factors in the United States and other economies that underlie conditions in the currency markets, including developments in bilateral trade and capital flows;
(3) a description of currency intervention or other actions undertaken to adjust the actual exchange rate of the dollar;
(4) an assessment of the impact of the exchange rate of the United States dollar on (A) the ability of the United States to maintain a more appropriate and sustainable balance in its current account and merchandise trade account; (B) production, employment, and noninflationary growth in the United States; (C) the international competitive performance of United States industries and the external indebtedness of the United States;
(5) recommendations for any changes necessary in United States economic policy to attain a more appropriate and sustainable balance in the current account;
(6) the results of negotiations conducted pursuant to section 3004;
(7) key issues in United States policies arising from the most recent consultation requested by the International Monetary Fund under article IV of the Fund’s Articles of Agreement; and
(8) a report on the size and composition of international capital flows, and the factors contributing to such flows, including, where possible, an assessment of the impact of such flows on exchange rates and trade flows.

At different times during the period from 1988 to 1994, Treasury found that Taiwan, Korea, and China manipulated their currencies under the terms of the 1988 Trade Act. The conditions leading to their first citations and the changes in conditions that later led to their removal are listed below.
This appendix presents an overview of recent economic conditions for China and Japan that are relevant to exchange rate policies. These include economic growth, external account balances, foreign exchange reserves, exchange rate movements, currency exchange rate regimes, and direct interventions in foreign exchange markets by national authorities. China has experienced high rates of economic growth in recent years. According to IMF-reported country data, the Chinese economy grew at annual rates of 7.1 percent to 9.6 percent during 1996 to 2004 (see fig. 5). Although economists have questioned the quality of Chinese national account statistics, there is a general consensus that the Chinese economy has grown rapidly during the past 2 years. In fact, the Chinese government has implemented policies since mid-2003 to slow economic growth because of concerns about overheating the economy. China’s economic growth has been accompanied by a large total trade volume, which was 59 percent of gross domestic product (GDP) in 2003 and 73 percent of GDP according to preliminary 2004 data. The large trade volume has been accompanied by China’s consistently positive current account balance. While China’s current account surplus declined from around 3.3 percent of GDP in 1998 to less than 2 percent in 1999 to 2001, it rose to 2.8 percent in 2002 after accession to the World Trade Organization and then to 3.2 percent in 2003. Preliminary data for 2004 indicated a surplus of 4.2 percent. (See fig. 6.) The Chinese government has rapidly accumulated foreign exchange reserves in recent years, which some observers have seen as evidence of currency undervaluation and manipulation. China’s total foreign exchange reserves (excluding gold and other assets at the IMF) reached $614.5 billion by the end of 2004. As figure 7 shows, this represents approximately three times the level of China’s reserves in 2001. 
Changes in China’s foreign exchange reserves have several components: changes in the current account balance, changes in net flows of foreign direct investment (FDI), changes in net non-FDI flows, and undocumented capital—or errors and omissions. Both China’s current account surplus and net FDI inflows were major components of the reserve increases from 2001 through 2003. (See table 4.) In addition, changes in non-FDI net inflows (defined as portfolio investment and other investment) and errors and omissions have also been important to the reserve increases. These components had been strongly negative—meaning significantly greater outflows than inflows—in 1999 and 2000, which had worked to dampen China’s reserve accumulation. However, the balance changed and in 2003 non-FDI flows and errors and omissions were strongly positive. One reason for the increase in these inflows into China is large speculative inflows that may be driven by expectations of an upward revaluation of the renminbi. The basic relationship between China’s current account balance and capital and financial account flows is also depicted in table 4. For 2003, the last year for which complete data is available, China had a current account surplus of $45.9 billion accompanied by a capital account surplus of $52.8 billion. Maintaining large surpluses in both current and capital accounts is relatively unusual compared to other countries. For example, the United States has had in recent years a current account deficit financed by a capital account surplus; that is, the United States borrows from foreigners to purchase goods. Japan, in contrast, has generally had in recent years a current account surplus and a deficit in its capital account, including a net outflow of FDI. China’s net capital inflow in 2003 was predominantly in the form of direct investment. This is in part because China has a relatively open door policy on FDI but restricts other forms of foreign investment. 
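The decomposition in table 4 is essentially the balance-of-payments identity: the change in official reserves equals the current account balance plus all net capital inflows, including undocumented flows recorded as errors and omissions. A minimal sketch follows; the current account figure matches the $45.9 billion cited above for 2003, but the split of the capital-flow components is hypothetical, chosen only so the pieces sum to something near the combined surpluses reported:

```python
def reserve_change(current_account, net_fdi, net_non_fdi, errors_omissions):
    """Change in official foreign exchange reserves as the sum of the
    current account balance and net capital inflows (including
    undocumented flows, i.e., errors and omissions)."""
    return current_account + net_fdi + net_non_fdi + errors_omissions

# Figures in billions of U.S. dollars. The current account surplus
# ($45.9B) is the 2003 value cited in the text; the other three
# components are illustrative only, not reported data.
delta_reserves = reserve_change(
    current_account=45.9,
    net_fdi=47.0,
    net_non_fdi=5.8,
    errors_omissions=18.0,
)
```

Under this identity, the strongly negative non-FDI flows and errors and omissions of 1999 and 2000 subtract from reserve accumulation, while the strongly positive 2003 flows add to it.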
China has, since the fall of 1994, had a de facto fixed exchange rate regime, as classified by the IMF, with its exchange rate pegged to the dollar (see fig. 8). Prior to that point, China maintained a dual exchange rate regime with an official fixed rate and market-negotiated rates. The official fixed rate was devalued several times before it was unified with the prevailing market rate in early 1994, and the exchange rate regime was officially changed to a managed float. The renminbi began to appreciate slightly (to 8.3 renminbi per U.S. dollar) soon after the unification, mainly due to export growth caused by a wave of foreign direct investment. Chinese authorities decided to hold the rate within a small band of 0.25 percent. By 1998, the exchange rate had been allowed to appreciate slightly to 8.28 renminbi per U.S. dollar, with a narrow band, where it has stayed until the present. Between 1986 and 1994, China had a dual exchange rate regime in which the official fixed exchange rate coexisted with the market-negotiated rates in Foreign Exchange Adjustment Centers (also called swap centers). The official rate applied to trade transactions and other activities that were controlled by state planning. Market rates, which valued the renminbi significantly lower than the official rate did (suggesting that the official rate was overvalued), applied to all other activities. By 1993, the official rate was 5.7 renminbi per U.S. dollar and the market rate was 8.7 renminbi per U.S. dollar. It is the real effective exchange rate that affects Chinese products’ trade competitiveness. Although the nominal exchange rate of Chinese currency has remained relatively stable since 1994, the real effective exchange rate of Chinese currency has shown variations since 1994 (see fig. 9). The variation is parallel to that of the U.S. dollar because the renminbi has been pegged to the dollar. 
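The nominal-versus-real distinction drawn above can be sketched with a simple bilateral formula; this is a simplification of the multi-country, trade-weighted real effective rate, and the price indexes below are hypothetical:

```python
def real_rate(nominal_rmb_per_usd, us_price_index, china_price_index):
    """Bilateral real exchange rate: the nominal rate adjusted for
    relative price levels. A higher value means the renminbi is
    weaker against the dollar in real terms."""
    return nominal_rmb_per_usd * us_price_index / china_price_index

# With the nominal peg fixed at 8.28 renminbi per dollar, the real
# rate still moves whenever U.S. and Chinese price levels diverge
# (illustrative index values, base period = 100).
base = real_rate(8.28, us_price_index=100.0, china_price_index=100.0)
later = real_rate(8.28, us_price_index=110.0, china_price_index=105.0)
```

With the peg fixed, the renminbi's real value against the dollar still drifts whenever the two countries' price levels diverge, which is one reason the real effective rate has varied even though the nominal rate has been stable since 1994.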
Chinese authorities keep controls on foreign exchange earned from exports and other current account activities through “repatriation and surrender requirements” on foreign exchange proceeds. Under these controls, some exporters must sell a significant portion of their previous year’s foreign exchange earnings to authorized banks at a fixed rate for China’s currency. China also maintains controls on the use of foreign currencies related to imports and other outward flows for investment purposes. For instance, importers must provide proof of import needs and commercial bills to obtain foreign currencies. Overall, these measures are less restrictive than those in place in the early 1990s. In addition to controls related to current account transactions, other restrictions continue to apply to most capital transactions. For instance, only certain qualified foreign institutional investors can bring in foreign capital to invest in the segment of Chinese domestic security markets denominated in renminbi. Foreign entities can purchase securities denominated in U.S. dollars more freely. China maintains an “open door” policy with respect to inbound FDI, but outward investment is limited and requires government approval. Chinese purchases of capital and money market instruments abroad are restricted to selected institutions and enterprises. In 2004, China eased some restrictions on outward capital flows, including allowing domestic insurance firms to invest a portion of their portfolios offshore and permitting multinational companies to transfer foreign exchange among subsidiaries. Japan suffered from recession and deflation in the years immediately following the 1997 to 1998 Asian financial crisis (see fig. 10). Its economy recovered briefly with a 2.8 percent annual growth rate in 2000, declined in 2001, and stagnated in 2002 before picking up again in 2003. 
Despite inconsistent growth, Japan has maintained a consistent current account surplus, which fluctuated between 2.1 percent and 3.6 percent of GDP during 1998 to 2004 (see fig. 11). Nevertheless, Japan’s trade volume as a percentage of GDP was 18 percent in 2003 and 20 percent according to preliminary 2004 data, both of which were less than one-third that of China for the same years. Japan’s total foreign exchange reserves increased from $215.5 billion in 1998 to $663.3 billion in 2003 and $833.9 billion in 2004 (see fig. 12). The rapid increase reflected a reversal of net capital flow direction—from a net outflow to a net inflow. The rapid accumulation of foreign exchange reserves in 2003 is attributable to an increase in non-FDI capital inflows. This increase was due to an equity market rally caused primarily by Japan’s economic recovery, an increase in the Japanese interest rate in the summer of 2003, and market anticipation of further yen appreciation. In contrast to China, Japan has had a steady FDI outflow over time. It ranged from $23 billion to $32 billion from 2000 to 2003. The Japanese yen is on an independent float, with the exchange rate primarily determined by market forces. Japanese authorities have periodically carried out large interventions in the foreign exchange market through the sale of yen in exchange for U.S. dollars, resulting in slower yen appreciation. Japanese authorities intervened frequently in the foreign exchange market in 2002, increased the frequency and magnitude of interventions in 2003, and continued interventions into early 2004 (see fig. 13). U.S. Treasury officials told us they did not think such interventions led to lasting effects on the yen exchange rate. Since 2003, Treasury has reported that it actively engages Japanese authorities to urge greater exchange rate flexibility. The yen’s real effective exchange rate has fluctuated over the past decade (see fig. 14). 
Some market appreciation pressure on the nominal value of the yen during this period was due to larger capital inflows, particularly a large inflow from Europe in 1999 and another large inflow in 2003 due to prospects of higher stock market prices. Strong inflows continued into early 2004. Economists use various methods to analyze whether exchange rates are misaligned. In general, determining whether a country’s currency is under- or overvalued involves first determining the country’s equilibrium exchange rate as a reference or baseline. This is complex because estimating the equilibrium exchange rate requires information on what value the exchange rate would attain if it were consistent with a country’s economic fundamentals at a particular point in time. Different approaches to estimating equilibrium exchange rates and under- and overvaluation can yield widely varying results, especially for developing countries, and even similar approaches can result in different outcomes depending upon which assumptions and economic judgments are used. Thus, estimates of undervaluation for China vary substantially—from 0 to 56 percent. This appendix outlines some of the methodologies commonly used to estimate the extent of undervaluation of the renminbi. One methodology commonly used to define equilibrium exchange rates and determine if a currency is under- or overvalued is the Purchasing Power Parity (PPP) approach. The PPP approach is rooted in the law of one price, which states that identical goods in different countries should trade at the same price. Thus, the equilibrium exchange rate is defined as the exchange rate at which the general level of prices will be the same in every country and is calculated as the ratio of the domestic and foreign price levels. The goods and services analyzed are typically those that make up the GDP of each country. 
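The absolute PPP calculation just described, together with the relative variant discussed later in this appendix (which projects the equilibrium rate forward from a base year using inflation differentials), can be sketched as follows. All prices and inflation rates here are hypothetical, chosen only to illustrate the mechanics:

```python
def absolute_ppp_rate(domestic_price, foreign_price):
    """Absolute PPP: equilibrium rate (domestic currency per unit of
    foreign currency) as the ratio of the two price levels."""
    return domestic_price / foreign_price

def relative_ppp_rate(base_rate, domestic_inflation, foreign_inflation):
    """Relative PPP: project the equilibrium rate forward from a
    base-year rate using the inflation differential."""
    return base_rate * (1 + domestic_inflation) / (1 + foreign_inflation)

def undervaluation(actual_rate, equilibrium_rate):
    """Fraction by which the domestic currency trades below its
    equilibrium value (positive means undervalued), with rates quoted
    as domestic currency per unit of foreign currency."""
    return (actual_rate - equilibrium_rate) / equilibrium_rate

# Hypothetical identical good: 10.5 renminbi in China versus 3.00
# dollars in the United States, implying a PPP rate of 3.5 renminbi
# per dollar.
ppp = absolute_ppp_rate(10.5, 3.00)
gap = undervaluation(8.28, ppp)  # actual rate far weaker than PPP

# Projecting forward from 8.28 with 2% Chinese and 3% U.S. inflation
# (hypothetical) implies a slightly stronger equilibrium renminbi.
projected = relative_ppp_rate(8.28, 0.02, 0.03)
```

As the surrounding discussion cautions, PPP comparisons of this kind are biased toward finding undervaluation for low-income countries, so the large gap in the example likely overstates what more careful methods would find.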
In some cases, narrower units have formed the basis of PPP comparisons, such as the “Big Mac” index, which is a widely cited shortcut version that analyzes one standardized good across countries. Unfortunately, the law of one price has limitations; it does not hold across nations of sharply differing levels of development and is biased toward finding undervaluation for low-income countries compared to their higher-income counterparts. Additionally, the approach ignores other important factors that lead to inequality in prices, such as trade barriers and nontraded goods. Many experts maintain that PPP measures are more useful for analyzing cost-of-living differences than inferring the extent of currency misalignment. A variation of the absolute PPP approach discussed above is the relative version of the PPP methodology, which is based on the hypothesis that changes in the exchange rate are determined by the difference between inflation rates in the two countries—or, equivalently, the real exchange rate between two currencies remains constant over time. The technique involves choosing a point in time that corresponds to equilibrium and then projecting the new equilibrium rate using the inflation differentials between countries. This analysis is based on trade-weighted exchange rate indexes because they are better indicators of overall competitiveness. One limitation of the approach is that it is very sensitive to the type of price index used for base calculations (e.g., the consumer price index vs. the producer price index), and the results depend on the time periods selected as the base year. The methodology also ignores structural changes in the economy that might cause the real exchange rate to change over time. The FEER approach to assessing currency valuation is based on the relationship between the current account and capital flows. 
The FEER is defined as the exchange rate that will bring the current account balance (consistent with domestic full employment) into equality with the “normal” or sustainable capital account balance. Thus, it is the value of the exchange rate that is consistent with both internal and external economic equilibrium. The FEER calculation requires macroeconomic or trade models to obtain the current account position that is consistent with internal balance, known as the “trend” current account. The second stage involves determining the real exchange rate changes necessary to ensure balance between medium-term capital flows and the trend current account. Within this framework, the equilibrium exchange rate is deemed “fundamental” in the sense that it is related to the fundamental economic determinants over the medium term. Significant limitations of this approach are that it requires extensive modeling to capture the major trade relationships and economic judgments that are criticized by some as ad hoc (including a decision about “normal” or sustainable capital flow levels) and that it relies on estimates of the sensitivity of demand to prices that are difficult to make. In addition, changes in the structure of the economy that affect the current account and the equilibrium exchange rate may introduce further uncertainty in the estimates. This is important in China’s case because many economic conditions and institutions are rapidly changing in the move toward a market-based economy. Also, this approach is difficult to apply to China because of limitations in the quality of Chinese statistics. This methodology is based on the premise that there is an appropriate current account position (external balance) associated with the equilibrium savings and investment balance within a country (internal balance). 
Once the full-employment savings-investment position is established and its associated current account is determined, this approach uses estimated trade models to determine how much the real exchange rate would have to change to generate the required external balance. The approach is related to the FEER concept because the equilibrium exchange rate is associated with internal and external economic balances. Similar to the FEER, this methodology also requires considerable modeling and economic judgment, and the results are highly sensitive to variations in key parameters. The IMF notes that its macroeconomic balance modeling approach uses assumptions to assess current account positions and exchange rates that may not be entirely appropriate for developing countries. Moreover, the IMF industrial country methodology largely abstracts from the impact that structural policies and adjustments could have on the equilibrium savings-investment position. Again, this is important in China’s case because of the many structural adjustments the country is currently undergoing. Similar to the FEER and Macroeconomic Balance approaches, this method is based on the premise that there is an appropriate external account position. That is, there is a particular level of the current account that balances the “normal” capital flows so that there is no change in international reserves. It differs from these two approaches in that it does not consider internal equilibrium. This approach involves determining the sustainable external account balance—meaning one appropriate for a country’s economic situation. Once the relevant external balance is identified, estimated trade models or rule-of-thumb relationships are used to determine the exchange rate change needed to generate the target outcome. This method is highly dependent upon which portion of China’s external balances is considered. 
For example, the selection of China’s current account balance might lead to a finding that the renminbi is not significantly undervalued, while the broader basic balance might lead to a finding of substantial undervaluation. The approach also relies on elasticities that are difficult to estimate or rules of thumb that are not analytically precise. Moreover, the approach does not include an explicit consideration of a country’s internal economic equilibrium situation, such as whether the country is at full employment. Under this approach, equilibrium exchange rates are determined through observing long-run relationships between real exchange rates and the economic variables that determine them. That is, the BEER approach uses econometric relationships to model the equilibrium exchange rate, based on predicted economic relationships derived from an array of relevant theories. Misalignment of a currency is measured as the difference between the actual exchange rate and that predicted by the model variables. However, the determinants of exchange rates and their links to any underlying notion of economic fundamentals are neither well understood nor easily predicted. Thus, many complex BEER models do not predict exchange rates any better than simpler techniques. The BEER approach also uses a number of simplifying assumptions and precludes the identification of many other key parameters important to explaining the economic system. This makes it difficult to judge the plausibility of its estimates.

Qualitative Approaches

Some analysts do not formally define an equilibrium exchange rate, but look at trends in certain data to determine whether or not a country’s currency is misaligned. One of the most widely cited trends used to infer currency misalignment is foreign exchange reserve growth. Some observers have noted that China has been accumulating reserves at a rapid pace and conclude that the renminbi must be undervalued. 
While it is true that China’s foreign exchange reserve growth has outpaced that of all other countries, with the exception of Japan (see fig. 15), using China’s reserve accumulations as a measure of currency misalignment has limitations. For example, some analysts have noted that a significant portion of the capital inflow into China has been short-term speculative money, triggered by expectations of a renminbi appreciation. Given China’s commitment to a fixed exchange rate regime, the government must absorb this excess foreign exchange. Moreover, if China removes restrictions on capital account transactions, as many have been advocating, some analysts believe the currency may depreciate due to capital outflow. Thus, while rapid reserve growth indicates upward pressure on the currency, it does not necessarily suggest by itself that the current value of the renminbi is lower than its long-run equilibrium value. An undervalued currency relative to the dollar would tend to make U.S. exports more expensive and U.S. imports less expensive. However, just how much cheaper imports would be and the degree of impact on the U.S. trade deficit, production, and employment would ultimately depend on complex factors. This appendix discusses some of these important factors. The impact of China’s currency on the U.S. economy would first depend on a number of factors that can weaken the exchange rate pass-through—that is, the extent to which a change in the value of China’s currency changes the price of exports to the United States. These include: The import content of Chinese exports to the United States. A large portion of China’s export operations consists of the final assembly of products using components produced in other countries, especially Japan, Korea, and Taiwan. Some experts believe that the import content of Chinese exports to the United States may be 35 to 40 percent of the total value, and others have estimated as much as 80 percent. 
An appreciation of the renminbi could thus have limited impact on the prices of these exports to the United States because the currency change would leave the imported portions of the products (as much as 80 percent) unaffected, while a smaller portion (20 percent) would become more expensive. The flexibility of the Chinese labor market. Some researchers believe that Chinese laborers might willingly take wage cuts to keep their jobs given the high unemployment rate in the country. Thus, the extent to which an increase in the value of China’s currency increases the price of exports to the United States would depend on whether a revaluation of the renminbi leads to lower wages. The response of foreign-invested enterprises (multinational companies operating in China). The response of import prices to the exchange rate would also be smaller if foreign producers absorb the exchange rate movements in their profit margins to sustain their U.S. market share. According to Chinese statistics, foreign firms, some of them U.S.-owned, produced more than 50 percent of all exports in 2002 and accounted for 65 percent of the total increase in Chinese exports from 1994 to mid-2003. Once the impact on import prices is determined, the impact on trade flows, production, and the U.S. economy would still depend on additional factors. Elasticity of demand. The sensitivity of U.S. demand for Chinese goods and of China’s demand for U.S. goods to price changes are also important factors. If U.S. consumers are sensitive to price changes of Chinese imports (i.e., elasticity of import demand is high), then an increase in import prices would significantly reduce the demand for Chinese goods and improve the bilateral trade deficit with China. Similarly, if the Chinese elasticity of demand for U.S. goods is low, an appreciation of the renminbi may not result in an increase in the demand for the cheaper U.S. products. China’s weight in overall U.S. trade. 
The trade-weighted dollar is a measure of the dollar’s value with respect to the currencies of its major trading partners. Such indexes are useful for discussion of the relationship between exchange rates and the aggregate trade balance. According to the Federal Reserve Board, the renminbi carries a weight of approximately 10 percent in the trade-weighted real effective exchange rate (see fig. 16). Therefore, a 20 percent change in the value of the renminbi means the Federal Reserve’s trade-weighted dollar would change by roughly 2 percent. Thus, some maintain that a revaluation of the renminbi must be accompanied by an increase in the value of other currencies to have a significant impact on the United States’ global trade deficit. How countries react to China’s exchange rate policies. Some analysts contend that China’s currency peg to the dollar induces other East Asian countries to intervene in currency markets to keep their currencies weak against the dollar so that they can remain competitive with China, thus magnifying the impact of China’s currency on the United States. Moreover, they conclude that a revaluation by China would encourage other countries to follow. As a result, there could be a large enough change in the trade-weighted dollar to impact the United States’ global trade deficit. Labor-intensive tasks once performed in other countries are now being performed in China. As figure 17 shows, while the portion of the U.S. merchandise trade deficit accounted for by Japan and the rest of East Asia has fallen since 1999, China’s share has risen. This reflects the fact that exports from Japan and other East Asian countries to the United States are now increasingly finished and exported from China. For example, from 2000 to 2002, U.S. imports from China increased by $25.2 billion, while imports from Japan fell $24.5 billion. 
The extent to which Chinese exports to the United States are substituting for exports that would otherwise have entered the United States from alternative low-cost countries makes the impact on the U.S. economy difficult to quantify. The role of cheap labor. Many believe that China competes primarily in terms of low labor costs. There are also a number of other countries whose manufacturing wages are only a fraction of those in the United States (see fig. 18). As a result, some believe a renminbi appreciation would not induce increased output in American factories. Instead, U.S. imports from other low-wage foreign suppliers would increase. If this is true, the bilateral trade deficit with China would decrease, but the trade deficits with other low-wage countries would increase, leaving the overall trade deficit unchanged (or slightly worse due to more expensive imports). Degree of competition. The effects of the exchange rate are stronger when countries compete in similar markets. Some researchers maintain that the overlap between the production of China and the United States is small; that is, relatively few imports from China compete with domestic production in the United States. Others believe that the market competition is high enough that Chinese imports have displaced U.S. workers. Lastly, potential income effects on China and economic interdependence between major trading partners are relevant to exchange rate impacts. For example, some experts have concluded that an appreciation of the renminbi would reduce employment, income, and growth in China, thereby affecting Chinese demand for U.S. exports. Similar forces must be considered for the United States, although it is unclear whether they would be significant given the distinct effects on the various sectors of the economy. 
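Several of the quantitative factors above can be combined into a stylized back-of-the-envelope calculation: pass-through limited by import content, a demand response governed by an elasticity, and the renminbi's small weight in the trade-weighted dollar. Every parameter value below is illustrative, not an estimate:

```python
def export_price_change(appreciation, import_content):
    """Dollar-price change of a Chinese export after a renminbi
    appreciation, assuming the imported-component share is unaffected
    and the domestic share passes through fully."""
    return appreciation * (1 - import_content)

def demand_change(price_change, elasticity):
    """Constant-elasticity approximation: percent change in quantity
    demanded for a given percent price change."""
    return -elasticity * price_change

def trade_weighted_change(weights_and_moves):
    """Change in a trade-weighted dollar index as the weighted sum of
    bilateral exchange rate moves."""
    return sum(w * move for w, move in weights_and_moves)

# A 20% renminbi appreciation with 80% import content raises the
# export price only ~4% ...
dp = export_price_change(0.20, 0.80)
# ... which, with an illustrative import-demand elasticity of 1.5,
# trims U.S. demand for those goods by ~6%.
dq = demand_change(dp, elasticity=1.5)
# With a ~10% weight, the same 20% revaluation moves the
# trade-weighted dollar index only ~2% if no other currency follows.
d_index = trade_weighted_change([(0.10, 0.20)])
```

Under these stylized numbers, the revaluation's effect on aggregate U.S. trade is modest unless other currencies move as well, which is the point several of the analysts cited above make.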
Some believe that an appreciation of the renminbi (especially if accompanied by the elimination of capital restrictions) would lead to economic and financial instability in China and jeopardize other Asian countries that rely in part on exports to China to sustain their economies. Such instability in East Asia, if it were to occur, would likely have negative repercussions on the U.S. and global economies. China has in recent years purchased substantial amounts of U.S. securities, mostly agency bonds and U.S. Treasury securities (see table 5). However, China’s net purchases are not as large as those of the United Kingdom and Japan. Like other foreign central banks, China’s central bank has chosen to purchase large quantities of U.S. Treasury securities with renminbi in part because it can buy and sell them quickly with minimal market impact. According to monthly data compiled by the Treasury International Capital System, China’s investment in U.S. securities climbed sharply during the 2000 to 2003 period, but was lower in 2004. This appendix presents detailed tables on foreign transactions in U.S. securities. While these transactions data are useful for showing China’s relative size in overall securities purchases, they have certain reliability limitations which are noted in the table and are further discussed in appendix I. In addition to the persons named above, Lawrance Evans, Jr., Jane-yu Li, Jamie McDonald, Donald Morrison, and Richard Seldin made major contributions to this report. 
In 2004, Congress mandated that Treasury provide additional information about currency manipulation assessments, and Treasury issued its report in March 2005. Members of Congress have continued to propose legislation to address China currency issues. We examined (1) Treasury's process for conducting its assessments and recent results, particularly for China and Japan; (2) the extent to which Treasury has met legislative reporting requirements; (3) experts' views on whether or by how much China's currency is undervalued; and (4) the implications of a revaluation of China's currency for the United States. In commenting on a draft of this report, Treasury emphasized that it does consider the impact of the exchange rate on the economy and that factors influencing exchange rates also affect U.S. production and competitiveness. Treasury has not found currency manipulation under the terms of the 1988 Trade Act since it last cited China in 1994. Treasury officials make a positive finding of currency manipulation only when all the conditions in the Trade Act are satisfied--when an economy has a material global current account surplus and a significant bilateral trade surplus with the United States, and is manipulating its currency with the intent to gain an unfair trade advantage. Treasury said that in its 2003 and 2004 assessments, China did not meet the criteria for manipulation, in part because it did not have a material global current account surplus and had maintained a fixed exchange rate regime through different economic conditions. Japan did not meet the criteria in 2003 and 2004 in part because its exchange rate interventions were considered to be part of a macroeconomic policy to combat deflation. Treasury has generally complied with the reporting requirements for its exchange rate reports, although its discussion of U.S. economic impacts has become less specific over time. 
Recent reports stress the importance of broad macroeconomic and structural factors behind global trade imbalances, which Treasury officials contend meets the intent of economic impact requirements. Many experts have concluded that China's currency is undervalued, but by widely varying amounts, while some maintain that undervaluation cannot be determined. The significant variation in estimates can be attributed in part to different methodological approaches, but experts also believe that exchange rate assessments are especially challenging for rapidly developing economies such as China's. Among experts who believe China's currency is undervalued, views on policy steps to correct the imbalance differ. A revaluation of China's currency could have implications for various aspects of the U.S. economy, although the impacts are hard to predict. They depend on multiple factors, including how much appreciation is passed through to higher prices for U.S. purchasers and the extent to which reduced imports from China are replaced with imports from other countries. In addition to affecting trade-related sectors, a revaluation could have implications for U.S. capital flows. |
The FBI was founded in 1908 to serve as the primary investigative unit of the Department of Justice. Its missions include protecting the nation from foreign intelligence and terrorist threats, investigating serious federal crimes, providing leadership and assistance to law enforcement agencies, and being responsive to the public in the performance of these duties. Approximately 12,000 special agents and 16,000 mission support personnel are located in the bureau’s Washington, D.C., headquarters and in more than 450 offices in the United States and 45 offices in foreign countries. Mission responsibilities at the bureau are divided among the following five major organizational components: Counterterrorism and Counterintelligence: identifies, assesses, investigates, and responds to national security threats. Intelligence: collects, analyzes, and disseminates information on evolving threats to the United States. Criminal Investigations: investigates serious federal crimes and probes federal statutory violations involving exploitation of the Internet and computer systems. Law Enforcement Services: provides law enforcement information and forensic services to federal, state, local, and international agencies. Administration: manages the bureau’s personnel program, budgetary and financial services, records, information resources, and information security. Each component is headed by an Executive Assistant Director who reports to the Deputy Director, who, in turn, reports to the Director. The components are further organized into 19 subcomponents, such as divisions, offices, and groups. Supporting these subcomponents are various staff offices, including the Office of the CIO. Figure 1 shows a simplified organizational chart of the components, subcomponents, Office of the CIO, and their respective reporting relationships. 
The Office of the CIO’s responsibilities include preparing the bureau’s IT strategic plan and operating budget; operating and maintaining existing systems and networks; developing and deploying new systems; defining and implementing IT management policies, procedures, and processes; and developing and maintaining the bureau’s EA. To carry out these responsibilities, the Office of the CIO is organized into four subordinate offices. Figure 2 shows a simplified organizational chart of the CIO’s office, subordinate offices, and their reporting relationships; a brief description of each office’s responsibilities is in table 1. The FBI’s EA program is in the CIO’s Office of IT Policy and Planning. To execute its mission responsibilities, the FBI has historically relied extensively on IT. For example, it relies on such computerized IT systems as the Combined DNA Index System to support forensic examinations and the National Crime Information Center and the Integrated Automated Fingerprint Identification System to help state and local law enforcement agencies identify criminals. The FBI reports that it collectively manages hundreds of systems, networks, databases, applications, and associated IT tools. As we previously reported, the FBI’s IT environment includes outdated, nonintegrated systems that do not optimally support mission operations. Following the terrorist attacks of September 11, 2001, the FBI was forced to rethink its mission. As we have reported, this resulted in the bureau shifting its mission focus to detecting and preventing possible future attacks and ultimately led to the FBI’s commitment to reorganize and transform itself. According to the bureau, the complexity of this mission shift, along with the changing law enforcement environment, has strained its existing patchwork of IT systems, which were developed and deployed on an ad hoc basis. The bureau reports that these circumstances will require a major overhaul in its IT systems environment. 
To effect this change, the FBI has undertaken an organizational transformation and systems modernization effort. Major goals of the transformation are, among other things, to develop the capability to become a proactive rather than a reactive organization, embrace intelligence as a professional and operational competency, and leverage information across the bureau and with other agencies to “connect the dots.” According to the FBI, an integral part of the transformation will be modernizing the IT systems that support the bureau’s processes. The FBI reports that it will spend approximately $390 million on modernization projects in fiscal year 2005 out of a total IT budget of $737 million. To guide and constrain these and future system modernization investments, the FBI has initiated an effort to align its investments with the new mission being implemented via its transformation. The FBI has stated that a foundational element of this effort is a bureauwide EA. Effective use of EAs, or modernization blueprints, is a trademark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain system modernizations, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both business and technological environments. The Congress, the Office of Management and Budget (OMB), and the federal CIO Council have also recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that agency CIOs develop, maintain, and facilitate the implementation of an IT architecture. Further, the E-Government Act of 2002 requires OMB to oversee EA development within and across agencies.
An EA is a systematically derived snapshot—in useful models, diagrams, and narrative—of a given entity’s operations (business and systems), including how its operations are performed, what information and technology are used to perform the operations, where the operations are performed, who performs them, and when and why they are performed. The architecture describes the entity in both logical terms (e.g., interrelated functions, information needs and flows, work locations, systems, and applications) and technical terms (e.g., hardware, software, data, communications, and security). EAs provide these perspectives both for the entity’s current (or “as is”) environment and for its target (or “to be”) environment; they also provide a high-level capital investment roadmap for moving from one environment to the other. In doing so, EAs link organizations’ strategic plans with program implementations. Among others, OMB, the National Institute of Standards and Technology, and the federal CIO Council have issued frameworks that define the scope and content of architectures. In addition, OMB has since issued a collection of five reference models (Business, Performance, Data/Information, Service, and Technical) that are intended to facilitate governmentwide improvement through cross-agency analysis and the identification of duplicative investments, gaps, and opportunities. While these various frameworks differ in their nomenclatures and modeling approaches, they consistently provide for defining an architecture’s operations in both logical and technical terms and providing these perspectives for both the “as is” and the “to be” environments, as well as the investment roadmap. Managed properly, an EA can clarify and help to optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. 
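As a reading aid, the dual “as is”/“to be” structure and the five reference-model perspectives described above can be sketched as a simple data model. This is our own illustrative construction, with hypothetical type and field names; it is not drawn from any FBI or OMB artifact.

```python
# Illustrative sketch only; type and field names are hypothetical, not from
# any FBI or OMB artifact.
from dataclasses import dataclass, field
from typing import Dict, List

# The five OMB reference-model perspectives named in this report.
PERSPECTIVES = ["business", "performance", "data", "service", "technical"]

@dataclass
class ArchitectureView:
    """One environment ('as is' or 'to be'), described per perspective."""
    descriptions: Dict[str, str] = field(default_factory=dict)

    def missing_perspectives(self) -> List[str]:
        # Perspectives that have no description yet in this view.
        return [p for p in PERSPECTIVES if p not in self.descriptions]

@dataclass
class EnterpriseArchitecture:
    as_is: ArchitectureView               # current environment
    to_be: ArchitectureView               # target environment
    sequencing_plan: List[str]            # roadmap steps from "as is" to "to be"
```

For example, an “as is” view populated with business, data, service, and technology descriptions but no performance description would report `["performance"]` as its gap, mirroring the kind of completeness check discussed later in this report.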
Employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize its mission performance. Our experience with federal agencies has shown that making IT investments without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. According to guidance published by the federal CIO Council, effective architecture management consists of a number of key practices and conditions. In April 2003, we published a maturity framework that arranges key best practices and conditions of the federal CIO Council’s guide into five hierarchical stages, with Stage 1 representing the least mature and Stage 5 the most mature. The framework provides an explicit benchmark for gauging the effectiveness of EA management and provides a roadmap for making improvements. Each of the five stages is described below, and the stages and their core elements are shown in table 2. (See app. II for a more detailed description of our framework and associated core elements.)

1. Creating EA awareness. The organization does not have plans to develop and use an architecture, or it has plans that do not demonstrate an awareness of the value of having and using an architecture. While Stage 1 agencies may have initiated some architecture activity, these agencies’ efforts are ad hoc and unstructured, lack institutional leadership and direction, and do not provide the management foundation necessary for successful architecture development.

2. Building the EA management foundation. The organization recognizes that the architecture is a corporate asset by vesting accountability for it in an executive body that represents the entire enterprise. At this stage, an organization assigns architecture management roles and responsibilities and establishes plans for developing architecture products and for measuring program progress and product quality; it also commits the resources necessary for developing an architecture—people, processes, and tools.

3. Developing the EA. The organization focuses on developing architecture products according to the selected framework, methodology, tool, and established management plans. Roles and responsibilities assigned in the previous stage are in place, and resources are being applied to develop actual architecture products. The scope of the architecture has been defined to encompass the entire enterprise, whether organization based or function based.

4. Completing the EA. The organization has completed its architecture products—meaning that the products have been approved by the architecture steering committee or an investment review board and by the CIO. Further, an independent agent has assessed the quality (i.e., completeness and accuracy) of the architecture products. Additionally, evolution of the approved products is governed by a written architecture maintenance policy approved by the head of the organization.

5. Leveraging the EA to manage change. The organization has secured senior leadership approval of the architecture products and has a written institutional policy stating that IT investments must comply with the architecture, unless granted an explicit compliance waiver. Further, decision makers are using the architecture to identify and address ongoing and proposed IT investments that are conflicting, overlapping, not strategically linked, or redundant. Also, the organization tracks and measures architecture benefits or return on investment, and adjustments are continuously made to both the architecture management process and the architecture products.
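The cumulative logic of the staged framework can be sketched as follows. This is an illustrative aid of ours, not GAO tooling: a stage is fully attained only when all of its core elements and those of every lower stage are satisfied. The demonstration uses the element counts and satisfaction status reported elsewhere in this report; the split of the 16 remaining elements between Stages 4 and 5 is a placeholder, since only their combined total follows from the framework’s 31 elements.

```python
# Illustrative sketch of the staged-maturity logic; not GAO tooling.
from typing import Dict

def achieved_stage(satisfied: Dict[int, int], required: Dict[int, int]) -> int:
    """Highest stage N such that all core elements of stages 2..N are
    satisfied; Stage 1 is the baseline starting point."""
    stage = 1
    for s in sorted(required):
        if satisfied.get(s, 0) >= required[s]:
            stage = s
        else:
            break          # stages are cumulative: a gap stops the climb
    return stage

# FBI status as reported: 7 of 9 Stage 2 elements, all 6 Stage 3 elements,
# and 1 element each in Stages 4 and 5 -- 15 of 31 core elements in total.
required  = {2: 9, 3: 6, 4: 8, 5: 8}   # Stage 4/5 split of 16 is a placeholder
satisfied = {2: 7, 3: 6, 4: 1, 5: 1}

total = sum(satisfied.values())         # 15 elements satisfied
stage = achieved_stage(satisfied, required)
```

The sketch makes the framework’s cumulative character concrete: because two Stage 2 elements remain open, the organization has not fully attained Stage 2 even though every Stage 3 element is satisfied.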
Over the past several years, reviews of the FBI’s efforts to leverage IT to support its transformation have identified the bureau’s lack of an EA as a significant management weakness. For example, during 2002, we reported that the FBI did not have an EA. Because our research and experience at federal agencies show that architectures are an essential ingredient of success for transformations like the FBI’s, we reported that the bureau should establish the management foundation that is necessary to begin successfully developing, implementing, and maintaining an EA. Between September 2003 and September 2004, we reported on a number of FBI IT transformation challenges, including effectively developing and using an architecture. More specifically, we reported in September 2003 that the bureau had not yet acted on our recommendation for an EA, having established only 1 of the 31 key EA management capabilities described in our architecture management maturity framework, and that this limited capability was due in part to the fact that the architecture’s development was not being treated as an agency priority. Accordingly, we recommended that the Director make architecture development and use a priority, and we provided additional recommendations to help the bureau establish the management capabilities needed to develop, implement, and maintain its architecture. The FBI agreed with our recommendations. Since we reported on the FBI’s lack of an architecture, others have similarly reported on this gap in the bureau’s ability to effectively modernize its systems and transform its operations. For example, in March 2004, the Department of Justice Inspector General testified that the lack of an architecture was a contributing factor to the continuing cost and schedule shortfalls being experienced by the bureau on its Trilogy investigative case management system, which was the FBI’s centerpiece systems modernization project.
Moreover, the National Research Council reported in May 2004 that while the bureau had made significant progress in its IT systems modernization program, the FBI was not on the path to success, in part, because it had not yet developed an EA. The FBI initiated its current effort to develop an architecture in late 2003. For example, in March 2004, the bureau awarded a $1.2 million firm, fixed-price contract for assistance in developing, maintaining, and implementing an EA. It subsequently awarded the same contractor two fixed-price contracts to provide EA security and integration services. Although these contracts are supporting the Office of the CIO, responsibility for contract management resides with the Office of the Chief Financial Officer. As we previously reported, it is critical that the FBI have and use a well-defined EA to guide and constrain its IT investment decisions. We recommended that in order to effectively develop and implement an architecture, the bureau employ rigorous and disciplined architecture management practices. Such practices form the basis of our architecture management maturity framework. The bureau has thus far implemented most of our framework’s key practices associated with establishing an architecture management foundation, but important foundational practices are still missing. It has also implemented key practices related to developing the architecture; however, most architecture development practices are not yet fully implemented, and virtually all practices that are key to completing and leveraging the architecture for organizational change remain to be implemented. While the bureau’s EA efforts to date represent important progress from where it was in 2003, when we last assessed its efforts, much remains to be accomplished before the FBI will have an effective EA program.
Without such a program, the bureau will be challenged in its efforts to effectively and efficiently modernize its systems in a way that minimizes duplication and overlap, maximizes integration, and effectively supports organizational transformation. In March 2005, the FBI completed an EA baseline report on the status of its “as is” EA activities. The purpose of the report was to, among other things, provide a “high-level snapshot” of where it stood in determining and understanding current bureau business processes and supporting IT structures and systems and how it was managing its ongoing architecture development efforts. In May 2005, the bureau issued a similar report on its “to be” architecture activities. On the basis of these reports, along with other documentation and officials’ statements, we determined that the bureau has satisfied 15 of the 31 core elements specified in our architecture management maturity framework, including 7 Stage 2 elements, all Stage 3 elements, 1 Stage 4 element, and 1 Stage 5 element (see table 3). For the remaining elements, the bureau has efforts planned and under way that are intended to satisfy them. More specifically, for Stage 2, the bureau has satisfied seven of nine core elements. For example, in early 2004, the bureau established a program office—located in the CIO’s office and headed by a senior level executive— that is responsible for EA development and maintenance, including drafting and executing a program management plan. This program office includes a chief architect and five key senior level architect positions for business, applications, information, technology, and security. The office also has positions that are to perform support functions such as quality assurance, risk management, and configuration control. 
The bureau also established an Enterprise Architecture Board that includes senior representation from across all bureau business areas and has assigned the board responsibility for directing, overseeing, and approving the architecture. Minutes of board meetings show that this organization meets about every 2 weeks to oversee EA program progress, provide executive direction, and review and approve EA plans and products. These minutes also show that CIO officials and business area representatives regularly attend the meetings. In addition, the bureau has developed a number of plans, including a program management plan (dated October 2004). According to these plans, the architecture is to describe the “as is” and “to be” environments, as well as a sequencing plan. Moreover, the plans call for describing the enterprise in terms of business, performance, data, application, and technology. These plans also include a schedule of tasks to be performed, associated milestones, and an estimate of resources (e.g., funding, staffing, contractor assistance) for fiscal years 2004 through 2007. In addition, these plans call for developing performance metrics to measure EA development and execution and provide for establishing management controls, such as risk management, quality assurance, and configuration control, for developing and maintaining the architecture. Other Stage 2 core elements have yet to be fully addressed. For example, the EA program office does not yet have adequate resources. According to the framework, an organization should have the resources (e.g., funding, human capital) to establish and effectively manage its architecture. According to FBI officials, they have adequate financial resources to fund the program and sufficient contractor assistance, and they have been able to use bureau and contractor personnel to staff most of the 13 program office staff positions. 
However, core staff positions identified by the bureau have not yet been filled: four of the five key architect positions mentioned earlier are vacant. Bureau officials told us that job announcements have been issued for the four key architect positions, but it has been a challenge finding the right candidates. According to the FBI, failing to have these key staff on board hampers the program office’s ability to perform planned tasks. Having qualified staff serving as the core team is important because without them, the program office does not have the proper knowledge, skills, and abilities to properly execute the EA program, including managing and overseeing its contractors. In addition, although the FBI has selected a framework to determine the type of architecture products to be developed and has acquired an automated tool to capture the content of its products, the bureau does not have a defined methodology (i.e., the specific steps and methods) documenting how it will develop the products’ content. As stated in our framework, a methodology is important because it defines (and thus permits stakeholders and others to understand) the steps necessary to perform the activities associated with capturing EA content in a coherent, consistent, accountable, and repeatable manner. For this reason, our architecture maturity framework calls for using a methodology in conjunction with an EA framework and automated tool. Collectively, these permit architecture development to occur in an effective and efficient manner. Instead of a defined methodology, the bureau is relying on a combination of its chief architect’s knowledge and certain documentation, such as an EA alignment plan that describes, among other things, the products to be developed, the order in which they are to be developed, the relationships among products, and analyses that are to be performed to help identify gaps and redundancies in the contents of these products. 
However, this documentation does not include either the specific steps or methods that explain how the content of products is to be developed and documented. It is important to have a documented methodology that is available to and understood by those engaged in providing EA product content, because without one, there is increased risk that products will be inconsistent, incomplete, and incorrect, and thus require rework. For Stage 3, the bureau has satisfied all six core elements. In particular, the bureau issued a policy in August 2003 that defines, among other things, the scope of the architecture and identifies the major stakeholders, including their roles and responsibilities. In addition, the bureau has developed a configuration management plan that defines management structures and processes for identifying, tracking, monitoring, reporting, and auditing changes to the architecture products. The plan establishes a configuration control board and makes the security architect responsible for initiating board meetings and ensuring that audits are conducted as intended. To date, this board has identified and begun tracking such changes. For example, products, including the program management plan, EA principles, “as is” architecture, and EA software tool, have been identified and placed under configuration management in accordance with the plan. Further, the program office reports that it is in the process of developing its “as is” architecture. According to the March 2005 report, the bureau has issued several iterations of a “high-level” version of its “as is” architecture that describes the bureau’s business, data, application, and technology environments. However, these iterations do not include performance descriptions. Moreover, the other “as is” descriptions are not complete, according to the report. 
For example, as part of the information/data description, the program office is in the process of completing ongoing efforts to map FBI data to the business processes that use these data. In addition, as part of the application description, the program office is working to develop a system architecture diagram to show how the various IT applications currently interrelate. Also, while the program office has developed a business architecture description, it has not performed a detailed decomposition of the business processes described. The bureau had planned to complete the remaining work on the “as is” architecture by mid-summer. The office also is in the process of developing the “to be” architecture. According to the FBI’s May 2005 report, the initial version of the “to be” architecture includes business, performance, information/data, service, and technology descriptions. However, the report identifies additional work needed to complete this version. For example, according to the report, the service reference models need to be further defined to provide a detailed framework that supports the transition to the “to be” environment. In addition, the bureau reports that data exchange models need to be developed to provide better understanding of data exchange processes and whether opportunities exist for improvement. Further, the bureau reports that it needs to develop a framework so that it can better understand the relationships among EA components, such as between the business reference model and the service reference model, and between the service reference model and the technology reference model. The bureau plans to issue the next version of its “to be” architecture in fiscal year 2006. In addition, the bureau reports it has developed a “high level” description of a sequencing plan that is not yet complete; the next version of the plan is scheduled for issuance in September 2005. 
Two additional elements (one Stage 4 and one Stage 5 element) have also been satisfied. Specifically, while EA products and processes to date have not been independently verified and validated, the FBI hired a contractor in April 2005 to begin performing such assessments on both the EA products and the processes used to develop them. According to the contract statement of work, the results of these assessments are to be shared with the program office and reported to the steering committee. Also, the bureau has defined a management structure and process to formally manage EA change. According to its configuration management plan (dated February 3, 2005), the bureau is using an automated tool to manage critical EA work products as they are developed and changed. Further, the bureau established a Change Management Board to resolve critical issues, including those that require a major commitment of resources, vary from the EA strategy, or require a policy change. Beyond these two elements, 14 core elements in Stages 4 and 5 have yet to be satisfied. In particular, key architecture products have yet to be completed. As previously noted, the bureau is still in the process of developing both its “as is” and “to be” architectures, for example. The sequencing plan is also a work in progress. (A summary of the results of our assessment of the FBI’s satisfaction of the core elements for each of the stages is provided in app. III.) Discussing the bureau’s EA program, the FBI’s CIO said that significant progress has been made, which he attributed to top-level organizational commitment and focus on EA, as well as assignment of bureauwide IT budget control and authority to the CIO. Despite this progress, much remains to be accomplished before the FBI will have an effective EA program.
According to our framework, effective architecture management is generally not achieved until an enterprise has a completed and approved architecture that is being effectively maintained and is being used to leverage organizational change and support investment decision making; having these characteristics is equivalent to having satisfied all of the Stage 2 and 3 core elements and many of the Stage 4 and 5 elements. Until the bureau gets to that stage, it will be challenged in its efforts to implement modernized systems in a way that minimizes overlap and duplication and maximizes integration and mission support. Our prior reviews of federal agencies and research of architecture best practices have shown that attempting to modernize systems without a well-defined and verifiable architecture and associated management capabilities increases the risk that large sums of money and much time and effort will be invested in technology solutions that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. Federal acquisition regulations and relevant IT acquisition management guidance recognize the importance of effectively managing contractor activities. The Federal Acquisition Regulation (FAR), for example, directs agencies to use performance-based contracting to the maximum extent practicable when acquiring most services. Under the FAR, performance-based contracting includes (1) defining the work to be performed in measurable, results-oriented terms; (2) specifying performance standards (quality and timeliness) that are tied to contractual requirements; (3) having a quality assurance plan that describes how the contractor’s performance in meeting requirements will be measured against standards; and (4) establishing positive and negative contractor performance incentives.
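The four FAR features can be read as a checklist. The sketch below is our own illustrative construction with hypothetical names; it is not an FAR or FBI tool, and it simplifies each feature to a yes/no judgment.

```python
# Illustrative checklist of the four performance-based contracting features
# described above; type and field names are our own, not from the FAR.
from dataclasses import dataclass
from typing import List

@dataclass
class ContractApproach:
    results_oriented_work_statement: bool  # work defined in measurable, results-oriented terms
    performance_standards: bool            # quality and timeliness standards tied to requirements
    quality_assurance_plan: bool           # how performance is measured against the standards
    performance_incentives: bool           # positive and negative incentives

def pbc_gaps(c: ContractApproach) -> List[str]:
    checks = [
        (c.results_oriented_work_statement, "no measurable, results-oriented work statement"),
        (c.performance_standards, "no quality/timeliness performance standards"),
        (c.quality_assurance_plan, "no quality assurance plan"),
        (c.performance_incentives, "no performance incentives"),
    ]
    return [msg for ok, msg in checks if not ok]

# The FBI EA contract as characterized in this report: delivery dates exist,
# but none of the four features are fully in place.
gaps = pbc_gaps(ContractApproach(False, False, False, False))
```

A contract that satisfies all four features would yield an empty gap list; the characterization above yields all four.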
The FAR and associated regulations also require government oversight of contracts to ensure that the contractor (the service provider) performs the requirements of the contract, and the government (the service receiver or customer) receives the service as intended. However, the regulations do not prescribe specific methods for this oversight. Other acquisition management guidance identifies effective contractor tracking and oversight as a key activity and describes a number of practices associated with this activity, including establishing a written policy for contract tracking and oversight, designating responsibility for contract tracking and oversight activities, establishing a group that is responsible for managing contract tracking and oversight activities, and using approved contractor planning documents as a basis for tracking and overseeing the contractor. The FBI’s approach to managing its EA contract does not include most of the performance-based contracting features described in the FAR. Specifically, although the contract’s statement of work defines when products are due (i.e., timeliness standards), it does not specify the products in results-oriented, measurable terms. For example, the statement of work defines requirements in terms of general product descriptions such as “as is” and “to be” architectures and a sequencing plan. Further, it does not specify quality standards for products and does not define incentives for addressing either timeliness or quality standards. The bureau also does not have plans for assuring the quality of the contractor’s work. Instead, bureau officials told us that they follow the bureau’s long-standing approach of working with the contractor to determine whether each deliverable is acceptable. As an example, the bureau received a draft of its “as is” architecture on August 22, 2004. According to bureau officials, the draft was of poor quality, and the bureau did not accept it. 
The bureau then worked with the contractor to improve the quality of the product, and after several iterations, the bureau accepted a draft of the “as is” architecture on September 30, 2004. However, because the bureau did not have either quality standards or a quality assurance plan, the basis for acceptance was not available for us to independently assess. In tracking and overseeing its contractor, the FBI also has not employed the kind of effective practices specified in relevant acquisition management guidance. For example, the bureau does not have a written policy to govern its tracking and oversight activities, has not designated responsibility or established a group for performing contract tracking and oversight activities, and has not developed an approved contractor monitoring plan. Instead, the bureau holds weekly status meetings with its EA contractor to discuss progress and plans, and it is receiving incremental drafts of work products in an effort to increase visibility into contractor activities and thereby minimize the number of unacceptable deliverables and associated rework. FBI officials from the offices of the Chief Financial Officer and CIO attributed the current contract management approach to several factors. First, they said that the FBI has historically been challenged in developing statements of work that clearly define requirements and establish performance (quality and timeliness) standards, which are essential to effective performance-based contracting. Second, these officials stated that they are still working to define effective contract management controls. Specifically, as part of the CIO office’s transformation, including implementing its recently assigned agencywide authority and control over IT resources, these officials are developing standard policies and procedures for managing IT. 
In particular, these policies and procedures are to include an FBI-wide standard life-cycle management directive that is to define procedures for the use of performance-based contracting methods and the establishment of tracking and oversight structures, policies, and processes. The officials told us they began implementing parts of the directive in late June 2005, but added that certain key practices, such as acquisition management, were early drafts and required further development. However, the officials were unable to provide a date for when the drafts would be finalized and implementation of the practices would begin. In the absence of performance-based contracting and effective tracking and oversight, the bureau’s ability to effectively manage its EA contractor is constrained. This means that the FBI is at risk of taking more time and spending more money than necessary to produce a well-defined architecture. Having a well-defined and enforced architecture is critical to the FBI’s ability to effectively and efficiently modernize its mission operations and supporting IT environment. The bureau has taken steps aimed at developing such an architecture and has made important progress in doing so; however, much remains to be accomplished before it will have implemented our prior recommendations and established an effective EA program. As it moves forward, it is important for the bureau to employ all the effective architecture management practices that we have previously recommended, and to do so expeditiously. 
Moreover, given that the FBI’s program is heavily relying on contractor support, it is also important for the bureau to ensure that it employs effective contract management controls that will enable it to, among other things, define contractor work to be performed in measurable, results-oriented terms; establish positive and negative contractor performance incentives; and define and implement contractor tracking and oversight processes consistent with acquisition management guidance. Currently, the FBI does not have such controls in place, and as a result, it is increasing the risk that it will take more time and money to develop a well-defined EA than is necessary. If the bureau does not begin employing the kind of effective contract management controls contained in federal regulations and related guidance, its architecture efforts will continue to be at risk. In turn, its systems modernization will continue to be challenged in its ability to efficiently and effectively support mission operations through modern IT systems. In light of our prior comprehensive set of recommendations for strengthening the FBI’s EA program, we are not making additional recommendations at this time relative to satisfying the practices embodied in our architecture management maturity framework. Given the FBI’s heavy reliance on contractor assistance in developing its EA and the state of its contract management controls, we recommend that the FBI Director direct the Chief Financial Officer, in conjunction with the CIO, to ensure that to the maximum extent practicable, performance-based contracting activities, along with effective contract tracking and oversight practices, are employed prospectively on all EA contract actions. 
This should include, among other things, defining contractor work in measurable, results-oriented terms; establishing positive and negative contractor performance incentives; and defining and implementing contractor tracking and oversight processes consistent with acquisition management guidance. In written comments on a draft of this report, signed by the CIO and reprinted in appendix IV, the FBI agreed that the bureau had made progress in developing its architecture. The FBI also stated that it appreciated our assessment and feedback on its EA program and that the bureau would continue to strive to develop a robust EA program supported by effective contract management practices. In this regard, the FBI cited steps under way to strengthen its EA management foundation. The FBI also noted our recommendation regarding the use of performance-based contracting, stating that its use of fixed-price contracting for EA support has been successful. We believe the FBI can benefit from increased use of performance-based contracting techniques even under firm, fixed-price contracts. In this regard, the FBI agreed, stating that our recommendations provide for effective EA contract management practices and that it is now taking steps to increase its use of performance-based contracting. The FBI stated that it is in the process of increasing employee awareness and providing training on the performance-based approach. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Appropriations Committees. We are also sending copies to the Attorney General; the Director, FBI; the Director, OMB; and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you have any questions about matters discussed in this report, please contact me at (202) 512-3439 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO contacts and staff who made major contributions to this report are listed in appendix V. As specified in the conference report accompanying the Consolidated Appropriations Act, 2005, our objectives were to determine (1) whether the Federal Bureau of Investigation (FBI) is managing its enterprise architecture (EA) program in accordance with established best practices and (2) what approach the bureau is following to track and oversee its EA contractor, including the use of effective contractual controls. For the first objective, we reviewed our EA management maturity framework, Version 1.1, which organizes architecture management best practices into five stages of maturity. This framework is based on A Practical Guide to Federal Enterprise Architecture, published by the federal Chief Information Officer (CIO) Council. We compared our framework with the ongoing efforts of the FBI’s EA program. Specifically, we analyzed the bureau’s EA plans and products, including program management and other plans, key architecture principles, work breakdown structures and corresponding milestones, Enterprise Architecture Board charters and meeting minutes, repository strategy, and EA status reports. We also analyzed relevant policies and procedures, including the bureau’s EA Policy and the Information Technology Life Cycle Management Directive. Moreover, we reviewed draft architecture work products, including iterations of the “as is” and “to be” architectures; we did not, however, assess the contents or quality of these architectural work products because they were in varying degrees of completion and subject to ongoing change. Next, we compared our analyses with the EA management maturity framework practices to determine the extent to which the FBI was employing such effective management practices. 
We also interviewed bureau officials, such as the CIO, the chief architect, and the head of the EA program office. For the second objective, we first reviewed key federal regulations, best practices, and guidance. In particular, we reviewed relevant federal acquisition regulations on effective contract management, including performance-based contracting methods. Additionally, we reviewed the Software Engineering Institute’s Software Acquisition Capability Maturity Model, version 1.02, for key contractor tracking and oversight best practices. We then analyzed EA contract documentation, including task orders, statements of work, and contract modifications. We also interviewed FBI officials, including the contracting officer’s technical representative for overseeing the EA contractor, the chief architect, and the head of the EA program office. We interviewed these officials to verify and clarify our understanding of the bureau’s architecture contract management procedures and to determine whether the bureau is employing effective contractual controls. Additionally, we discussed with these officials the cause and impact of the current state of the bureau’s contract management activities and policies. We performed our work at FBI headquarters in Washington, D.C., from September 2004 to July 2005, in accordance with generally accepted government auditing standards. Because the task of developing, maintaining, and implementing an EA is an important, complex, and difficult endeavor, doing so effectively and efficiently requires that rigorous, disciplined management practices be adopted. Such practices form the basis of our EA management maturity framework, which specifies by stages the key architecture management structures, processes, and controls that are embodied in federal guidance and best practices. The five stages and their associated core elements are described below.
At Stage 1, organizations are becoming aware of the value of an EA, but have not yet established the management foundation needed to develop one. Stage 1 has no core elements: by default, an organization that does not satisfy Stage 2 core elements is at Stage 1. For Stage 2, our framework specifies nine key practices or core elements that are necessary to provide the management foundation for successfully launching and sustaining an architecture effort: Ensure that adequate resources exist. An organization should have the resources (funding, people, tools, and technology) to establish and effectively manage its architecture. This includes identifying and securing adequate funding to support EA activities; hiring and retaining the right people with the proper knowledge, skills, and abilities to plan and execute the EA program; and selecting and acquiring the right tools and technology to support EA activities. Establish a committee or group representing the enterprise that is responsible for directing, overseeing, or approving the EA. This committee should include executive-level representatives from each line of business, and these representatives should have the authority to commit resources and enforce decisions within their respective organizational units. By establishing this enterprisewide responsibility and accountability, the agency demonstrates its commitment to building the management foundation and obtaining buy-in from across the organization. Establish a program office that is responsible for EA development and maintenance. This organizational unit should be devoted to the EA program and responsible for developing a management plan and executing the plan. The plan should include a detailed work breakdown structure; resource estimates (e.g., funding, staffing, and training); performance measures; and management controls for developing and maintaining the architecture. Appoint a chief architect. 
The chief architect should be responsible and accountable for the EA, supported by the architecture program office, and overseen by the architecture steering committee. The chief architect (in collaboration with the CIO, the architecture steering committee, and the organizational head) is instrumental in obtaining organizational buy-in for the architecture, including support from the business units, as well as in securing resources to support architecture management functions such as risk management, configuration management, quality assurance, and security management. Use a framework, methodology, and automated tool to develop the architecture. The framework provides a formal structure for representing the EA, while the methodology is the common set of procedures that the enterprise is to follow in developing the architecture products. The automated tool serves as a repository where architectural products are captured, stored, and maintained. Develop an architecture program management plan. This plan specifies how and when the architecture is to be developed. It includes a detailed work breakdown structure; resource estimates (e.g., funding, staffing, and training); performance measures; and management controls for developing and maintaining the architecture. The plan demonstrates the organization’s commitment to managing architecture development and maintenance as a formal program. Ensure that EA plans call for describing both the “as is” and “to be” environments in terms of business, performance, information/data, application/service, and technology. An organization’s program management plan should provide for defining and normalizing the current and future architectures in terms relevant to stakeholders from varying organization levels and disciplines. Ensure that EA plans address security at each layer. Plans should define how the organization will address security as a distinct area of operational and technology emphasis within the context of each layer. 
Ensure that EA plans call for developing metrics for measuring EA progress, quality, compliance, and return on investment. Plans should provide for developing metrics and should describe how these will be used to measure (1) progress towards EA goals, (2) the quality of architecture products and management processes, (3) compliance with the architecture, and (4) EA return on investment. At Stage 3, our framework specifies six core elements that are necessary to focus on architecture development activities: Issue a written and approved organization policy for EA development. A policy defines the scope of the architecture, including the requirement for a description of the current and target architectures, as well as an investment road map or sequencing plan specifying the move between the two. Ensure that EA products are under configuration management. This involves ensuring that changes to products are identified, tracked, monitored, documented, reported, and audited. Ensure that EA products describe or will describe both the “as is” and the “to be” environments, as well as a sequencing plan. Consistent with the EA program plans discussed in Stage 2, an organization should ensure that the EA products being developed are enterprisewide in scope and describe both the current and future environments, as well as a sequencing plan for moving from the current to the target environment. Ensure that both environments are described or will be described in terms of business, performance, information/data, application/service, and technology. Products being developed or drafted should begin to address each of the given terms of reference, or include placeholders for later defining the enterprise in these terms. Ensure that business, performance, information/data, application/service, and technology descriptions address or will address security.
This involves ensuring that each EA product (including those describing the “as is” and “to be” environments in terms of business, performance, information/data, application/service, and technology) explicitly describes how enterprise security is being defined and will be implemented. Ensure that progress against EA plans is measured and reported. To assist in attaining stated EA program goals and objectives, an organization should understand and disclose its progress against plans. As EA products emerge, their content should be assessed against the plans to ensure that expectations are being met. At Stage 4, during which the focus is on architecture completion activities, organizations need to satisfy eight core elements: Issue a written and approved organization policy for EA maintenance. A policy promotes enterprisewide commitment to keeping the EA up to date. It should provide for establishing a process for architecture maintenance, including oversight and control. It should also identify the roles, responsibilities, and relationships of key players in the maintenance process. Ensure that EA products and management processes undergo independent verification and validation. This core element involves having an independent third party—such as an internal audit function or a contractor that is not involved with any of the architecture development activities—verify and validate that the products were developed in accordance with architecture processes and product standards. Doing so provides organizations with needed assurance of the quality of the architecture. Ensure that EA products describe both the “as is” and the “to be” environments, as well as a sequencing plan.
Consistent with the EA program plans discussed in Stage 2, an organization should ensure that the EA products completely and correctly describe both the “as is” and the “to be” environments of the enterprise and include a sequencing plan for migrating the organization between the two environments. Ensure that EA products for both environments are described in terms of business, performance, information/data, application/service, and technology. An organization’s EA products should be defined and normalized in terms meaningful to a wide variety of stakeholders, ranging from the organization’s chief executive officer and strategic planners to its technology implementers and operators. Ensure that business, performance, information/data, application/service, and technology descriptions address security. An organization should explicitly and consistently address security in its business, performance, information/data, application/service, and technology architecture products. Because security permeates every aspect of an organization’s operations, the nature and substance of institutionalized security requirements, controls, and standards should be captured in the EA products. Ensure that the organization’s chief information officer has approved the current version of the EA. The current version of the organization’s completed EA should be approved by the CIO. Ensure that a committee or group representing the enterprise or the investment review board has approved the current version of the EA. The current version of the organization’s completed architecture should also be approved either by the EA steering committee or by the investment review board. Measure and report on the quality of EA products. An organization should ensure that the nature and content of the EA products meet defined quality standards. This core element entails developing a set of metrics and assessing the products against those metrics. 
At Stage 5, during which the focus is on architecture maintenance and implementation activities, organizations need to satisfy eight core elements: Issue a written and approved organization policy for information technology (IT) investment compliance with the EA. A policy that governs the implementation of the architecture should be approved by the organization head. The EA policy should augment architecture development and maintenance policies by providing for an institutional EA implementation process that is aligned with the organization’s capital planning and investment control process. Ensure that the organization has a process to formally manage EA change. A formal process should be defined and implemented for introducing changes to the architecture. This process should recognize both internally and externally prompted change, and it should provide for continuous capture and analysis of change proposals and informed decision making about whether to make changes. Make the EA an integral component of the IT investment management process. Because the road map defines the IT systems that an organization plans to invest in as it transitions from the “as is” to the “to be” environment, the architecture is a critical frame of reference for making IT investment decisions. Using the architecture when making such decisions is important because organizations should approve only those investments that move the organization toward the “to be” environment, as specified in the road map. Ensure that EA products are periodically updated. An organization will need to periodically update its EA products depending on the volume and degree of approved changes to the EA. Ensure that IT investments comply with EA. 
An organization’s IT investments should be aligned and comply with the applicable components of the current version of the EA, and they should not be selected and approved under the organization’s capital planning and investment control process unless compliance is documented by the investment sponsor and substantiated by the architect assessment team. Ensure that the organization head has approved the current version of the EA. The current version of the EA should ultimately be approved by the head of the organization. Measure and report return on EA investment. Like any investment, the architecture should produce a return on investment (i.e., a set of benefits), and this return should be measured and reported in relation to costs. Measuring return on investment is important in order to ensure that expected benefits from the architecture are realized and to share this information with executive decision makers, who can then take corrective action to address deviations from expectations. Measure and report on compliance with the EA. An organization should define metrics, such as number of compliance waivers requested and number granted, to track compliance. Through such measurement and reporting, relevant trends and anomalies can be identified, and corrective action can be taken. Agency is aware of EA. The FBI has acknowledged the need for an EA, and the Director has made its development a management priority. Adequate resources exist. According to FBI officials, they have identified the financial and human capital resources needed to effectively manage the bureau’s architecture program. While bureau officials stated they have adequate financial resources to fund the program, including sufficient contractor assistance, four of five core architect positions identified as being needed to staff the program office have not yet been filled. Committee or group representing the enterprise is responsible for directing, overseeing, or approving EA.
The FBI has established an Enterprise Architecture Board to direct, oversee, and approve the EA. The board includes upper-level management from all the operating units, including the counterterrorism, counterintelligence, and finance divisions. Technical representatives, such as the chief technology officer and chief architect, also serve on this board. Program office responsible for EA development and maintenance exists. The FBI has established a program office, called the Enterprise Architecture Unit, which is located in the CIO’s office. The program office is responsible for the development, implementation, and maintenance of the EA. Chief architect exists. The FBI has designated a chief architect. EA is being developed using a framework, methodology, and automated tool. The FBI initially used the Federal Enterprise Architecture Framework and has since switched to OMB’s five Federal Enterprise Architecture Reference Models. The bureau is using the Popkin System Architect tool. However, the bureau does not have a documented methodology that defines how EA products are to be developed. Instead of a defined methodology, the bureau is relying on a combination of its chief architect’s knowledge and certain documentation, such as an EA alignment plan that describes, among other things, the products to be developed, the order in which they are to be developed, the relationships among products, and analyses that are to be performed to help identify gaps and redundancies in the contents of these products. However, this documentation does not include either the specific steps or methods that explain how the content of products is to be developed and documented. EA plans call for describing the “as is” and “to be” environments, and a sequencing plan. The EA program management plan (dated October 2004) calls for the development of “as is” and “to be” environments as well as a sequencing plan.
EA plans call for describing the enterprise in terms of business, performance, information/data, application/service, and technology. The FBI’s EA baseline report (dated March 2005) and other plans call for the development of business, performance, data, applications, and technology descriptions. EA plans call for business, performance, information/data, application/service, and technology descriptions to address security. The FBI’s EA baseline report (dated March 2005) and other plans call for security services to be defined for each of the descriptions. EA plans call for developing metrics for measuring EA progress, quality, compliance, and return on investment. The EA policy (dated August 2003) and program management plan call for developing metrics to measure progress, quality, and return on investment. Written and approved organization policy exists for EA development. The FBI has a written policy for EA development (dated August 2003) that was approved and signed by the CIO. EA products are under configuration management. The bureau has a configuration management plan that defines management structures and processes for identifying, tracking, monitoring, reporting, and auditing changes to the architecture products. EA products, such as the program management plan, EA principles, initial versions of the “as is” architecture, and EA software tool, have been identified and placed under configuration management in accordance with the plan. EA products describe or will describe the enterprise’s business, performance, information/data, application/service, and the technology that supports them. The FBI is in the process of developing its “as is” and “to be” architectures. It reports that to date, it has issued what it describes as “high level” versions of each, but that these versions need additional work to be complete. The initial version of the “to be” includes the enterprise's business, performance, information/data, service, and technology descriptions. 
The latest draft of the “as is” also includes all of these descriptions, except performance. According to FBI officials, performance was omitted due to an oversight on their part, and they intend to address performance in the next version of the “as is” architecture. EA products describe or will describe the “as is” and the “to be” environments, and a sequencing plan. The FBI is in the process of developing its “as is” and “to be” architectures. It reports that to date, it has issued what it describes as “high level” versions of each, but that these versions need additional work to be complete. The FBI also reports that it has developed a “high level” description of a sequencing plan that is not yet complete. Business, performance, information/data, application/service, and technology address or will address security. The FBI is in the process of developing its “as is” and “to be” architectures, as described above. These versions of its architectures include a description of security services. According to FBI officials, these versions are not yet complete. Progress against EA plans is measured and reported. The FBI is measuring and reporting progress against EA plans. Written and approved organization policy exists for EA maintenance. The FBI does not have a written and approved policy for EA maintenance. While the bureau has an EA development policy, it does not address architecture maintenance, nor does it assign responsibility and accountability for maintenance. EA products and management processes undergo independent verification and validation. While EA products and processes to date have not been independently verified and validated, the FBI hired a contractor in April 2005 to begin performing such assessments on both the EA products and the processes used to develop them. EA products describe the enterprise’s business, performance, information/data, application/service, and the technology that supports them.
Initial EA products describe the enterprise’s business, performance, information/data, application/service, and the technology that supports them. However, the FBI reports that these products are not completed. EA products describe the “as is” and the “to be” environments, and a transitioning (sequencing) plan. Initial EA products describe the “as is” and the “to be” environments and a sequencing plan. However, the FBI reports that these products are not completed. Business, performance, data, application, and technology descriptions address security. Initial EA products include business, performance, information/data, application/service, and the technology descriptions that address security. However, the FBI reports that these products are not completed. Organization’s chief information officer has approved current version of EA. The FBI is in the process of completing its EA, and when completed, the CIO plans to approve it. Committee or group representing the enterprise or the investment review board has approved current version of EA. The FBI is in the process of completing its EA, and when completed, the Enterprise Architecture Board plans to approve it. Quality of EA products is measured and reported. Although the FBI is in the process of completing its EA products, it is not currently measuring and reporting quality. FBI plans call for the bureau to begin measuring and reporting EA product quality starting in fiscal year 2006. Written and approved policy exists for IT investment compliance with EA. The FBI does not have a written and approved policy addressing IT investment compliance with EA. Process exists to formally manage EA change. The FBI configuration management plan defines a process to formally manage EA change. To manage the process, the bureau established a change management board in January 2003. The board reviews and determines whether to approve changes to the current FBI environment.
EA is integral component of IT investment management process. The FBI is in the process of completing its EA, and thus, it is not yet an integral part of the bureau’s IT investment process. EA products are periodically updated. The FBI is in the process of completing its EA, and when it is complete, bureau plans call for the products to be periodically updated. IT investments comply with EA. Because the EA is not yet complete, IT investments are not evaluated for compliance with it. Organization head has approved current version of EA. The FBI does not yet have a completed EA for the Director to approve. Return on EA investment is measured and reported. The FBI is not yet measuring and reporting return on investment. Compliance with EA is measured and reported. The FBI is not yet measuring and reporting EA compliance. Achieving a particular stage requires satisfying the specified elements in that stage plus all elements from previous stages. For example, achieving Stage 3 requires satisfying the Stage 3-specific elements plus those in Stages 1 and 2. In addition to the contact named above, the following people made key contributions to this report: Gary Mountjoy, Assistant Director; Barbara Collier; Lori Martinez; Teresa Neven; and William Wadsworth.
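The cumulative rule for achieving a maturity stage can be illustrated with a short sketch. This is a hypothetical illustration, not part of GAO's framework: the function name, data layout, and element counts shown in the example are assumptions for demonstration only.

```python
# Illustrative sketch (hypothetical names) of the cumulative stage rule:
# an organization is at the highest stage N for which all core elements
# of stages 2 through N are satisfied; Stage 1 is the default.

def achieved_stage(satisfied):
    """satisfied maps stage number (2-5) to a list of per-element results."""
    stage = 1  # Stage 1 has no core elements and is the default
    for n in range(2, 6):
        elements = satisfied.get(n)
        if elements and all(elements):
            stage = n
        else:
            break  # an unmet element at stage n blocks all higher stages
    return stage

# Example: Stages 2 and 3 fully satisfied, one Stage 4 element unmet.
results = {
    2: [True] * 9,            # nine Stage 2 core elements
    3: [True] * 6,            # six Stage 3 core elements
    4: [True] * 7 + [False],  # eight Stage 4 core elements, one unmet
    5: [False] * 8,           # eight Stage 5 core elements
}
print(achieved_stage(results))  # 3
```

Note how the rule is strictly cumulative: satisfying every Stage 5 element would not matter while a Stage 4 element remains unmet.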
The conference report accompanying FBI's fiscal year 2005 appropriations directed GAO to determine (1) whether the FBI is managing its EA program in accordance with established best practices and (2) what approach the bureau is following to track and oversee its EA contractor, including the use of effective contractual controls. The FBI is managing its EA program in accordance with many best practices, but other such practices have yet to be adopted. These best practices, which are described in GAO's EA management maturity framework, are those necessary for an organization to have an effective architecture program. Examples of practices that the bureau has implemented include establishing a program office that is responsible for developing the architecture, having a written and approved policy governing architecture development, and continuing efforts to develop descriptions of the FBI's "as is" and "to be" environments and sequencing plan. The establishment of these and other practices represents important progress from the bureau's status 2 years ago, when GAO reported that the FBI lacked both an EA and the means to develop and enforce one. Notwithstanding this progress, much remains to be accomplished before the FBI will have an effective EA program. For example, the EA program office does not yet have adequate resources, and the architecture products needed to adequately describe either the current or the future architectural environments have not been completed. Until the bureau has a complete and enforceable EA, it remains at risk of developing systems that do not effectively and efficiently support mission operations and performance. The FBI is relying heavily on contractor support to develop its EA; however, it has not employed effective contract management controls in doing so. Specifically, the bureau has not used performance-based contracting, an approach that is required by federal acquisition regulations whenever practicable. 
Further, the bureau is not employing the kind of effective contractor tracking and oversight practices specified in relevant acquisition management guidance. According to FBI officials, the agency's approach to managing its EA contractor is based on its long-standing approach to managing IT contractors: that is, working with the contractor on iterations of each deliverable until the bureau deems it acceptable. This approach, in GAO's view, is not effective and efficient. According to FBI officials, as soon as the bureau completes an ongoing effort to redefine its policies and procedures for managing IT programs (including, for example, the use of performance-based contracting methods and the tracking and oversight of contractor performance), it will adopt these new policies and procedures. Until effective contractor management policies and procedures are defined and implemented on the EA program, the likelihood of the FBI effectively and efficiently producing a complete and enforceable architecture is diminished. |
State’s $162 million Biometric Visa Program is designed to work hand-in-hand with the DHS multibillion-dollar US-VISIT program. Both programs aim to improve U.S. border security by verifying the identity of persons entering the United States. Both programs rely on the DHS Automated Biometric Identification System, known as IDENT, which is a repository of fingerprints and digital photographs of persons who either have applied for U.S. visas since the inception of the program in September 2003, have entered the United States at one of 115 air or 14 sea ports of entry since January 2004, or are on a watch list—whether for previous immigration violations or as part of the FBI’s database of terrorists and individuals with felony convictions. The process for determining who will be issued a visa consists of several steps. When a person applies for a visa at a U.S. consulate, a fingerprint scan is taken of his right and left index fingers. These prints are then transmitted from the overseas post through servers at State to DHS’s IDENT system, which searches its records and sends a response back through State to the post. A “hit” response—meaning that a match to someone previously entered in the system was found—prevents the post’s computer system from printing a visa for the applicant until the information is reviewed and cleared by a consular officer. According to State data, the entire process generally takes about 30 minutes. If the computer cannot determine if two sets of prints match, IDENT refers the case to DHS fingerprint experts, who have up to 24 hours to return a response to State (see fig. 1). US-VISIT aims to enhance national security, facilitate legitimate trade and travel, contribute to the integrity of the U.S. immigration system, and adhere to U.S.
privacy laws and policies by collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination among appropriate agencies. The process by which a foreign national is screened for entry is as follows: When a foreign national arrives at a port of entry to the United States, a DHS inspector scans the machine-readable travel documents. Existing records on the foreign national, including biographic lookout hits, are returned. The computer presents available biographic information and a photograph and determines whether IDENT contains existing fingerprints for the foreign national. The inspector then scans the foreign national’s fingerprints (left and right index fingers) and takes a photograph. This information is checked against stored fingerprints in IDENT. If no matching prints are in IDENT, the foreign national is enrolled in US-VISIT (i.e., biographic and biometric data are entered). If the foreign national’s fingerprints are already in IDENT, the system compares the fingerprint taken at the port of entry with the one on file to confirm that the person submitting the fingerprints is the person on file. If the system finds a fingerprint mismatch or a watch list hit, the foreign national is held for further screening or processing. State’s implementation of the technology aspects of the biometric visa program is currently on schedule to meet the October 26, 2004, deadline. 
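The port-of-entry screening sequence above can likewise be outlined in code. This is a hypothetical sketch under simplifying assumptions (a single traveler identifier, exact-match fingerprint comparison); the names and data structures are invented and are not DHS interfaces.

```python
# Hypothetical sketch of the US-VISIT entry screening steps described in the text.

def screen_entry(fingerprints: str, ident_db: dict, watch_list: set, traveler_id: str) -> str:
    """Walk a traveler through the port-of-entry screening sequence."""
    if traveler_id in watch_list:
        # Watch list hit: held for further screening or processing.
        return "hold for further screening"
    stored = ident_db.get(traveler_id)
    if stored is None:
        # No matching prints in IDENT: enroll the traveler in US-VISIT.
        ident_db[traveler_id] = fingerprints
        return "enrolled and admitted"
    if stored != fingerprints:
        # Fingerprint mismatch: held for further screening or processing.
        return "hold for further screening"
    return "identity confirmed, admitted"

ident = {"A1": "print-A"}
print(screen_entry("print-A", ident, set(), "A1"))  # identity confirmed, admitted
print(screen_entry("print-B", ident, set(), "B2"))  # enrolled and admitted
```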
According to State officials, a well-planned rollout of equipment and software and fewer technical problems than anticipated led to smooth implementation of the technological aspects of the program at the 201 posts that had the program operating as of September 1, 2004. But amid the fast pace of rolling out the program to meet the deadline, DHS and State have not provided comprehensive guidance for consular posts on how the information about visa applicants made available through the Biometric Visa Program should best be used to help adjudicate visas. Indeed, we found several significant differences in the implementation of the biometric program during our visits to San Salvador, El Salvador, and Santo Domingo, Dominican Republic. State acknowledged that posts may be implementing the program in various ways across the 207 consular posts that issue nonimmigrant visas. According to State officials, the implementation process for the biometric program led to far fewer technical problems than expected. Early on, State had a few difficulties in transmitting data between the posts and DHS’s IDENT, primarily related to server and firewall (computer security) issues. According to State, most issues were resolved within a few days. In fact, 201 nonimmigrant visa (NIV)-issuing posts out of 207 had the software and hardware installed and were transmitting prints to IDENT for analysis as of September 1, 2004. State anticipates the completion of the installation by the October 2004 deadline. According to State’s data, from February to August 2004, the total biometric visa process averaged about 30 minutes for an applicant’s prints to be sent from an overseas post to the State server, and on to DHS for IDENT analysis and then for the response to be returned through State’s server to the posts. IDENT response time could affect visa issuance times because a visa cannot be issued until the post has received and reviewed the IDENT response. 
Our observations at posts in San Salvador and Santo Domingo demonstrated the importance of the length of time required to receive an IDENT response. We observed that most interviews average only a few minutes, but the IDENT response time currently averages about 30 minutes. Thus, if interviewing officers collect prints during the interview, the interview would be completed before the IDENT response would be available to consular officers. Since the visa cannot be issued until the IDENT information is considered by the consulate, potential delays in the IDENT response times could have a major effect on the visa issuance process and inconvenience visa applicants. State has encouraged consular officials to issue visas the day after interviews since part of the visa process now relies on another agency’s system. This will require significant changes for posts such as Santo Domingo, which still issues same-day visas. State has focused on implementing the Biometric Visa Program by the mandated deadline; however, our report identifies certain lags in guidance on how the program should be implemented at consular posts. State and DHS have not yet provided to posts details of how all aspects of the program will be implemented, including who should scan fingerprints, who should review information about applicants returned from IDENT and where, and what response times to expect from the IDENT system. In addition, DHS and State have not provided comprehensive guidance for consular posts on how the information about visa applicants made available through the Biometric Visa Program should be used to help adjudicate visas. We believe that it is important for State and DHS to articulate how the program could best be implemented, providing a roadmap for posts to develop implementation plans that incorporate the guidance. We recognize, however, that the workload, personnel, and facility resources vary considerably from post to post. 
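A back-of-the-envelope calculation shows why the timing matters. The roughly 30-minute IDENT round trip comes from State's data; the interview length and queue time below are assumed figures standing in for the "few minutes" and waiting periods we observed.

```python
# Illustrative timing for the two scanning scenarios described in the text.
# The 30-minute round trip is from State data; other numbers are assumptions.

IDENT_ROUND_TRIP_MIN = 30
INTERVIEW_MIN = 3       # assumption: a typical interview lasts a few minutes

# Scenario 1: prints scanned at the start of the interview. The response
# arrives well after the interview ends, so the visa decision must wait.
wait_after_interview = IDENT_ROUND_TRIP_MIN - INTERVIEW_MIN
print(f"Officer waits ~{wait_after_interview} more minutes for the IDENT response")

# Scenario 2: prints scanned on arrival. If the applicant waits at least
# 30 minutes before the interview, the response is already on hand.
queue_time_min = 45     # assumption for illustration
response_ready = queue_time_min >= IDENT_ROUND_TRIP_MIN
print(f"Response ready by interview time: {response_ready}")
```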
As a result, each post may not be able to easily implement the Biometric Visa Program according to a precise set of guidelines. However, posts could develop procedures to implement the guidance, identify resource and facility constraints, and implement mitigating actions to address their own unique circumstances. Therefore, we have recommended that DHS and State provide comprehensive guidance to consular posts on how information about visa applicants that is now available from IDENT should be used to help adjudicate visas. In responding to our recommendation, DHS generally concurred and State acknowledged that there may be a lag in guidance. Our work at two posts shows that, because they lack specific guidance on the system’s use, consular officers at these overseas posts are uncertain how they should implement the Biometric Visa Program and are currently using the returned IDENT responses in a variety of ways. For example, we found that, in cases in which the IDENT response information is available to the overseas post by the time of the visa applicant interview, some consular officers who conduct interviews review information before the interview, some review it during the interview, and some rely instead on a designated officer or the line chief to review the information after the interview is completed and before affected visas are printed. We found several differences in the visa operations at two posts—San Salvador, El Salvador, and Santo Domingo, Dominican Republic—that handle a large volume of visa applications. For example, San Salvador, one of the first posts to begin implementing the program in September 2003, has a large new embassy complex that allowed the post great flexibility in implementing the collection of biometrics. Applicants are led through outdoor security screening before entering the interview waiting room. 
Once in the waiting room, they immediately proceed to a fingerprint scanning window where an American officer verifies their names and photographs and scans their fingerprints. By the time they arrive at their interview windows, usually the interviewing officer has received their IDENT responses. However, the post has designated one officer to review all of the IDENT responses, so some interviewing officers do not take the time to review IDENT information on those they interview even if the information is available at the time of the interview. Santo Domingo’s consular section is hampered by significant facility constraints. The NIV applicant waiting area is very cramped and has been even more restricted over recent months due to construction efforts. Some of the NIV applicants are forced to share space in the immigrant visa waiting area. Santo Domingo has fewer interviewing windows than San Salvador and cannot easily spare one to designate for full-time fingerprint scanning due to high interview volume. Some interviewing officers scan applicants’ fingerprints at the time of the interview, so the interview ends before the IDENT response has been returned from DHS. One consular officer is designated to review the IDENT responses for all of the applicants, and interviewing officers may not see IDENT information on the applicants they interview. In some cases, the designated officer determines if the applicant should receive a visa, and in others he brings the IDENT information back to the original interviewing officer for further review. Since September 11, 2001, we have issued reports recommending that State and DHS work together to improve several aspects of border security and the visa process, as described below. These reports show the importance of joint, coordinated actions by State and DHS to maximize program effectiveness. 
The US-VISIT program supports a multifaceted, critical mission: to help protect approximately 95,000 miles of shoreline and navigable waterways through inspections of foreign nationals at U.S. ports of entry. DHS has deployed an initial operating capability for entry to 115 airports and 14 seaports. It has also deployed an exit capability, as a pilot, at two airports and one seaport. Since the program became operational, DHS reports, more than eight million foreign nationals have been processed by US-VISIT at ports of entry, resulting in hundreds being denied entry. Its scope is large and complex, connecting 16 existing information technology systems in a governmentwide process involving multiple departments and agencies. In addition to these and other challenges, the program’s operational context, or homeland security enterprise architecture, is not yet adequately defined. DHS released an initial version of its enterprise architecture in September 2003; however, we found that this architecture was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows. DHS could benefit from such key elements to help clarify and optimize the relationships between US-VISIT and other homeland security programs’ operations, such as State’s Biometric Visa Program, both in terms of processes and the underlying information technology infrastructure and applications. Although the biometrics program is administered by State, it falls under the overall visa policy area of the DHS Directorate of Border and Transportation Security and is part of the national homeland security mission. State officials indicated that they are waiting for DHS to further define US-VISIT, which would help guide State’s actions on the Biometric Visa Program. 
Since September 11, 2001, our work has demonstrated the need for State and DHS to work together to better address potential vulnerabilities in the visa process. In June 2003, we identified systemic weaknesses in the visa revocation process, many of which were the result of a failure to share and fully utilize information. We reported that the visa revocation process was not used aggressively to share information among agencies on individuals with visas revoked on terrorism grounds. It also broke down when these individuals had already entered the United States prior to revocation. Immigration officials and the Federal Bureau of Investigation (FBI) were not then routinely taking actions to investigate, locate, or resolve the cases of individuals who remained in the United States after their visas were revoked. Therefore, we recommended that DHS, in conjunction with the Departments of State and Justice, develop specific policies and procedures to ensure that appropriate agencies are notified of revocations based on terrorism grounds and take proper actions. In July 2004, we followed up on our findings and recommendations regarding interagency coordination in the visa revocation process and found that State and DHS had taken some actions in the summer of 2003 to address these weaknesses. However, our review showed that some weaknesses remained. For instance, in some cases State took a week or longer to notify DHS that individuals with revoked visas might be in the country. Without these notifications, DHS may not know to investigate those individuals. 
Given outstanding legal and policy issues regarding the removal of individuals based solely on their visa revocation, we recommended that the Secretaries of Homeland Security and State jointly (1) develop a written governmentwide policy that clearly defines roles and responsibilities and sets performance standards and (2) address outstanding legal and policy issues in this area or provide Congress with specific actions it could take to resolve them. State agreed to work together with DHS to address these recommendations. In February 2004, we reported that the time it takes to adjudicate a visa for a science student or scholar depends largely on whether an applicant must undergo a security check known as Visas Mantis, which is designed to protect against sensitive technology transfers. Based on a random sample of Visas Mantis cases for science students and scholars, we found it took an average of 67 days for the interagency security check to be processed and for State to notify the post. We also found that the way in which Visas Mantis information was disseminated at headquarters made it difficult to resolve some cases expeditiously. Finally, consular staff at posts we visited stated that they lacked clear guidance on the Visas Mantis program. While State and FBI officials acknowledged there had been lengthy waits, they reported having measures under way to improve the process and to identify and resolve outstanding Visas Mantis cases. We recommended that the Secretary of State, in coordination with the Director of the FBI and the Secretary of Homeland Security, develop and implement a plan to improve the Visas Mantis process. We are currently reviewing the measures these agencies have taken since our February report to improve the Visas Mantis program and will report on our findings at the beginning of next year. 
Overall, we have reported on a number of areas in which joint, coordinated actions by DHS and State are needed to improve border security and visa processing. In commenting on our report on State’s biometric program, both DHS and State pledged their commitment to continued cooperation and joint actions. Indeed, these agencies are currently working together as part of the US-VISIT program. For example, State participates in two DHS-led groups designed to oversee and manage the US-VISIT program. First, State participates on the US-VISIT Federal Stakeholders Advisory Board, which provides guidance and direction to the US-VISIT program. State also participates as part of the US-VISIT Integrated Project Team, which meets weekly to discuss, among other things, operational issues concerning the deployment of US-VISIT. Mr. Chairman, overall, our work has demonstrated that coordinated, joint actions by State and DHS are critical for homeland and border security. State and DHS have worked together to roll out the biometric technology to consular posts worldwide on schedule. Moreover, their cooperation on US-VISIT will be critical to ensure that information is available to consulates to adjudicate visa applications and prevent persons from unlawfully entering the United States. However, they have not yet provided comprehensive guidance to the posts on how the program and biometric information should be used to adjudicate visas. We recognize that it may not be feasible for each post to implement biometric visas in the same way, given the variances among posts in workload, security concerns with the applicant pool, facilities, and personnel. However, guidance to posts on how best to implement the program, including best practices, would enable posts to develop operating procedures, identify resource needs, and implement mitigating actions to address the unique circumstances at each post. 
Therefore we have recommended that the Secretaries of Homeland Security and State develop and provide comprehensive guidance to consular posts on how best to implement the Biometric Visa Program. The guidance should address the planned uses for the information generated by the Biometric Visa Program at consular posts, including directions to consular officers on when and how information from the IDENT database on visa applicants should be considered. Further, we have recommended that the Secretary of State direct consular posts to develop an implementation plan based on this guidance. DHS generally concurred with our recommendations, stating that GAO’s identification of areas where improvements are needed in the Biometric Visa Program will contribute to ongoing efforts to strengthen the visa process. State acknowledged that there may be a lag in guidance. Regarding US-VISIT, we made an earlier recommendation that the Secretary of Homeland Security clarify the operational context in which US-VISIT is to operate. DHS agreed with our recommendation and plans to issue the next version of its enterprise architecture in September of 2004. This is an essential component in establishing biometric policy and creating consistency between the DHS-run US-VISIT program and State’s Biometric Visa Program. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the committee may have. For questions regarding this testimony, please call Jess Ford at (202) 512-4128. Other key contributors to this statement include John Brummet, Sharron Candon, Deborah Davis, Kathryn Hartsburg, David Hinchman, and David Noone. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since September 11, 2001, the U.S. government has made a concerted effort to strengthen border security by enhancing visa issuance policies and procedures, as well as expanding screening of the millions of foreign visitors who enter the United States annually. Consistent with the 9/11 Commission report that recommends a biometric entry-exit screening system for travelers, the Department of State's biometric program complements the Department of Homeland Security's (DHS) United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program--a governmentwide program to better control and monitor the entry, visa status, and exit of visitors. GAO was asked to present the findings of its report on State's Biometric Visa Program, as well as discuss other aspects of visa processing and border security that require coordinated, joint actions by State and DHS. Our report issued today finds that State is implementing the Biometric Visa Program on schedule and will likely meet the October 26, 2004, deadline for issuing visas that include biometric indicators, as mandated by Congress. As of September 1, 2004, State had installed program hardware and software at 201 visa-issuing posts overseas and plans to complete the installation at the remaining 6 posts by September 30. Technology installation has progressed smoothly; however, State and DHS have not provided comprehensive guidance to consular posts on when and how information from the DHS Automated Biometric Identification System (IDENT) on visa applicants should be considered by adjudicating consular officers. In the absence of such guidance, we found that these officers are unclear on how best to use the biometric program and IDENT information. 
Since September 11, State and DHS have made many improvements to visa issuance and border security policies. Nevertheless, in prior reports, we have found additional vulnerabilities that need to be addressed through joint, coordinated actions. For example, DHS has not adequately defined the operational context for US-VISIT, which affects the biometric program. In addition, we identified systemic weaknesses in information sharing between State and DHS in the visa revocation process. Moreover, we found related weaknesses in an interagency security check process aimed at preventing the illegal transfer of sensitive technologies.
STEM fields include a wide range of disciplines and occupations, including agriculture, physics, psychology, medical technology, and automotive engineering. Many of these fields require completion of advanced courses in mathematics or science, subjects that are first introduced and developed at the kindergarten through 12th grade level. The federal government, universities and colleges, and other entities have taken steps to help improve achievement in these and other subjects through such actions as enforcement of NCLBA, which addresses both student and teacher performance at the elementary and secondary school levels, and implementation of programs to increase the numbers of women, minorities, and students with disadvantaged backgrounds in the STEM fields at postsecondary school levels and later in employment. The participation of domestic students in STEM fields—and in higher education more generally—is affected both by the economy and by demographic changes in the U.S. population. Enrollment in higher education has declined with upturns in the economy because of the increased opportunity costs of going to school when relatively high wages are available. The choice between academic programs is also affected by the wages expected to be earned after obtaining a degree. Demographic trends affect STEM fields because different races and ethnicities have had different enrollment rates, and their representation in the population is changing. In particular, STEM fields have had a relatively high proportion of white or Asian males, but the proportion of other minorities enrolled in the nation’s public schools, particularly Hispanics, has almost doubled since 1972. Furthermore, as of 2002, American Indians, Asians, African-Americans, Hispanics, and Pacific Islanders constituted 29 percent of all college students. Students and employees from foreign countries have pursued STEM degrees and worked in STEM occupations in the United States as well. 
To do so, these students and employees must obtain education or employment visas. Visas may not be issued to students for a number of reasons, including concerns that the visa applicant may engage in the illegal transfer of sensitive technology. Many foreign workers enter the United States annually through the H–1B visa program, which assists U.S. employers in temporarily filling specialty occupations. Employed workers may stay in the United States on an H–1B visa for up to 6 years, and the current cap on the number of H–1B visas that can be granted is 65,000. The law exempts certain workers from this cap, including those in specified positions or holding a master’s degree or higher from a U.S. institution. The federal government also plays a role in helping coordinate federal science and technology initiatives. The National Science and Technology Council (NSTC) was established in 1993 and is the principal means for the Administration to coordinate science and technology policies. One objective of NSTC is to establish clear national goals for federal science and technology investments in areas ranging from information technologies and health research to improving transportation systems and strengthening fundamental research. From the 1994–1995 academic year to the 2003–2004 academic year, the number of graduates with STEM degrees increased, but the proportion of students obtaining degrees in STEM fields fell. Teacher quality, academic preparation, collegiate degree requirements, and the pay for employment in STEM fields were cited by university officials and Education as factors affecting the pursuit of degrees in these fields. The number of graduates with degrees in STEM fields increased from approximately 519,000 to approximately 578,000 from the 1994–1995 academic year to the 2003–2004 academic year. However, during this same period, the number of graduates with degrees in non-STEM fields increased from about 1.1 million to 1.5 million. 
Thus, the percentage of students with STEM degrees decreased from about 32 percent to about 27 percent of total graduates. The largest increases at the bachelor’s and master’s levels were in mathematics and the computer sciences, and the largest increase at the doctoral level was in psychology. However, the overall number of students earning degrees in engineering decreased in this period, and the number of students earning doctoral degrees in the physical sciences and bachelor’s degrees in technology-related fields, as well as several other fields, also declined. Figure 1 shows the number of graduates for STEM and non-STEM fields in the 1994–1995 through 2003–2004 academic years. From the 1994–1995 academic year to the 2002–2003 academic year, the proportion of women earning degrees in STEM fields increased at the bachelor’s, master’s, and doctoral levels, and the proportion of domestic minorities increased at the bachelor’s level. Conversely, the total number of men graduates decreased, and the proportion of men graduates declined in the majority of STEM fields at all educational levels in this same period. However, men continued to constitute over 50 percent of the graduates in most STEM fields. The proportion of domestic minorities increased at the bachelor’s level but did not change at the master’s or doctoral level. In the 1994–1995 and 2002–2003 academic years, international students earned about one-third or more of the degrees at both the master’s and doctoral levels in engineering, math and computer science, and the physical sciences. University officials told us and researchers reported that the quality of teachers in kindergarten through 12th grades and the levels of mathematics and science courses completed during high school affected students’ success in and decisions about pursuing STEM fields. 
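The drop in the STEM share follows directly from the graduate counts reported above; a quick check using the rounded figures from the text:

```python
# Recomputing the STEM share of all graduates from the counts in the text
# (counts rounded as reported, so results approximate the report's figures).

stem_1995, non_stem_1995 = 519_000, 1_100_000
stem_2004, non_stem_2004 = 578_000, 1_500_000

share_1995 = stem_1995 / (stem_1995 + non_stem_1995)
share_2004 = stem_2004 / (stem_2004 + non_stem_2004)
print(f"STEM share of graduates: {share_1995:.1%} -> {share_2004:.1%}")  # 32.1% -> 27.8%
```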
University officials said that some teachers were unqualified and unable to impart the subject matter, causing students to lose interest in mathematics and science. In 2002, Education reported that, in the 1999–2000 school year, 45 percent of the high school students enrolled in biology/life science classes and approximately 30 percent of those enrolled in mathematics, English, and social science classes were instructed by teachers without a major, minor, or certification in these subjects—commonly referred to as “out-of-field” teachers. Also, states reported that the problem of underprepared teachers was worse on average in districts that serve large proportions of high-poverty children. In addition to teacher quality, students’ high school preparation in mathematics and science was cited by university officials and researchers as a factor that influenced students’ participation and success in the STEM fields. For example, university officials said that, because many students had not taken higher-level mathematics and science courses such as calculus and physics in high school, they were immediately behind other students. A study of several hundred college students who had left the STEM fields found that about 40 percent of those who left the science fields reported problems related to their high school science preparation. Several other factors were cited by university officials, students, and others as influencing decisions about participation in STEM fields. These factors included the relatively low pay in STEM occupations, additional tuition costs to obtain STEM degrees, and the availability of mentoring, especially for women and minorities, in the STEM fields. For example, officials from five universities told us that low pay in STEM occupations relative to other fields such as law and business dissuaded students from pursuing STEM degrees. 
Also, in a study that solicited the views of college students who left the STEM fields as well as those who continued to pursue STEM degrees, researchers found that students experienced greater financial difficulties in obtaining their degrees because of the extra time needed to obtain degrees in certain STEM fields. University officials, students, and other organizations suggested a number of steps that could be taken to encourage more participation in the STEM fields. University officials and students suggested more outreach, especially to women and minorities from kindergarten through the 12th grade. One organization, Building Engineering and Science Talent (BEST), suggested that research universities increase their presence in pre-kindergarten through 12th grade mathematics and science education in order to strengthen domestic students’ interests and abilities. In addition, the Council of Graduate Schools called for a renewed commitment to graduate education by the federal government through actions such as providing funds to support students trained at the doctoral level in the STEM fields and expanding participation in doctoral study in selected fields through graduate support awarded competitively to universities across the country. University officials suggested that the federal government could enhance its role in STEM education by providing more effective leadership through developing and implementing a national agenda for STEM education and increasing federal funding for academic research. Although the total number of STEM employees increased from 1994 to 2003, particularly in mathematics and computer science, there was no evidence that the number of employees in engineering and technology-related fields increased. University officials, researchers, and others cited the availability of mentors as having a large influence on the decision to enter STEM fields and noted that many students with STEM degrees find employment in non-STEM fields. 
The number of foreign workers declined in STEM fields, in part because of declines in enrollment in U.S. programs resulting from difficulties with the U.S. visa system. Key factors affecting STEM employment decisions include the availability of mentors for women and minorities and opportunities abroad for foreign workers. From 1994 to 2003, employment in STEM fields increased from an estimated 7.2 million to an estimated 8.9 million—representing a 23 percent increase, as compared to a 17 percent increase in non-STEM fields. While the total number of STEM employees increased, this increase varied across STEM fields. Coinciding with the spread of the Internet and the personal computer, employment increased by an estimated 78 percent in the mathematics/computer sciences fields and by an estimated 20 percent in the sciences. There was no evidence that the number of employees in the engineering and technology-related fields increased. Further, a 2006 National Science Foundation report found that about two-thirds of employees with degrees in science or engineering were employed in fields somewhat or not at all related to their degree. Figure 2 shows the estimated number of employees in STEM fields. The numbers of women and minorities employed in STEM fields increased between 1994 and 2003, and the number of foreign workers declined. While the estimated number of women employees in STEM fields increased from about 2.7 million to about 3.5 million in this period, this did not result in a change in the proportion of women employees in the STEM fields relative to men. Specifically, women constituted an estimated 38 percent of the employees in STEM fields in 1994 and an estimated 39 percent in 2003, compared to 46 and 47 percent of the civilian labor force in 1994 and 2003, respectively. 
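The employment estimates above can be cross-checked with simple arithmetic. The inputs are the rounded figures from the text, so the computed values differ slightly from the report's own rounding of the underlying data:

```python
# Cross-checking the STEM employment statistics cited in the text
# (inputs are the rounded estimates quoted above).

stem_1994, stem_2003 = 7.2e6, 8.9e6      # estimated total STEM employment
women_1994, women_2003 = 2.7e6, 3.5e6    # estimated women in STEM

growth = stem_2003 / stem_1994 - 1
print(f"STEM employment growth: {growth:.1%}")               # ~23.6% (reported: 23 percent)
print(f"Women's share, 1994: {women_1994 / stem_1994:.0%}")  # 38%
print(f"Women's share, 2003: {women_2003 / stem_2003:.0%}")  # 39%
```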
The estimated number of minorities employed in the STEM fields as well as the proportion of total STEM employees they constituted increased, but African-American and Hispanic employees remained underrepresented relative to their percentages in the civilian labor force. For example, in 2003, Hispanic employees constituted an estimated 10 percent of STEM employees compared to about 13 percent of the civilian labor force. Foreign workers traditionally had filled hundreds of thousands of positions, many in STEM fields, through the H–1B visa program. In recent years, these numbers have declined in certain fields. For example, the number of approvals for systems analysis/programming positions decreased from about 163,000 in 2001 to about 56,000 in 2002. University officials and congressional commissions noted the important role that mentors play in encouraging employment in STEM fields and that this was particularly important for women and minorities. One professor said that mentors helped students by advising them on the best track to follow for obtaining their degrees and achieving professional goals. In September 2000, a congressional commission reported that women were adversely affected throughout the STEM education pipeline and career path by a lack of role models and mentors. University officials and education policy experts told us that competition from other countries for educational and work opportunities and the stricter U.S. visa process in place since September 11, 2001, affected the decisions of international students and workers about studying and working in the United States. For example, university officials told us that students from several countries, including China and India, were being recruited by universities and employers in their own countries and in other countries, as well as in the United States. They told us that students were also influenced by a perceived unwelcoming attitude among Americans and by the complex visa process.
GAO has reported on several aspects of the visa process and has made several recommendations for improving federal management of the process. In 2002, we cited the need for a clear policy on how to balance national security concerns with the desire to facilitate legitimate travel when issuing visas. In 2005, we reported a significant decline in certain visa processing times and in the number of cases pending more than 60 days, and we also reported that in some cases science students and scholars could obtain a visa within 24 hours. However, in 2006, we found that new policies and procedures instituted since the September 11 attacks to strengthen the security of the visa process, along with other factors, had resulted in applicants facing extensive wait times for visas at some consular posts. Officials from 13 federal civilian agencies reported spending about $2.8 billion in fiscal year 2004 for 207 education programs designed to support STEM fields, but they reported little about the effectiveness of these programs. Although evaluations had been done or were under way for about half of the programs, little is known about the extent to which most STEM programs are achieving their desired results. Furthermore, coordination among the federal STEM education programs has been limited. However, in 2003, the National Science and Technology Council formed a subcommittee to address STEM education and workforce policy issues across federal agencies, and Congress has introduced new STEM initiatives as well. Officials from 13 federal civilian agencies reported that approximately $2.8 billion was spent in fiscal year 2004 on 207 STEM education programs.
The funding levels for STEM education programs among the agencies ranged from about $998 million for the National Institutes of Health to about $4.7 million for the Department of Homeland Security, and the numbers of programs ranged from 51 to 1 per agency, with two agencies—NIH and the National Science Foundation—administering nearly half of the programs. Most STEM education programs were funded at $5 million or less, but 13 programs were funded at more than $50 million, and the funding reported for individual programs varied significantly. For example, one Department of Agriculture-sponsored scholarship program for U.S. citizens seeking bachelor’s degrees at Hispanic-serving institutions was funded at $4,000, and one NIH grant program designed to develop and enhance research training opportunities was funded at about $547 million. Figure 3 shows the funding and number of STEM education programs by federal civilian agency. According to the agency responses to GAO’s survey, most STEM education programs had multiple goals, and one goal was to attract students or graduates to pursue STEM degrees and occupations. Many STEM programs also were designed to provide student research opportunities, provide support to educational institutions, or improve teacher training. In order to achieve these goals, many of the programs were targeted at multiple groups and provided financial assistance to multiple beneficiaries. STEM education programs most frequently provided financial support for students or scholars, and several programs provided assistance for teacher and faculty development as well. U.S. citizenship or permanent residence was required for the majority of programs. Table 1 presents the most frequent program goals and types of assistance provided.
Agency officials reported that evaluations—which could play an important role in improving program operations and ensuring an efficient use of federal resources—had been completed or were under way for about half of the STEM education programs. However, evaluations had not been done for over 70 programs that were started before fiscal year 2002, including several that had been operating for over 15 years. For the more than 30 remaining programs, which were initially funded in fiscal year 2002 or later, it may have been too soon to expect evaluations. Coordination of federal STEM education programs has been limited. In January 2003, the National Science and Technology Council’s Committee on Science (COS) established a subcommittee on education and workforce development. According to its charter, the subcommittee is to address education and workforce policy issues and research and development efforts that focus on STEM education issues at all levels, as well as current and projected STEM workforce needs, trends, and issues. The subcommittee has working groups on (1) human capacity in STEM areas, (2) minority programs, (3) effective practices for assessing federal efforts, and (4) issues affecting graduate and postdoctoral researchers. NSTC reported that, as of June 2005, the subcommittee had a number of accomplishments and had other projects under way related to attracting students to STEM fields. For example, it had surveyed federal agency education programs designed to increase the participation of women and underrepresented minorities in STEM studies, and it had coordinated the Excellence in Science, Technology, Engineering, and Mathematics Education Week activities, which provide an opportunity for the nation’s schools to focus on improving mathematics and science education.
In addition, the subcommittee is developing a Web site for federal educational resources in STEM fields and a set of principles that agencies could use in setting levels of support for graduate and postdoctoral fellowships and traineeships. In passing the Deficit Reduction Act of 2005, Congress created a new source of grant aid for students pursuing a major in the physical sciences, the life sciences, the computer sciences, mathematics, technology, engineering, or a foreign language considered critical to the national security of the United States. These National Science and Mathematics Access to Retain Talent Grants—or SMART Grants—provide up to $4,000 for each of 2 academic years for eligible students. Eligible students are those who are in their third or fourth academic year of a program of undergraduate education at a 4-year degree-granting institution, have maintained a cumulative grade point average of 3.0 or above, and meet the eligibility requirements of the federal government’s need-based Pell Grant program. Education expects to provide $790 million in SMART Grants to over 500,000 students in academic year 2006–2007. Congress also established an Academic Competitiveness Council in passing the Deficit Reduction Act of 2005. The council is composed of officials from federal agencies with responsibilities for managing existing federal programs that promote mathematics and science and is chaired by the Secretary of Education. Among the statutory duties of the council are to (1) identify all federal programs with a mathematics and science focus, (2) identify the target populations being served by such programs, (3) determine the effectiveness of such programs, (4) identify areas of overlap or duplication in such programs, and (5) recommend ways to efficiently integrate and coordinate such programs. Congress also charged the council to provide it with a report of its findings and recommendations by early 2007. 
In an April 2006 hearing before the House Committee on Education and the Workforce, the Secretary of Education testified that she and President Bush convened the first meeting of the council on March 6, 2006. While the total numbers of STEM graduates have increased, some fields have experienced declines, especially at the master’s and doctoral levels. Given the trends in the numbers and percentages of graduates with STEM degrees—particularly advanced degrees—and recent developments that have influenced international students’ decisions about pursuing degrees in the United States, it is uncertain whether the number of STEM graduates will be sufficient to meet future academic and employment needs and help the country maintain its technological competitive advantage. Moreover, although international graduate applications increased in academic year 2005–2006 for the first time in 3 years, it is too early to tell if this marks the end of declines in international graduate student enrollment. In terms of employment, despite some gains, the percentage of women in the STEM workforce has not changed significantly, minority employees remain underrepresented relative to their employment in the civilian labor force, and many graduates with degrees in STEM fields are not employed in STEM occupations. Women now outnumber men in college enrollment, and minority students are also enrolling at record-high levels at the postsecondary level. To the extent that these populations have been historically underrepresented in STEM fields, they provide an as-yet untapped source of future STEM participation. To help improve the trends in the numbers of graduates and employees in STEM fields, university officials and others made several suggestions, such as increasing the federal commitment to STEM education programs.
However, before expanding the number of federal programs, it is important to know the extent to which existing STEM education programs are appropriately targeted and making the best use of available federal resources—in other words, these programs must be evaluated—and no comprehensive evaluation of federal STEM education programs currently exists. Furthermore, the recent initiatives to improve federal coordination, such as the Academic Competitiveness Council, serve as an initial step in reducing unnecessary overlap between programs, not an ending point. In an era of limited financial resources and growing federal deficits, information about the effectiveness of these programs can help guide policymakers and program managers in coordinating and improving existing programs as well as determining areas in which new programs are needed. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Committee may have. For further contacts regarding this testimony, please call Cornelia M. Ashby at (202) 512–7215. Individuals making key contributions to this testimony include Jeff Appel (Assistant Director), Jeff Weinstein (Analyst-in-Charge), Carolyn Taylor, Tim Hall, Mark Ward, John Mingus, and Katharine Leavitt. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The United States is a world leader in scientific and technological innovation. To help maintain this advantage, the federal government has spent billions of dollars on education programs in the science, technology, engineering, and mathematics (STEM) fields for many years.
However, concerns have been raised about the nation's ability to maintain its global technological competitive advantage in the future. This testimony is based on our October 2005 report and presents information on (1) trends in degree attainment in STEM- and non-STEM-related fields and factors that may influence these trends, (2) trends in the levels of employment in STEM- and non-STEM-related fields and factors that may influence these trends, and (3) federal education programs intended to support the study of and employment in STEM-related fields. For this report, we analyzed survey responses from 13 civilian federal departments and agencies; analyzed data from the Departments of Education and Labor; interviewed educators, federal agency officials, and representatives from education associations and organizations; and interviewed students. While postsecondary enrollment has increased over the past decade, the proportion of students obtaining degrees in STEM fields has fallen. In academic year 1994-1995, about 519,000 students (32 percent) obtained STEM degrees. About 578,000 students obtained STEM degrees in academic year 2003-2004, accounting for 27 percent of degrees awarded. Despite increases in enrollment and degree attainment by women and minorities at the graduate level, the number of graduate degrees conferred fell in several STEM-related fields from academic year 1994-1995 to academic year 2003-2004. College and university officials and students most often cited subpar teacher quality and poor high school preparation as factors that discouraged the pursuit of STEM degrees. Suggestions to encourage more enrollment in STEM fields include increased outreach and mentoring. The past decade has seen an increase in STEM employees, particularly in mathematics and computer science. From 1994 to 2003, employment in STEM fields increased by an estimated 23 percent, compared to 17 percent in non-STEM fields.
Mathematics and computer science showed the highest increase in STEM-related employment, and employment in science-related fields increased as well. However, in certain STEM fields, including engineering, the number of employees did not increase significantly. Further, while the estimated number of women, African-Americans, and Hispanic-Americans employed in STEM fields increased, women and minorities remained underrepresented relative to their numbers in the civilian labor force. The number of foreign workers employed in the United States has fluctuated, experiencing declines in 2002 and 2003. Key factors affecting STEM employment decisions include mentoring for women and minorities and opportunities abroad for foreign employees. Thirteen federal civilian agencies spent approximately $2.8 billion in fiscal year 2004 to fund over 200 programs designed to increase the numbers of students in STEM fields and employees in STEM occupations and to improve related educational programs. The funding reported for individual STEM education programs varied significantly, and programs most commonly provided financial support to students or infrastructure support to institutions. However, only half of these programs had been evaluated or had evaluations under way, and coordination among STEM education programs was limited. It is important to know the extent to which existing STEM education programs target the right people and the right areas and make the best use of available resources. Since our report was issued in October 2005, Congress, in addition to establishing new grants to encourage students from low-income families to enroll in STEM fields, established an Academic Competitiveness Council to identify, evaluate, coordinate, and improve federal STEM programs.
Numerous international and domestic organizations play a role in the security of maritime energy commodities. The list of stakeholders outside the United States is quite diverse. They include international organizations, governments of nations where tankers load or where tankers are registered, and owners and operators of tankers or facilities (see table 1). On the domestic side, the U.S. Coast Guard is the lead federal agency and is responsible for a wide array of maritime safety and security activities. Other U.S. government agencies support the Coast Guard’s maritime security mission by addressing a wide range of issues that affect the flow of cargo and people into the United States. State and local governments and the private sector also have responsibilities to secure domestic ports. Table 2 lists key federal agencies and other stakeholders on the domestic side, together with examples of the kinds of maritime security activities performed. All of these international and domestic stakeholders help to ensure the safety and security of a global supply chain that brings energy commodities to the United States. This supply chain spans the globe and reaches many regions of the world. Each day, the United States imports many different energy commodities from overseas suppliers in Africa, Europe, the Middle East, and North and South America. Excluding Canada, which supplies petroleum and natural gas to the United States via pipeline, the vast majority of these varied imports arrive by tanker. The various types of energy commodities require different handling methods, and as a result, various kinds of tankers have been built to accommodate them. An LNG carrier is designed for transporting LNG at minus 260 degrees Fahrenheit, the temperature at which natural gas liquefies and shrinks drastically in volume. The cargo is transported in special tanks insulated to minimize evaporation. LNG carriers are up to 1,000 feet long and have a draft (depth below the water line) of 40 feet when fully loaded.
The global LNG fleet is expected to double from 200 vessels in 2006 to over 400 by 2010. According to industry reports, the existing fleet has completed more than 33,000 voyages without a substantial spill. Oil tankers are more numerous and vary greatly in size. Tankers transporting crude oil from the Middle East are generally Very Large Crude Carriers, which typically carry more than 2 million barrels of oil per voyage. These ships are over 1,000 feet long, nearly 200 feet wide, and have a draft of over 65 feet. Figure 1 shows a typical Very Large Crude Carrier. These ships are too big for most U.S. ports and must transfer their loads to smaller tankers (a process called lightering) or unload at an offshore terminal. At present, the United States has only one such offshore terminal—the Louisiana Offshore Oil Port (LOOP). Most tankers transporting cargos from the Caribbean and South America, by contrast, are smaller than Very Large Crude Carriers and can enter U.S. ports directly. There are generally two enforcement systems aimed at ensuring that these vessels are in compliance with applicable regulations, laws, and conventions: flag state control and port state control. The flag state is the country in which the vessel is registered. Flag state control can extend anywhere in the world where the vessel operates. For example, a flag state’s requirements set the standards for the operation and maintenance of all vessels flying that flag. If the flag state is a contracting government to the SOLAS Convention, these standards are required to be at least as stringent as those included in the ISPS Code. The port state is the country where the port is located. Port state control is the process by which a nation exercises its authority over foreign-flagged vessels operating in waters subject to its jurisdiction. It is intended to ensure that vessels comply with all domestic requirements for ensuring safety of the port, environment, and personnel.
Thus, when a foreign-flagged oil tanker enters a U.S. port, the U.S. port state control program, administered by the U.S. Coast Guard, becomes the primary means of marine safety enforcement. For example, the Oil Pollution Act of 1990 requires that all tankers built after 1994 coming to the United States have double hulls—that is, a two-layered hull to help prevent spills resulting from a collision or grounding (see fig. 2). According to the Energy Information Administration, the United States consumes more than 20 million barrels of petroleum every day. Of that amount, over 65 percent comes from foreign sources. The top suppliers of crude oil and petroleum products to the United States in 2005 were Canada, Mexico, Saudi Arabia, Venezuela, and Nigeria—each supplying over 1 million barrels of petroleum per day (see fig. 3). Iraq, Algeria, Angola, Russia, and the United Kingdom are also major energy suppliers, with daily imports to the United States of up to 500,000 barrels per day. These top 10 energy suppliers accounted for approximately 75 percent of all U.S. petroleum imports in 2005. All petroleum imports to the United States from those countries arrive on tankers, except those from Canada. Imports are a growing portion of the natural gas supply in the United States. With consumption of natural gas growing faster than domestic production, imports of natural gas will almost certainly continue to rise, according to the Energy Information Administration. Today, Canada is the primary supplier of natural gas to the United States, and all natural gas imports from Canada are carried by pipeline. Approximately 3 percent of all natural gas imports to the United States is LNG. Trinidad and Tobago is the single largest supplier of LNG to the United States, supplying 70 percent of all LNG imported into this country (see fig. 4). Other LNG suppliers in 2005 included Algeria, Egypt, Malaysia, Nigeria, Qatar, and Oman.
The United States imports about 65 percent of its crude oil and petroleum products as well as about 3 percent of its natural gas needs. As shown in figure 5, certain energy commodities are imported into particular regions of the country. Appendix II provides detailed descriptions of U.S. energy commodity imports transported by tanker. For example, in 2004, ports along the Gulf Coast imported 62 percent of the crude oil imported to the United States; ports along the East Coast imported 95 percent of the gasoline and 75 percent of the LNG; and ports along the West Coast imported 60 percent of all jet fuel. The global maritime environment through which the energy supply chain operates is constrained by physical geography and influenced by regional political dynamics. The physical geography of the continents, for example, forces shipping lanes to pass through certain narrow channels, or chokepoints. There are approximately 200 such locations, but only a handful are of strategic importance for the global energy supply (see fig. 6). A chokepoint by definition tends to be shallow and narrow, resulting in impaired navigation and congestion from other tankers, cargo ships, and other smaller vessels, which can impede the free and efficient flow of goods. Moreover, several key chokepoints are surrounded by more than one sovereign nation, resulting in a complex security environment within a constrained physical space; successfully managing security in these locations requires significant coordination among the surrounding countries. According to the Energy Information Administration, chokepoints are susceptible to pirate attacks and shipping accidents in their narrow channels. In addition, chokepoints can be blocked, mined, or rendered inaccessible by foreign naval forces, with potentially devastating consequences for the flow of oil and goods around the world and into the United States.
The Straits of Hormuz and Malacca are two critical maritime shipping chokepoints that tankers pass through regularly. The Strait of Hormuz, which connects the oil fields of the Persian Gulf with the Gulf of Oman and the Indian Ocean, is the most important chokepoint in the world in terms of the global energy supply, with about 20 percent of the world oil supply, including 17 percent of U.S. petroleum imports, passing through it. Tankers with oil from the Persian Gulf must navigate through this chokepoint in order to access the principal international shipping lanes toward the United States. Another chokepoint, the Strait of Malacca, links the Andaman Sea and the Indian Ocean (and oil coming from the Middle East) with the South China Sea and the Pacific Ocean (and major consuming markets in Asia). The Strait of Malacca is bordered by Malaysia, Indonesia, and Singapore, and about 600 vessels pass through it each day. Piracy and political instability in the region, especially in Indonesia, are issues of concern for shipping operations in the strait. The Energy Information Administration identified other important maritime chokepoints, including the Bab el-Mandab passage from the Arabian Sea, the Panama Canal connecting the Pacific and Atlantic Oceans, the Suez Canal connecting the Red Sea to the Mediterranean Sea, and the Bosporus Straits linking the Black Sea to the Mediterranean Sea. Besides facing vulnerabilities while in transit, vessels can be vulnerable while moored at facilities where they are receiving or unloading their cargoes, and the energy-related infrastructure located in ports can also be vulnerable to attack. Vessels transiting into and out of ports and their attendant infrastructure can be vulnerable in a number of ways. During transit into and out of port, these vessels travel slowly, which increases their exposure. Tankers follow timetables that are easy to track in advance, and they follow a fixed set of maritime routes.
Once tankers arrive in this country, they must wait offshore for pilots to navigate the ship channels into many of the nation’s ports. Since the terrorist attacks of September 11, increased national attention has been focused on the potential vulnerability of the nation’s 361 major seaports to terrorist attack. According to the National Strategy for Maritime Security, the infrastructure and systems that span the maritime domain have increasingly become both targets of and potential conveyances for dangerous and illicit activities. GAO has previously reported that ports are vulnerable because they are sprawling, interwoven with complex transportation networks, close to crowded metropolitan areas, and easily accessible. Ports and their maritime approaches, including waterways and coastal areas, facilitate freedom of movement and the flow of goods while allowing people, cargo, and vessels to transit with relative anonymity. Some energy terminals are located in open seas where they are accessible by water or air, while others are located in metropolitan areas, along key shipping channels, or near pristine environmental sanctuaries where they may be accessible by water, air, or land. In the wake of the terrorist attacks of September 11, 2001, there was widespread acknowledgment that numerous and substantial gaps existed in homeland security. There is also widespread acknowledgment, however, that resources for closing these gaps are limited and must compete with other national priorities. It is improbable that any security framework can successfully anticipate and thwart every type of potential terrorist threat that highly motivated, well-skilled, and adequately funded terrorist groups could perpetrate. While security efforts clearly matter, various groups such as the 9/11 Commission have emphasized that total security cannot be bought, no matter how much is spent on it.
In short, the nation cannot afford to protect everything against all threats, even within the relatively narrow context of tanker security. Choices are clearly involved—including decisions about the relative vulnerability posed by attacks on energy commodity tankers as compared with attacks in other forms, such as air safety or security in crowded urban centers. In this context, risk management has become a widely endorsed strategy for helping policymakers make decisions about allocating finite resources in such circumstances. It emphasizes the importance of assigning available resources to address the greatest risks, along with selecting those strategies that make the most efficient and effective use of resources. Risk management has received widespread support from Congress, the President, and the Secretary of Homeland Security as a tool that can help set priorities and inform decisions about mitigating risks. Even though intelligence sources have reported that there are currently no specific credible threats to energy tankers in U.S. waters or their attendant facilities on U.S. soil, attacks overseas show that tankers face several major types of threats, and if a threat were to be successfully carried out domestically, it could have serious consequences. Overseas, terrorists have demonstrated the ability to carry out at least three types of threats. First, and of greatest concern, according to officials we spoke with, is a suicide attack against a tanker or attendant facility. Second is a standoff missile attack using a rocket or some other weapon launched from a distance. Third is an armed assault by terrorists or armed bands while a tanker is moored or in transit. There are additional types of threats, including internal crew conspiracies and collisions with a vessel piloted by terrorists. While attacks have so far occurred only overseas, two Coast Guard admirals testified before Congress that malicious maritime incursions into U.S. 
waters, such as immigrant or drug smuggling, occur regularly. If an attack on a commodity tanker were successful in U.S. waters or while the tanker was docked at a U.S. unloading facility, substantial public safety, environmental, and economic consequences could result. Public safety and environmental consequences of an attack vary by commodity. For instance, LNG and LPG are highly combustible and pose a risk to public safety of fire or—in a more unlikely scenario in which they are in a confined space—explosion. The environmental impact of LNG and LPG spills, however, would be minimal, since they dissipate in a short period of time. Crude oil and heavy petroleum products remain in the environment after they are spilled and must be removed, potentially causing significant environmental damage. Potential economic consequences of an attack include psychological market responses as well as significant delays and possible shortages if major transit routes, key facilities, or ports are closed. According to U.S. government intelligence sources, there have been no specific credible terrorist threats to tankers in U.S. waters or their unloading facilities on U.S. soil in the wake of the September 11 attacks. Nonetheless, several events overseas and intelligence reports indicate ongoing concern about the potential for an attack against tankers or energy facilities. Heightened security threat levels in response to potential threats. The Coast Guard has raised the Maritime Security (MARSEC) level from Level 1 to Level 2 on several occasions in response to nonspecific or general threats based on intelligence or other warnings to the maritime sector. Other intelligence indicating ports are targets under consideration. Security officials in the U.S. government are concerned about the possibility of a terrorist attack in a U.S. port in the future.
For example, captured terrorist training manuals cite seaports as targets and instruct trainees to use covert means to obtain surveillance information for use in attack planning. Terrorist leaders have also stated their intent to attack infrastructure targets within the United States, including seaports, in an effort to cause physical and economic damage, and inflict mass casualties. Continued policy priority for port security. Four years after passage of the Maritime Transportation Security Act of 2002, Congress remained sufficiently concerned about maritime security to again increase security efforts under the Security and Accountability for Every Port Act of 2006 (SAFE Port Act). This law (1) required the Department of Homeland Security to conduct terrorist watch list checks of newly hired port employees, (2) provided authority for risk-based funding through security grants to harden U.S. ports against terrorist attacks and enhance capabilities to respond to attacks and resume operations, and (3) required the Department of Homeland Security to develop protocols for resuming trade after a transportation security incident. Our discussions with officials of various agencies and our review of reports and other published documentation indicate that the following three types of attacks on tankers or attendant facilities are considered to be the most likely. In the maritime domain, suicide attacks have been carried out using a small, explosive-laden boat or vehicle that the attacker rams into a tanker or energy facility. The intent of such an attack is maximum damage to human or physical targets without concern for the life of the attacker. Previous attack history underscores terrorist intentions and capability to use small boat attacks. Moreover, intelligence experts say that the suicide boat attack uses a proven, simple strategy that has caused significant loss of life and significant damage to commercial and military vessels.
Several suicide attacks have been carried out against tankers and energy infrastructure in the Persian Gulf region. They have taken place in restricted waterways where a ship’s ability to maneuver or engage the attackers is hampered or when a ship has stopped or moored. For example: In April 2004 terrorists attacked the Al-Basrah and Khawr Al’Amaya offshore oil terminals in Iraq using vessels packed with explosives. Several oil tankers were either docked at or in the vicinity of the offshore terminals during the attack. Even though the speedboats detonated prematurely and missed striking the oil tankers and the offshore terminals, another small craft near the Khawr Al’Amaya terminal exploded when coalition forces attempted to intercept it, killing two U.S. Navy sailors and a U.S. Coast Guardsman. According to a recent study on maritime terrorism, the coordinated attack appears to have been part of an overall terrorist strategy to destabilize Iraq, and both terminals were shut down for 2 days, resulting in lost revenue of nearly $40 million. Another suicide attack occurred in October 2002 when terrorists rammed the French supertanker Limburg as it slowed for a pilot to approach the Ash Shihr Terminal off the coast of Yemen. (See fig. 7.) The resulting explosion breached the Limburg’s double hull and ignited stored oil on board the vessel. An estimated 90,000 barrels of oil were spilled, 1 crewman was killed, and 17 were injured. In addition to maritime suicide attacks, terrorists have also targeted energy facilities on land. In February 2006, for example, terrorists attempted to drive vehicles packed with explosives through the gates of a major oil-processing facility in Saudi Arabia’s eastern province. Al Qaeda claimed responsibility for the attack, which killed two Saudi guards and represented the first direct assault on a Saudi oil production facility. 
A second type of threat against tankers and attendant maritime infrastructure is a standoff missile attack using a rocket, mortar, or rocket-propelled grenade launched from a sufficient distance to evade defensive fire. Standoff missile attacks have been aimed at military ships in ports in the Persian Gulf, but these kinds of attacks also represent a serious type of threat against tankers. Terrorists launched such an attack using Katyusha rockets in 2005, narrowly missing two U.S. naval ships moored at a Jordanian port. Compared to suicide attacks, standoff attacks are easier to execute, but are less likely to be as effective, according to intelligence experts. The range, size, and accuracy of explosive projectiles used in such an attack could vary considerably. Armed assaults, particularly at critical shipping chokepoints, represent a third major type of threat to tankers along the energy supply chain, according to the International Maritime Bureau. These attacks on tankers and energy infrastructure have taken place where maritime security is lacking and they have been carried out in most cases by pirates seeking to gain control of the ship for financial gain, including petty theft and kidnapping of crew for ransom. Pirate attacks against tankers and cargo ships have taken place in numerous locations, including off the coast of Somalia, in the Gulf of Guinea and Persian Gulf, and along the Strait of Malacca. According to officials at the International Maritime Bureau, oil tankers account for about one-quarter of all pirate attacks. Pirate groups armed with automatic weapons have seized tankers in the Strait of Malacca and off the coast of Somalia. For example, in March 2006 pirates armed with automatic weapons hijacked a tanker off the coast of Somalia and demanded ransom payments for the release of the ship and its crew.
Also, attacks on offshore oil facilities have become commonplace in Nigeria, where local rebel groups claim to be fighting the Nigerian government over control of oil revenue. While no attacks on international oil tankers off the coast of Nigeria have occurred to date, militant groups in the area have threatened to escalate the conflict by attacking ships. There are other types of threats besides the three above, but assessments we reviewed and officials we met with indicated these other scenarios were less likely to occur. Two examples cited were the following: Crew conspiracies. Coast Guard intelligence reports suggest a hypothetical possibility that crew members (or persons posing as crew members) could conspire to commandeer a tanker with the intent of using the vessel as a weapon or disrupting maritime commerce. Vessel operators and industry groups do not consider this to be a serious threat, especially given the technical complexity of modern gas carriers and large oil tankers and the extensive vetting process for crew on these kinds of vessels. Crew conspiracies could also enable oil tankers or gas carriers to be used to transport terrorists. Intelligence officials estimated that the overall number of stowaways on all vessels entering U.S. ports would average 30 per month in 2005. There have been cases of stowaways with suspected terrorist connections on board U.S.-bound vessels since 2000. Collisions. One scenario related to armed assaults involves pirates or terrorists hijacking a large ship and ramming it into a tanker, an energy facility, or critical infrastructure such as a bridge. Although such scenarios require gaining control of a ship, terrorists' successful takeover of aircraft in the September 11 attacks demonstrates that such plans could be feasible. To date, there have been no known cases of terrorists intentionally using a vessel as a weapon, but there have been some close calls in pirate-prone areas.
Security experts point to an example in 2003 in which a group of pirates gained control of the chemical tanker Dewi Madrim in the Strait of Malacca. Once at the tanker's helm, the pirates altered the ship's speed, disabled communications, and steered the ship for over 1 hour before escaping with equipment and technical documents. Reports we reviewed and assessments we received indicate that the threat of seaborne terrorist attack on maritime energy tankers and infrastructure is likely to persist. The information we reviewed and discussions we had with agency officials indicate the greatest degree of concern remains overseas. For example, in October 2006 it was reported that there were threats against Saudi Arabia's Ras Tanura oil terminal, which is the world's biggest offshore oil facility, as well as a refinery in Bahrain. As part of its mission in the area, the U.S. Navy, together with coalition forces, continues to patrol areas containing critical maritime energy infrastructure to ensure their security, and works with regional navies in the Persian Gulf to improve their ability to enforce maritime security. In addition, Coast Guard maritime threat assessments we reviewed consider the threat of terrorists attacking vessels outside U.S. territorial waters to be significant. According to these reports, future maritime terrorist attacks are most likely to occur in the Persian Gulf, Red Sea, Mediterranean Sea, and Southeast Asia. Domestically, intelligence reports and other assessments continue to disclose incidents that demonstrate the need for continued concern about potential terrorist threats. For example, two Coast Guard admirals testified that the nation is subject to an estimated four malicious maritime incursions around the country each week. These incursions represent opportunities to infiltrate homeland security and could cause widespread human, economic, and environmental damage at our nation's maritime points of entry.
Most of these incursions to date have involved vessels bringing illegal immigrants, drugs, or other contraband into the country. A successful attack on an energy commodity tanker could have substantial public safety, environmental, and economic consequences. Public safety and environmental consequences vary by commodity. LNG and LPG are highly combustible and pose a risk to public safety of fire and explosions, but their environmental impact would be minimal since they dissipate in a short period of time. Crude oil and heavy petroleum products do not dissipate quickly and must be removed from the water, posing a greater environmental than public safety risk. Economic consequences of an attack could be substantial, not so much because of the loss of a tanker or its cargo, but because of the broader shock to the economy, particularly if major transit routes, key facilities, or ports are closed. Price spikes that reflect fears or expectations about the price and supply of energy commodities could also be significant. LNG and LPG spills pose primarily a public safety hazard to structures and people because of the potential for fires and explosions. These gaseous energy commodities are transported as liquids either by cooling or by pressurizing the gas. If spilled, they will return to their gaseous state, causing vapor to form above the spill. It is these vapors that will burn. Further, the vapors will drift away from the site of the spill if not immediately ignited by a source such as an open flame or strong static charge. Once ignited, the fire will travel back through the vapors toward the initial spill site and, if fuel remains, continue to burn near the tanker. A key element of how a fire will affect the public is the amount of heat it radiates, which is related to how smoky the fire burns—fires with a great deal of smoke radiate much less heat because the dark smoke absorbs the radiation.
LNG and LPG vapor fires burn very cleanly, with little smoke, and thus emit more heat than light petroleum product or crude oil fires. Besides the danger of fire, there is also a danger of explosions if LNG or LPG vapors are ignited in a confined area, such as under a dock. If the attack on a tanker occurred in a congested port area, an explosion could damage infrastructure or harm people located nearby. In addition to potential explosions of confined vapors, a particular type of explosion—called a boiling-liquid-expanding-vapor explosion—can occur on tankers that carry pressurized cargoes, such as some LPG tankers. In these tankers, the individual tanks carrying the LPG may rupture violently if they are compromised by heat or explosion. Since LNG is not transported in pressurized tanks, this type of explosion is not likely to occur. Finally, people who come in contact with spilled refrigerated liquefied gases could be burned due to the cryogenic (freezing) nature of the liquid. LNG and LPG are both transported internationally in refrigerated tankers that keep the gas so cold that it retains a liquid form. A spill of either LNG or LPG could expose people close to the spill to the cold liquid and cause cryogenic burns or frostbite. This is not likely to affect the public, but could affect the crew on the tanker or other people located close to the tanker. LNG and LPG spills pose little threat to the environment because they almost entirely vaporize in a matter of minutes or hours and disperse into the atmosphere. If an LNG or LPG spill were ignited, there could be localized impacts on wildlife near the fire, but few other environmental effects. Spills of light petroleum products, such as gasoline, diesel, and jet fuel, can have both public safety and environmental consequences. Light petroleum products produce flammable vapors when they are spilled. These vapors can be ignited and could result in large, damaging fires.
Further, the vapors could drift away from the site of the spill if not immediately ignited by a source such as an open flame or strong static charge. Once ignited, the fire will travel back through the vapors toward the initial spill site and, if fuel remains, continue to burn near the tanker. Besides the danger of fire, there is also a danger of explosions if light petroleum product vapors are ignited in a confined area, such as under a dock. If the attack on a tanker occurred in a congested port area, an explosion could damage infrastructure or harm people located nearby. Spills of light petroleum products have varying environmental impacts, depending on conditions. Light petroleum products evaporate—almost all of the spill can evaporate in a few hours or up to a day. Consequently, light petroleum products generally do not persist in the environment for long unless the spill is churned by significant wave action. In that case, such products can mix with water and will linger in the environment for much longer periods of time. A 1996 spill highlighted the damage that can occur when a light distillate oil is spilled in heavy wave conditions, resulting in much of the oil mixing with water rather than evaporating. In this case, a tank barge carrying home heating oil was grounded in the middle of a storm near Point Judith, Rhode Island, spilling approximately 20,000 barrels of heating oil. An estimated 80 percent of the release was mixed into the water, with only about 12 percent evaporating and about 10 percent staying on the surface of the water. The spill affected animals and plants living on the sea bed, with an estimated mortality of 9 million lobsters, 19.4 million clams, 7.6 million rock and hermit crabs, and 4.2 million fish. The oil spill resulted in a fishing closure for about 250 square miles in Block Island Sound for a period of 5 months. Spills of crude oil and heavy petroleum products could result in significant environmental consequences. 
Since these types of spills do not readily evaporate, they can linger in the environment. Environmental cleanup of crude oil and heavy petroleum product spills can take several years and in some cases cost billions of dollars. According to ExxonMobil, the company spent $2.2 billion on the Exxon Valdez cleanup. Crude oil and heavy petroleum products can mix with water, particularly in the presence of waves, causing small drops of water to be trapped inside the spilled oil. This is called an emulsion and can hamper cleanup by making the spilled oil difficult to skim off the water. Emulsification also greatly increases the volume of the spill, since the water trapped within the oil has to be removed as well. In addition, residual oils are sometimes more dense than water, allowing them to sink and contaminate bottom sediments. Finally, crude oil and heavy petroleum products can coat birds and marine mammals, both smothering the organisms and exposing them to hypothermia as their feathers and fur lose the ability to insulate. Although crude oil and heavy petroleum products evaporate to some extent, they produce few flammable vapors. For instance, less than half of a crude oil spill and 10 percent of heavy petroleum product spills will evaporate into vapors that could burn or explode. While fire always raises concerns about public safety, the smaller volume of vapors available to burn would result in small fires that are less likely to endanger the public. Although the Exxon Valdez accident demonstrates that even one spill can create substantial environmental cost, an attack that affects only a single tanker is unlikely to have significant consequences on the overall economy, other than a relatively short-term market price increase. One tanker carries a small percentage of the total daily demand for a commodity. As mentioned above, Very Large Crude Carriers typically carry more than 2 million barrels of oil per voyage, which is about 10 percent of U.S. daily oil consumption.
In most cases, the relatively small volume in an individual tanker could be replaced with other imports or from domestic storage. Two examples show the relatively small effect on supply if the broader supply network is not substantially affected: The approximately 240,000 barrels of oil released into Prince William Sound by the Exxon Valdez represented about 20 minutes of total U.S. oil consumption in 1989. The spill's actual disruption was somewhat greater: According to the Department of Energy, the incident actually resulted in an oil supply disruption of 13 million barrels of oil over 13 days, because the spill restricted tanker transport in Prince William Sound and the volume of oil piped from the Alaskan North Slope also had to be reduced. Still, even this 13 million barrel disruption represented only about 18 hours of total national consumption. More recently, an approximately 6,300-barrel oil spill in November 2004 significantly reduced tanker traffic on a stretch of the Delaware River for more than a week. As a result, a nearby refinery had to reduce production of refined products because of reduced crude oil availability. The oil spill also threatened to contaminate the water intake system of a nuclear power plant along the river, which was temporarily shut down. Despite these reductions in energy supply, gasoline prices actually dropped in the days after the oil spill. The loss of a tanker carrying crude oil or heavy petroleum commodities would pose additional economic costs for ship replacement and environmental cleanup. Tankers can cost about $150 million, and the lost cargo could cost over $100 million more. The Delaware River oil spill cleanup cost about $175 million over the course of 1 year. As the $2.2 billion Exxon Valdez spill cleanup illustrates, a larger spill or a spill in a more sensitive ecological zone could cost much more.
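The consumption-equivalent comparisons above can be verified with back-of-the-envelope arithmetic. The sketch below assumes total U.S. oil consumption of roughly 17.3 million barrels per day in 1989 (a figure taken from DOE historical data, not stated in this report):

```python
# Rough check of the consumption-equivalent figures cited above.
# Assumption (not in the report): 1989 U.S. oil consumption of
# about 17.3 million barrels per day, per DOE historical data.
US_CONSUMPTION_BPD = 17_300_000

spill_barrels = 240_000          # Exxon Valdez release into Prince William Sound
disruption_barrels = 13_000_000  # DOE-estimated total supply disruption

spill_minutes = spill_barrels / US_CONSUMPTION_BPD * 24 * 60
disruption_hours = disruption_barrels / US_CONSUMPTION_BPD * 24

print(f"Spill volume  ~ {spill_minutes:.0f} minutes of U.S. consumption")
print(f"Disruption    ~ {disruption_hours:.0f} hours of U.S. consumption")
```

With that assumed consumption figure, the spill works out to about 20 minutes and the broader disruption to about 18 hours of national consumption, matching the report's characterization.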
A much more significant impact could occur if an attack on a tanker resulted in the closure of a port, damage to a key facility, or long interruption of a key transit route. A successful attack while a tanker was docked, for example, could result in damage to a key facility. Even if a port were not closed altogether, the Coast Guard could increase the MARSEC level at one or more ports or industries to MARSEC 3—the highest level. The Coast Guard noted in the Federal Register that MARSEC Level 3 will involve significant restriction of maritime operations that could result in the temporary closure of individual facilities, ports, and waterways, in either a region or the entire nation. Depending on the nature of the specific threat, this highest level of maritime security may have a considerable impact on the stakeholders in the affected ports or maritime areas. Estimating the costs to business and government for even a short period at MARSEC Level 3 is difficult to do with any accuracy or analytical confidence, given the infinite range of threats and scenarios that could trigger MARSEC Level 3. The Coast Guard also noted that the duration of the increased security level at MARSEC Level 3 will depend entirely on the scope of the transportation security incidents or disasters that have already occurred. The Coast Guard expects MARSEC Level 3 to increase the direct costs to businesses attributable to increased personnel or modified operations, and it also expects that the indirect costs to society of the "ripple effects" associated with sustained port closures would greatly outweigh the direct costs to individual businesses. The scale of these effects can perhaps be seen in several hypothetical examples, both international and domestic. Strait of Hormuz.
Each day, tankers transport 20 percent of global daily oil consumption—about 17 million barrels of oil—through the Strait of Hormuz, the narrow waterway that connects the Persian Gulf with the Arabian Sea. While there are some limited alternatives for exporting oil from the Persian Gulf without going through the strait, these alternatives could not entirely make up for the amount of oil lost by closure of the strait. While the United States and other oil-importing countries have reserves of crude oil that they could use to mitigate the loss of supply from the Persian Gulf, oil could not be withdrawn fast enough to entirely make up the lost volumes. For example, while the U.S. Strategic Petroleum Reserve has 688 million barrels of oil, the send-out capacity of the reserves is only 4.4 million barrels per day. Other countries face similar constraints. Additionally, if closure of Hormuz lasted for an extended period of time, strategic reserves could run out or become so low as to be unable to mitigate any additional petroleum supply disruptions. Northeast United States. An attack on a key port in the northeastern United States, such as Boston, could result in energy commodity shortages or price spikes. For instance, the LNG facility near Boston (in Everett, Massachusetts) is the only facility importing liquefied natural gas in the Northeast. LNG is very important to the Northeast during the heating season because natural gas movement into the region is constrained in winter, when existing pipelines to New England are fully utilized. A report prepared by the Power Planning Committee of the New England Governors' Conference, Inc., concluded that if LNG from the Everett facility and satellite operations elsewhere in the region is not available on a peak winter day, the region could have insufficient gas supply to meet the needs of all customers for space heating and some key electric generators.
An attack that damages the Everett LNG facility during a cold winter could result in natural gas shortages or price spikes. LOOP. A loss of import capacity at the LOOP could increase the price of crude oil and refined products. LOOP is a key energy facility—a terminal in the Gulf of Mexico that, according to DOE, accounts for more than 10 percent of total U.S. crude oil imports. LOOP and its storage terminals are connected to more than 50 percent of the refining capacity in the United States. LOOP is also the only facility in the United States that can receive tankers of the ultra-large and very large types. Counteracting the impact of losing LOOP could involve release of oil from the U.S. Strategic Petroleum Reserve and lightering in other U.S. ports. While we did not find any studies on the economic consequences of closures to energy facilities at ports, other broader reviews of port closures identified possible losses in the billions of dollars. One study of the 2002 West Coast port shutdown, an 11-day closure of all West Coast ports due to a labor dispute, used models to estimate the costs of the shutdown from the losses in income to U.S. workers, consumers, and producers, taking into account trade flows, the ability to ship goods, and the inclination of consumers and industries to substitute other available goods. The study found that for a shutdown lasting 4 weeks (longer than the actual 11-day shutdown), total losses to the U.S. economy would be about $4.7 billion, with industrial consumers bearing the majority of that burden. Other studies have attempted to model the economic impact of terrorist attacks on ports. For example, one study examined the potential effects of a 15-day port closure at Los Angeles-Long Beach due to a radiological bomb. It concluded that such a closure would result in regional impacts of $138 million in lost economic output and 1,258 person-years of lost employment.
The study also analyzed the potential effects of a simultaneous attack on key bridges in the port area. The study assumed such an attack would cause a longer port closure and limited truck access to the port for 120 days, and under that scenario, it estimated the national economic impact at $34 billion and 212,000 person-years of employment lost. This analysis did not consider the potential mitigating effects of other modes of transportation for moving goods out of the port (i.e., using rail instead of trucks), or potential trade diversion to other ports during the crisis. Finally, psychological ramifications of an attack could affect prices and supply. Researchers have noted that psychological market reactions to the consequences of an event may cause individuals and firms to change their decision-making processes, potentially causing consequences to ripple outward from the incident itself. If the incident affects key facilities, indirect effects could be magnified to include businesses, both in the port and elsewhere, that are unable to operate because they depend on goods that move through the port. There is also the potential for unemployment at indirectly affected businesses. The movement of gasoline prices after the Exxon Valdez spill is an illustration. Although the actual disruption in supply was relatively small, the oil spill sent shock waves through oil markets, particularly those most dependent on oil from the Alaskan North Slope along the West Coast. In the first week after the oil spill, spot market prices of unleaded regular gasoline increased $0.50, from $0.68 per gallon to $1.18 per gallon, a 74 percent increase due to fears of an extended closure of oil from the Alaskan North Slope. In the following weeks, however, prices began to decrease, hitting $0.99 on April 7 (2 weeks after the spill) and $0.82 on April 14 (3 weeks after the spill). Thus, as markets realized that the supply shortage would be short lived, prices dropped sharply.
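The 74 percent figure cited above follows directly from the two reported spot prices; a minimal arithmetic check:

```python
# Verify the reported percentage change in spot gasoline prices
# after the Exxon Valdez spill ($0.68 -> $1.18 per gallon).
before, after_spike = 0.68, 1.18

increase = after_spike - before         # dollar increase per gallon
pct_increase = increase / before * 100  # percentage over the pre-spill price

print(f"Increase: ${increase:.2f}/gal ({pct_increase:.0f}% over the pre-spill price)")
```

The increase of $0.50 per gallon over a $0.68 base works out to roughly 74 percent, consistent with the figure in the text.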
The Department of Energy concluded in its analysis of the incident that the temporary loss of Alaskan North Slope supplies resulted in a perception of tight oil markets rather than a significant change in fundamental supply and demand factors. Many efforts are under way, both internationally and domestically, to protect energy commodity tankers and their attendant facilities, but significant challenges may limit the effectiveness of these efforts. These challenges are evident in protecting the loading and transit of tanker shipments. In these settings, a broad range of international stakeholders is involved, including IMO, foreign governments, vessel and facility operators, and U.S. government agencies. To help protect the international maritime supply chain, signatory governments are responsible for implementing the requirements of IMO's ISPS Code into law, many facility and vessel operators have taken steps to implement ISPS Code requirements, various industry organizations have reported security conditions in ports around the world to better inform their members, and the U.S. Coast Guard and Navy have also established their presence overseas. Challenges are evident, however, when examining how this framework has been implemented to date. Our limited reviews at foreign facilities showed wide disparity in the quality and extent of security. The Coast Guard is limited in the degree to which it can bring about improvements abroad when security is substandard, in part because its activities are constrained by conditions set by host nations. The Navy takes actions that help to prevent attacks on tankers in transit, but is limited in the areas where it can patrol. In U.S. ports and waterways, a wide array of stakeholders is taking steps to protect arriving vessels, but challenges persist here as well. Key participants include the Coast Guard, CBP, and local law enforcement agencies.
In some locations, however, the Coast Guard has had difficulty meeting its own self-imposed requirements for security activity. The completion of new LNG facilities planned for a number of ports could further strain the Coast Guard's ability to meet current requirements with its existing resources. The ISPS Code lays out the international regime for securing port facilities and commercial vessels. Signatory governments of port and flag states are responsible for ensuring compliance with the ISPS Code at port facilities and vessels under their jurisdiction. Port states enter the compliance status of their facilities directly into an IMO database. While the ISPS Code was adopted under the auspices of IMO, IMO officials told us they have no way of knowing if a country's port facilities are truly in compliance. IMO merely reports information submitted by member governments and does not verify its accuracy. Additionally, there is no other internationally recognized mechanism for third-party review to verify actual compliance at port facilities. Without third-party compliance review, it is extremely difficult to determine if ports are secure against terrorism. Within some countries, the actual security measures can vary greatly from port facility to port facility, as indicated both by our own visits to foreign facilities and our discussions with agency and shipping officials. For example, in one country we visited, we observed varying degrees of implementation of measures to control access at different port facilities. One facility we visited had security cameras, fences, guards checking perimeter security, and identification checks for access control. Here, we were challenged by guards regularly as we passed through gates, even though facility officials were escorting us. At another facility, however, someone came to the guard station only when our escort signaled for him to come over, and fences were collapsed in some places and had holes in others.
Vessel operators we met with also described differences in security at different ports where they load. These operators said they use many sources of intelligence to determine their security stance when entering a port. Some operators said they can call on the knowledge of their own intelligence sources in port states, including contacts with intelligence agencies. Members of Intertanko, an international industry organization, can access its database of port security conditions, a database made up of reports from vessel operators that experience these conditions when they stop at various ports. In this database, operators reported that security conditions at some ports are substantially worse than would be expected for an ISPS Code-compliant facility. In such cases, they reported taking steps that went beyond ISPS requirements, such as keeping ships at security postures beyond those called for by the port state's declared security level. The United States is attempting to deal with facility security lapses and inconsistent security conditions in some overseas ports with overseas efforts of its own. Because of congressional concern over the effectiveness of antiterrorism measures in place at foreign ports, the Coast Guard has implemented the International Port Security Program, which was designed in part to assess and help improve security at foreign ports. This program reviews port states' implementation of port facility security measures using established security standards, particularly the ISPS Code. According to the Coast Guard, the ISPS Code is the benchmark against which the effectiveness of a country's antiterrorism measures will be assessed. The program also reviews the country's implementation of ship security provisions of the ISPS Code to help decide what actions to take in reviewing that country's vessels when they call in U.S. ports.
Visits are conducted by Coast Guard personnel operating out of the Netherlands, Japan, Singapore, and the United States. According to program guidance, the Coast Guard officers making these visits are to exchange information with officials of the host country, visit port facilities, and share best practices. The Coast Guard faces a number of challenges, however, in operating this program. The locations to be visited are negotiated with the host country; thus, the Coast Guard team making the visit could be precluded from seeing locations that are not in compliance. Coast Guard officials said International Port Security Program officers typically make up to three visits to a country, each lasting about a week. Their assessments are thus based on conditions observed when their visits occur. We are currently conducting a separate review of the Coast Guard’s international programs, and the report we issue will include a more complete assessment of the effectiveness of its International Port Security Program. In certain locations, the Navy and Coast Guard have also taken more direct action to protect oil terminals—most notably in Iraq. The Navy has established security zones (zones where unauthorized vessels will be fired upon) around Iraqi oil terminals and stationed warships and patrol boats around the terminals (see fig. 8). The Navy has also stationed security personnel on the terminal platforms. An additional protective measure taken overseas is the effort of State Department (State) officials to help ensure that terrorists cannot gain entry to the United States by working as seafarers on tankers or other vessels. State Department regulations eliminated crew list visas and required all crew members seeking to enter the United States to apply for individual crew visas. These visas are usually presented at U.S. ports of entry, but they can only be obtained abroad.
Applicants must make appointments with State Department officials located at embassies and consulates and be interviewed. They must submit background information, fingerprints, and sufficient documentation to show they are employed by a shipping company. This information is then checked against a State Department database that contains records provided by numerous agencies and includes information on persons with visa refusals, immigration violations, criminal histories, and terrorism concerns. We reported in September 2005 on steps State has taken since September 11, 2001, to improve the visa process as an antiterrorism tool, as well as some of the additional actions that we believed State could take to further strengthen the process. According to the State Department, it has corrective actions under way that it believes will address the recommendations. Many countries help to protect energy commodity tankers by patrolling the sea transit routes. For example, Combined Task Force 150, which as of December 2006 included navies of the United States, Canada, France, Germany, Italy, Pakistan, and the United Kingdom, conducted operations in the Arabian Sea, Gulf of Oman, Gulf of Aden, Indian Ocean, and Red Sea to secure the waterways and prevent piracy and terrorism (see fig. 9). Naval and coast guard forces of Indonesia, Malaysia, and Singapore patrol the Strait of Malacca, a major choke point in the shipment of energy commodities. Improvements in security in the strait led to its removal from a list of areas in which Lloyd’s vessel insurers could raise premiums due to severe security risks. To protect their ships in areas of known danger, tanker operators said they are also modifying their normal practices. For example, tanker operators told us that they have directed their vessels to travel much farther off the shore of Somalia than they ordinarily would.
Near Somalia, the International Maritime Bureau recommended in 2005 that commercial vessels stay 200 miles away from the coast, and the U.S. Maritime Administration and Coast Guard issued similar guidance for U.S.-flagged vessels. In piracy-prone waters, such as the Strait of Malacca, actions include sailing with all lights on, using extra lookouts, and equipping crews with fire hoses to prevent or repel boarders. While these actions have had some success in securing transit routes, the vast areas to be patrolled and the small number of ships available present the military forces of the world with great challenges in protecting the sea lanes. For example, a multinational task force of military vessels that patrols the Arabian Sea, Gulf of Oman, Gulf of Aden, and northwestern Indian Ocean is made up of about 15 ships. The navies of regional countries also patrol near their shores, but in areas such as the Horn of Africa this multinational task force is the only major presence. Because tankers travel so frequently and so few naval ships are available to be on station, naval protection cannot be offered to all vessels that travel these waters. Besides patrolling the waters, authorities can also monitor tankers by tracking their movements. A recently adopted IMO requirement calls for most commercial vessels, including tankers, to begin transmitting identification and location information on or before December 31, 2008, to SOLAS contracting governments under certain specified circumstances. This will allow the vessels to be tracked over the course of their voyages. Under this requirement, information on the ship’s identity, location, and the date and time of the position will be made available to the ship’s flag state, the ship’s destination port state, and any coastal state within 1,000 miles of the ship’s route. For ships approaching the United States, an extensive tracking program is already in place. The Coast Guard currently tracks ships as they approach the U.S.
coastline and is developing programs for longer-range tracking. Domestically, many agencies and other stakeholders have taken steps to develop and implement plans for helping ensure the security of maritime energy commodity shipments. The Coast Guard’s primary challenge is using its limited resources to meet its security workload. Since the terrorist attacks of September 11, 2001, Coast Guard field units have seen a substantial increase in their security workload. Coast Guard field units at some ports have not always been able to meet their maritime security activity requirements. Moreover, the Coast Guard’s resource demands are expected to grow as more facilities for importing LNG come on line, increasing the number of shipments requiring Coast Guard protection. The efforts to provide security over energy commodity shipments arriving at U.S. waterways and port facilities involve a wide range of federal and local agencies as well as owners and operators of the facilities that receive the shipments. Much of the framework for port security is contained in MTSA. DHS, the agency primarily responsible for the homeland security functions contained in MTSA, has assigned most of these responsibilities to the Coast Guard. To carry out this responsibility, as well as the nation’s port state oversight of foreign-flagged vessels, the Coast Guard’s efforts range from boarding ships and escorting those shipments of greatest concern to patrolling port waters and overseeing the security actions undertaken by vessel and facility operators. CBP has the lead role in ensuring that only authorized persons onboard tankers come ashore when calling on U.S. ports and that no contraband is smuggled into the United States using the tankers. MTSA requires regular vulnerability assessments of port facilities, and facility owners and operators are required to develop and regularly update a plan for meeting basic security requirements.
Facility security plans and updates to them are to be reviewed and approved by DHS. Particularly for the Coast Guard, the security activities vary greatly depending on the type of energy commodity being carried by tankers. Two energy commodities, LNG and LPG, are on the list of what the Coast Guard has traditionally called Certain Dangerous Cargo (CDC). Coast Guard guidance requires its field units to take certain actions to protect LNG and LPG tankers in key port areas, which include high-population areas or areas with critical infrastructure, such as bridges or refineries. Beyond protecting LNG and LPG shipments in these key port areas, Coast Guard field units are required to implement security activities commensurate with the extent of critical infrastructure, the extent of high-profile vessel traffic transiting through key port areas, and the availability of support from non-Coast Guard entities, such as state and local law enforcement agencies. According to senior Coast Guard field officials with LNG security responsibilities, LNG tanker transits have received the greatest attention of the two, due in large part to the much greater size of LNG tankers, the amount of hazardous cargo they carry, and the public perception of the danger of LNG shipments. Many of these security measures are now being implemented at existing LNG ports around the country. The security measures address two phases of LNG operations: (1) the transit of an underway tanker through a port and (2) the period when a tanker is moored at a receiving terminal. Coast Guard security activity requirements are less stringent for tankers carrying crude oil or other petroleum-based products, such as gasoline, because these commodities are not identified in the CDC list of hazardous marine cargo as posing the greatest human safety risks.
However, field units do have discretion to take additional actions to protect oil tankers and associated waterside loading facilities that are determined to pose security concerns. At many ports we visited or contacted, Coast Guard field units are receiving assistance from state and local law enforcement agencies in conducting port security operations. These partnerships with state and local law enforcement agencies have been encouraged by Coast Guard headquarters. Coast Guard officials said the support has been particularly valuable in protecting LNG carriers. For example, field units at two of the four ports with onshore LNG importing facilities reported using regular escort support from state or local law enforcement agencies. In addition to state and local law enforcement agencies, facility operators play a significant role in protecting against terrorist threats. For those key energy ports we visited, the Coast Guard reported that the waterfront energy facilities in those ports were taking actions to comply with the requirements the Coast Guard established pursuant to MTSA. All 19 of the domestic waterside petroleum facilities we visited were reported by the Coast Guard to be in compliance with MTSA regulations. Examples of steps taken include key-card access systems, closed-circuit television cameras and sensors along fencing, hardened perimeter fencing, and reinforced gates at most access control points. Facility operators told us they conduct regular security drills involving emergency and terrorism scenarios and that they regularly share pertinent security information with other participants in the Area Maritime Security Committees. In some cases, we observed steps that go beyond MTSA requirements, such as using radio frequency identification cards that can track the location of all persons on facility property.
Coast Guard records show that its field units in several of the energy-related ports we reviewed have been unable to accomplish many of the port security responsibilities called for in Coast Guard guidance. According to the data we obtained and our discussions with field unit officials, resource shortfalls were the primary reasons for not meeting these responsibilities. We have noted in earlier work that the Coast Guard is ahead of many agencies in the degree to which it has developed a sound framework for managing its workload on the basis of risk. When carried out effectively, risk management offers a way to make informed decisions about how best to use limited resources. In the Coast Guard’s case, its actions involve a balancing act, not only in deciding how best to meet its various security and nonsecurity missions agencywide but also in weighing the pros and cons of investing additional resources in energy commodity tanker protection versus the wider range of other port activities that require protection. The Coast Guard uses the requirements laid out in its guidance to establish a port-specific security approach in which the workload varies based on such factors as the proximity of population centers to the port area, the extent of critical infrastructure at the port, the extent of high-profile vessel traffic transiting through key port areas, and the availability of support from other entities. Given that the resource levels of some field units have limited their ability to achieve Coast Guard security standards, the Coast Guard has attempted to realign its security requirements to more closely match available resource levels. Coast Guard headquarters officials meet on an annual basis to review new risk assessments and current Coast Guard capacity to mitigate risk. The Coast Guard also receives recommendations from field unit commanders for introducing tactical efficiencies into security requirements.
Over the past several years, the Coast Guard has revised its operational security guidance in two main ways: Revising the standards for the amount of activity required for conducting some security activities. In August 2006 the Coast Guard substantially reduced the types of CDC-carrying vessels that must be escorted. The Coast Guard developed a subset list of the CDC commodities—called Especially Hazardous Cargo—that it determined posed the greatest safety and security risks. This list included both LNG and LPG, meaning that the activities required to protect them remain unchanged. However, for CDC commodities not included on the Especially Hazardous Cargo list, such as vinyl chloride, escort requirements were eliminated during normal threat conditions (MARSEC I). In all, requirements were reduced for about 20 different CDC commodities carried in bulk. The August 2006 list of Especially Hazardous Cargo consisted of seven hazardous liquid gas or liquid commodities: acrylonitrile, ammonium nitrate, ammonium nitrate/fuel oil, anhydrous ammonia, chlorine, LNG, and LPG. Providing greater operational flexibility for Area Commanders when resource constraints may limit the ability to meet requirements. The Coast Guard has introduced new tactical options that Area Commanders may use, in some cases, to accomplish resource-intensive security activities. The Coast Guard’s methodology for developing the Especially Hazardous Cargo list has two substantial shortcomings, however. Our specific concerns are as follows: Lack of thoroughness. To identify the highest-risk CDC commodities, senior Coast Guard headquarters officials told us they reviewed available consequence analysis assessments that had been conducted by the Coast Guard’s Special Technical Assessment Program and also reviewed a 2004 consequence analysis of LNG by Sandia National Laboratories. They said they also incorporated the views of persons with expertise in CDC commodities, including Coast Guard field officials.
However, the Coast Guard had not performed consequence assessments on many CDC commodities by the time it created the Especially Hazardous Cargo list, and as of January 1, 2007, it still had not done so. No systematic comparative analysis was conducted to identify and prioritize the highest-consequence commodities. Coast Guard headquarters officials acknowledged they did not conduct a relative risk assessment of the CDC commodities. Rather, officials told us they relied on the collective best judgment of Coast Guard experts from field units and headquarters who had significant experience dealing with various transportable energy and chemical commodities. By conducting a relative risk analysis of all CDC commodities, the Coast Guard would have had more definitive input for determining which CDC vessels posed the greatest risks necessitating additional mitigation measures, which in this case would be an escort. The Coast Guard is taking action to address the methodological limitations we noted. Shortly after the Coast Guard released the Especially Hazardous Cargo list, we shared our concerns with Coast Guard officials. The Coast Guard has since begun efforts to broaden its studies of potential consequences to include a wide range of hazardous commodities. It contracted with the American Bureau of Shipping to perform a comparative analysis of the consequences of an attack on vessels carrying all commodities on the CDC list, including LNG and LPG. The product of this analysis is to be a ranking of the relative consequences of each of the CDC commodities. This study is scheduled to be completed in spring 2007. Coast Guard headquarters officials told us that following this analysis, and subject to available funding and other considerations, they may consider adding other commodities to the comparative analysis, such as gasoline and jet fuel.
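A comparative consequence analysis of the kind described above is, in essence, a scoring and ranking exercise. The minimal sketch below illustrates one way such a ranking could be computed. The commodity names are taken from the August 2006 Especially Hazardous Cargo list cited earlier, but the per-commodity scores, the 0-10 scales, and the weighting are invented for illustration only; they do not reflect the American Bureau of Shipping’s actual analysis or methodology.

```python
# Illustrative sketch of a relative consequence ranking for CDC commodities.
# All numbers below are hypothetical; only the commodity names come from
# the Especially Hazardous Cargo list described in this report.

# Per-commodity consequence scores (0-10) on three dimensions the report
# mentions for risk scenarios: human health, economic, and environmental.
HYPOTHETICAL_SCORES = {
    "LNG":                       {"human": 9, "economic": 8, "environmental": 4},
    "LPG":                       {"human": 8, "economic": 7, "environmental": 4},
    "chlorine":                  {"human": 9, "economic": 5, "environmental": 6},
    "anhydrous ammonia":         {"human": 7, "economic": 4, "environmental": 5},
    "ammonium nitrate":          {"human": 6, "economic": 5, "environmental": 3},
    "ammonium nitrate/fuel oil": {"human": 7, "economic": 5, "environmental": 3},
    "acrylonitrile":             {"human": 5, "economic": 3, "environmental": 6},
}

def overall_consequence(scores, weights=(0.5, 0.3, 0.2)):
    """Weighted aggregate of human, economic, and environmental consequence."""
    w_human, w_econ, w_env = weights
    return (w_human * scores["human"]
            + w_econ * scores["economic"]
            + w_env * scores["environmental"])

def rank_commodities(score_table):
    """Return commodity names ordered from highest to lowest aggregate consequence."""
    return sorted(score_table,
                  key=lambda name: overall_consequence(score_table[name]),
                  reverse=True)

for name in rank_commodities(HYPOTHETICAL_SCORES):
    print(f"{name}: {overall_consequence(HYPOTHETICAL_SCORES[name]):.1f}")
```

The point of such a computation is not the particular weights chosen but that every commodity is scored on the same dimensions and aggregated the same way, giving decision makers a defensible, comparable basis for deciding which vessels warrant escorts.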
Going beyond the consequence analyses of hazardous commodities, the Coast Guard has also developed a tool to compare the overall relative risk scores of different terrorist attacks at the nation’s ports. Field units are developing risk scenarios for potential targets at their ports and possible attack types that could be used against those targets. Using the Maritime Security Risk Assessment Model, the units are to analyze the different risk scenarios in relation to three key elements of risk: the reported threat of different types of attack, the vulnerability of the targets (incorporating the different protective actions taken by security stakeholders), and the consequences of a successful attack (including human health, economic, and environmental impacts). Each risk scenario is to receive a score. These risk scores are to be comparable within and between ports so that they can be used in risk management decisions both locally and nationally. In the longer term, plans for additional LNG facilities may require the Coast Guard to reassess its workload yet again. Currently the Coast Guard is faced with providing security for vessels arriving at four domestic onshore LNG import facilities, but the number of LNG tankers bringing shipments to these facilities will increase considerably because of expansions that are planned or under way. In addition, industry analysts expect approximately 12 more LNG facilities to be built over the next decade (see fig. 12). Consequently, Coast Guard field units will likely need to significantly expand their workloads to conduct new LNG security missions. Recognizing this coming increase in demand on security resources at LNG ports, Coast Guard field units have been planning strategies to help meet this demand. We found evidence that, in their planning efforts, Coast Guard field units and affected locations are seeking assistance from a wide range of stakeholders and sources.
In particular, stakeholders mentioned the following: Manpower from state and local law enforcement. Several field units plan to rely on state and local agencies to conduct a considerable share of the new LNG workloads. While state and local law enforcement agencies have generally agreed to participate in LNG security operations, such support was largely contingent upon their receiving funding to cover their own resource gaps. According to the Coast Guard, at some ports, law enforcement agencies required funding to cover new capital investments, such as additional patrol boats, as well as operational costs such as funding for additional manpower or fuel for the new boats. Financial help from facility operators. At some of the proposed LNG ports we reviewed, facility operators were also planning to contribute considerable financial resources to help fund new LNG security operations. In doing so, these companies planned to fund both operational and capital enhancement costs for state and local law enforcement agencies that had agreed in concept to support Coast Guard LNG security missions. At two ports where the Coast Guard had approved security arrangements for new LNG facilities, state and local law enforcement agencies had already developed, or were planning to develop, a cost-sharing agreement with the facilities. For example, at one port, a potential LNG facility operator made a commitment to fund most of the capital enhancements and operational costs of the state and local law enforcement agencies involved, including two patrol boats for state agencies, two tugboats, and communications equipment. Facility operators told us they were motivated to provide resources because they understood that doing so was essential to ensuring final approval of the LNG facilities. 
Some facility operators also told us that the Energy Policy Act of 2005 required them to develop resource cost-sharing agreements to offset state and local government resources used specifically for the new LNG facilities. Financial help through federal grants. State and local law enforcement agencies also reported that they were relying, in part, on federal grants to obtain additional resources. Of the 15 state and local law enforcement agencies we contacted, 9 reported applying for Port Security Grants or Urban Area Security Initiative grants. Law enforcement agency officials told us they planned to fund capital enhancements with this grant funding. Among the items officials planned to fund with their grants were new patrol boats, construction of a new boathouse and piers, helicopters, and security cameras to be placed along an LNG transit route. While port security grants and resource-sharing agreements are expected to address at least part of the resource needs of the Coast Guard’s law enforcement partners, the Coast Guard is likely to require additional resources to fulfill its own new security responsibilities. To date, however, field units have made little progress in obtaining additional resources. Additionally, because federal law prohibits the Coast Guard from receiving resources for its own use from private sector companies, the Coast Guard cannot use resource-sharing partnerships to help fill its own resource needs. Consequently, Coast Guard headquarters officials told us they recognize that despite the efforts of Captains of the Port to develop local solutions to new security demands, some field units will continue to lack the resources necessary to meet their increasing LNG security workloads. Coast Guard headquarters officials told us they were considering two general options to provide field units with the necessary resources to carry out their new LNG security workloads.
These two options are as follows: Redistribute resources to units with new LNG activity. Coast Guard officials told us they are considering shifting resources from ports with surplus resources to ports with new or expanded LNG facilities. Coast Guard headquarters officials told us, however, that they have not yet determined which ports would, or even could, provide these excess resources. The Coast Guard’s Atlantic Area—where most of the new LNG activity is expected—has ordered districts and field units to report any excess resource capacity. Guided by risk management, Coast Guard headquarters may redistribute any available excess capacity to ports with new LNG security workloads. The earliest that the Coast Guard could reprogram assets from within the Atlantic Area is fiscal year 2009. Request new resources via budget proposals. Coast Guard officials also reported that they may request additional funding through the annual budget process to support the acquisition of additional boats and personnel to conduct vessel escorts and infrastructure patrols, as well as the training of those additional personnel. As of January 1, 2007, Coast Guard headquarters officials told us they had not yet developed a plan—or blueprint—for how to proceed with these two options for addressing new LNG security resource demands. The decisions about how to proceed may involve difficult choices, because shifting resources to this growing need could involve trimming resources now tasked to other homeland security duties or traditional non-homeland security missions, and because seeking more resources involves asking Coast Guard decision makers to weigh important, but competing, priorities. A national plan that identifies the Coast Guard’s nationwide LNG resource needs and establishes milestones and funding requirements for meeting those needs can help the Coast Guard manage its limited resources and communicate resource needs to Congress.
It is important to complete this plan and address key elements and issues in it so that it is both comprehensive and useful to decision makers who must make difficult policy and budget choices. To mitigate the consequences of a terrorist attack on a tanker carrying energy commodities, the United States has multiple plans that address actions to be taken at the national, port, facility, and vessel levels. To translate these plans into effective response actions, stakeholders could face at least three main challenges. First, if an attack were to occur, the stakeholders would need to integrate current, separate plans for the two types of responses necessary for mitigating the consequences of an attack—spill response and terrorism response. Second, stakeholders may need port-level plans to mitigate the potentially substantial economic consequences of an attack, such as plans that set priorities for the movement of vessels after a port reopens. Third, stakeholders may need to obtain resources to ensure that they can carry out the plans. At the port level, this challenge may extend to response equipment, training, and communications equipment. To date, federal grants for port security have been directed mostly to prevention rather than response, but DHS is now moving toward a more comprehensive risk-based decision-making process for allocating grant funds. At the time of our review, DHS did not have performance measures for determining how to allocate resources to ensure ports can effectively respond to an energy commodity spill caused by terrorism. The planning framework for responding to spills and terrorism incidents is extensive, involving multiple federal plans and memorandums of understanding, port-specific plans, and plans for individual facilities and vessels.
As figure 13 shows, at the national level these plans are carried out under the general framework of the National Response Plan (NRP) but are developed into two separate lines of effort—one for spill response, the other for terrorism response. The NRP designates the Coast Guard as the primary agency for spill response on water and the FBI as the primary agency for terrorism response, and it calls on the two agencies to coordinate their responses if a terrorist attack involves energy commodities. For this type of incident, FBI officials stated, crime scene investigation and preservation would take place at the same time as the environmental response activities that would be initiated to contain the likely spill. In this situation, the NRP notes that spill responders will provide assistance, investigative support, and intelligence analysis for oil and hazardous materials response in coordination with the law enforcement and criminal investigation activities of the FBI. As the figure shows, beneath the NRP, spill responses are coordinated under the National Oil and Hazardous Substances Pollution Contingency Plan (NCP), while terrorism responses are coordinated under the Terrorism Incident Law Enforcement and Investigation Annex. Also at the federal level, various other plans and agreements, such as the National Incident Management System (NIMS), the Maritime Operational Threat Response Plan (MOTR), and interagency memorandums of agreement, help guide the response. The spill and terrorism responses continue into port-level planning, where the key guidance for spill responses is found in a port’s Area Contingency Plan (ACP) and the key guidance for terrorism responses is found in the port’s Area Maritime Security Plan (AMSP). Table 3 provides a brief description of the various plans and agreements shown in figure 13.
At the federal level, in addition to the plans and agreements governing spill and terrorism responses in table 3, other guidance and requirements related to economic recovery include the following: The Maritime Infrastructure Recovery Plan (MIRP)—a supporting plan for the National Strategy for Maritime Security—contains procedures for managing the economic consequences and recovery of maritime infrastructure after a transportation security incident, such as a terrorist attack. The MIRP provides strategic-level guidance for national, regional, and local decision makers to set priorities for restoring the flow of domestic cargo. The plan recommends that the Captain of the Port consider key shipping channels and waterways for homeland security, military traffic, and commercial operations; key landside transportation infrastructure, such as tunnels and bridges; and other infrastructure key to maintaining continuity of operations in the port. The SAFE Port Act of 2006 requires the Secretary of Homeland Security to develop protocols for the resumption of trade after a transportation security incident, such as a terrorist attack. The protocols must include a plan to redeploy resources and personnel as necessary to reestablish the flow of trade, as well as appropriate factors for prioritizing vessels and cargo that are critical for response and recovery, including factors related to public health, national security, and economic need. At the port level, under the Oil Pollution Act of 1990 and the Maritime Transportation Security Act of 2002, the Captain of the Port is to establish both spill and terrorism response plans. In doing so, the Captain of the Port must identify local public and private port stakeholders who will develop and revise separate plans for marine spills of oil and hazardous materials (the ACP) and for terrorism response (the AMSP).
Both plans call for coordinated implementation with other plans, such as the response and security plans developed by specific facilities or vessels. Local stakeholders are organized into two separate groups: an area committee for spill response (Area Committee), which develops the ACP, and an area committee for terrorism response (Area Maritime Security Committee), which develops the AMSP—both committees are chaired by the Captain of the Port. Some stakeholders, such as port authorities, fire departments, and facilities in the port, may be part of both committees, while others may be part of only one. For example, oil spill response organizations are likely to be involved only with spill response planning. If an energy commodity tanker were attacked while moving through a U.S. port or while docked, a range of response activities would need to occur to address the consequences. Figure 14 illustrates how incident response would potentially take place following an attack and a subsequent spill. As figure 14 shows, incident response includes three separate but overlapping activities, as reported by port stakeholders: Initial incident response for public safety and establishment of the incident command site. Because energy commodity tankers carry flammable and/or hazardous materials, the first responders are likely to be area fire and police departments; receiving facility personnel may also respond. The first concern is always public safety, and therefore the fire department would begin rescuing victims and addressing the probable fire. Law enforcement agencies would secure the perimeter of the scene to prevent potential follow-on attacks and to keep the public from moving too close to the attack location—both to protect the public and to maintain the crime scene for subsequent investigation.
Initial responders would also establish a multi-agency incident command site near the location of the vessel, where all responding agencies with jurisdictional responsibilities for spill and terrorism response would congregate to manage the operations.

Crime scene preservation and investigation, and initial spill response activities. As public safety operations continue, law enforcement agencies would determine whether terrorism had caused the spill, and if so, would conduct an investigation at the same time that life safety operations are continuing and spill response operations are beginning. Investigations would involve crime scene and perimeter control, determining if additional devices may be present and disposing of them, and apprehending suspects. Spill operations would initially involve the laying of a containment boom to protect the surrounding environment from contamination caused by the spill. Law enforcement and spill response organizations will need to coordinate their activities because actions to mitigate environmental consequences can potentially damage crime scene evidence.

Spill and port recovery activities. Once the resulting spill is contained, incident commanders would determine their next steps, depending on conditions. Spill recovery may include intentionally burning contained oil, allowing the commodity to evaporate, using chemicals to disperse the spill, or using mechanical recovery to skim the oil out of the water. If a terrorist attack had occurred, the crime scene investigation would have to be conducted before the port could be fully restored for cargo and passenger ships. According to FBI officials, the FBI would work with the Coast Guard to get access to the incident site as soon as possible to obtain all crime scene evidence possible, without interfering with the response. These complex activities would be carried out by many different federal, state, and local agencies.
Figure 15 illustrates one possible scenario for spill and terrorism response actions and shows some of the agencies that might carry out these actions. In the event of a terrorist attack on an energy commodity tanker, federal agencies and port communities could face challenges in integrating their spill and terrorism response plans. Ports could face two additional challenges: planning for economic response activities and obtaining the necessary resources to respond to a terrorist attack on an energy commodity tanker. As we have noted in prior reports, a fundamental goal of emergency preparation and response is the ability to respond to emergency incidents of any size or cause with well-planned, well-coordinated, and effective efforts that reduce the loss of life and property and set the stage for recovery. In our September 2006 report on the preparation for and response to Hurricane Katrina, we stated that fundamental to effective preparation and response are (1) clearly defined, clearly communicated, and clearly understood legal authorities, responsibilities, and roles at the federal, state, and local level, and (2) identification and development of the capabilities needed to mount a well-coordinated, effective response to reduce the loss of life and property and set the stage for recovery. Providing these fundamentals requires effective planning and coordination, including detailed operational plans, and robust training and exercises in which needed capabilities are realistically tested, assessed, and problems identified and addressed. With regard to potential attacks on energy commodity tankers in U.S. ports, the ports could face challenges if roles and responsibilities have not been clearly defined, communicated, and understood and if needed capabilities have not been fully identified and appropriately tested. 
The National Preparedness Goal uses 15 scenarios to identify 37 capabilities and the associated critical tasks needed to respond to incidents of national significance—those that go beyond the state and local levels and require a coordinated federal response. However, the scenarios used to identify these capabilities do not specifically encompass the capabilities needed for responding to attacks on oil, gas, or other tankers in American ports. The NRP calls upon the Coast Guard and the FBI to coordinate their response in the event of a terrorist attack on an oil or hazardous materials tanker. However, the agencies cannot be assured that their joint response, concurrently implementing the numerous existing plans, will be effective unless they have developed a detailed operational plan that integrates their spill and terrorism responses and have tested these responses in joint exercises. According to headquarters and field office Coast Guard and FBI officials, coordination would be managed through the use of the unified command structure in the National Incident Management System and the other general coordination mechanisms in the NRP and the MOTR. However, the unified command structure and the NRP are generally not specific in explaining how they will be made operational following an attack. As we have recently reported, the implementation of the NRP following Hurricane Katrina identified concerns with coordination within and between federal government entities using the plan. We recommended the development of detailed operational plans for the NRP and its annexes. In addition to having operational plans, agencies should conduct joint exercises that simulate an attack and the agencies’ responses. Without such exercises, it would be questionable whether joint Coast Guard and FBI activities would proceed as planned. 
Simulation exercises help determine the strengths and weaknesses of various plans and the ability of multiple agencies or communities to respond to an emergency incident. According to DHS’s Homeland Security Exercise and Evaluation Program, well-designed and executed exercises are the most effective means of (1) testing and validating policies, plans, procedures, training, equipment, and interagency agreements; (2) clarifying and training personnel in roles and responsibilities; (3) improving interagency coordination and communications; (4) identifying gaps in resources; (5) improving individual performance; and (6) identifying opportunities for improvement. The value of joint simulation exercises in uncovering problems has been demonstrated in the results of the largest national, state, and local interagency terrorism response exercise ever conducted. This exercise—called TOPOFF 3—was conducted in April 2005 and included explosions and hazardous materials releases in multiple locations around the nation (none of which were on the water). According to the Coast Guard after-action report for one of the sites, the FBI (1) never fully integrated into and accepted the unified command called for under NIMS, (2) did not appropriately staff the incident command post with its representatives, (3) maintained distinctions between hazardous materials release response and terrorism investigation actions, and (4) kept management of the investigation separate from the incident management overseen by the unified command. According to the after-action report, “concurrent management of both the investigation and all other response functions would have increased the effectiveness and efficiency of the response effort.” The report also recommended the continuation of multiagency training and exercises to test interagency coordination efforts. The need for joint spill and terrorism response exercises has been discussed at the national level, but such exercises have not been conducted.
Specifically, planning discussions for the 2004 Spill of National Significance (SONS) exercise identified the need to clarify how the FBI fits into spill response activities when the possibility of terrorism is present, but the exercise did not test integrating the FBI’s and other agencies’ responses. However, both Coast Guard guidance and the Department of Justice’s Inspector General have supported the need to combine spill and terrorism response exercises. Specifically:

Coast Guard guidance recommends combining terrorism response exercises with other exercises, such as spill response. OPA 90 and MTSA implementing regulations require similar schedules for exercises of spill and terrorism response plans, and integrating these exercises could improve response performance and fulfill multiple required response exercise mandates at one time, according to Coast Guard officials.

The Department of Justice’s Inspector General in 2006 called for more joint exercises between the Coast Guard and the FBI in high-risk ports to, among other things, resolve potential role and incident command conflicts in the event of a maritime terrorism incident. The Inspector General’s report emphasized the interaction of Coast Guard and FBI security units, but these recommendations are equally applicable to integrated exercises to respond to a spill caused by a terrorist attack.

Once public safety is addressed, the Coast Guard and the FBI have different priorities reflecting their jurisdictional responsibilities—spill containment and cleanup and crime scene preservation and investigation, respectively. At the time of our review, FBI officials told us they knew of no upcoming planned joint exercises. FBI headquarters officials have not issued guidance to field office agents on integrating spill and terrorism response activities within a single exercise. Coast Guard officials told us that the MOTR is intended to delineate Coast Guard and FBI roles in responding to an attack.
FBI headquarters officials told us that their participation in several MOTR conference calls demonstrated that coordination among MOTR agencies is effective. These telephone discussions may improve overall coordination, but exercises for joint spill and terrorism responses should be conducted as often as appropriate. At the port level, effectively integrating spill and terrorism emergency responses requires all plans to operate in unison—the port spill response plan (ACP) and the port terrorism response plan (AMSP), as well as facility and vessel response plans. As figure 13 shows, there is no direct operational link between the ACP and the AMSP. Without a direct link, spill responders may not have the information they need to respond to a spill caused by a terrorist attack. While the AMSP has served as the terrorism response plan for ports since July 2004, it contains sensitive security information and is therefore available only to those individuals who are considered to have a “need to know.” As a result, nonsecurity personnel, such as oil spill cleanup responders, may not have access to these plans during an emergency. For example, only 3 of the 13 ports we visited had ACPs that addressed terrorism response within the spill plan by incorporating terrorism incident annexes or other plans. Consequently, the ACPs may need to have explicit sections for responding to terrorism. The general lack of integration in the plans carries over to the separate spill and terrorism response communities at the port level. As previously discussed, individual members on these committees may not know all the members of the other committee, but a terrorist attack on a tanker would require them to respond simultaneously. We identified only a few examples of joint committee meetings that enabled members to interact.
For example, Coast Guard officials told us that, since September 11, 2001, the Captain of the Port at one location has facilitated meetings between spill response providers and local offices of emergency management and federal and local law enforcement agencies in order to improve response coordination among all entities. They stated that if the spill and terrorism response communities were formally joined, response integration and efficiency would improve. In addition, at another location, Coast Guard officials noted, the local area training and exercise workgroup contains members of both the spill and terrorism response committees in order to consolidate training and exercises. Finally, in an attempt to improve communication, the FBI established Maritime Liaison Agents (MLA) at the ports so that all stakeholders would know the local agent in the event of an incident. At some ports we visited, spill responders knew who the local FBI agent was; at other ports, they said they did not. USCG guidance states that local port operators, municipalities, and public safety agencies are expected to provide and maintain adequate disaster response capabilities in their ports, with capability requirements likely to vary from port to port depending on size, commodities received, environmental considerations, proximity to populated areas, and other factors. Recognizing the variability of capability requirements, the USCG has developed Critical Success Factors (CSF) for spill response that drive a “Best Possible Response”—that is, a set of general goals to achieve when conducting a comprehensive and effective response. Six particular CSFs are to be considered when developing ACPs: (1) no public or responder injuries, illnesses, or deaths; (2) sensitive areas protected; (3) resource damage minimized; (4) infrastructure damage minimized; (5) economic impact minimized; and (6) highly coordinated law enforcement and emergency management operations.
Joint exercises can maximize the ability of a given port to carry out a “best response” in the event of an attack on a tanker. However, we recognize that numerous scenarios could be exercised in any given port; consequently, joint spill and terrorism response exercises may not be the most urgent for a port that receives limited quantities of energy commodities. Figure 16 shows firefighters preparing for a potential marine response during a training exercise. Two developments—one a project at an individual port, the other a new requirement added by Congress—may help bring about more integrated responses. Specifically:

At one port, we found a potential leading practice for integrating a marine terrorism response. The port’s Marine Terrorism Response (MTR) project was launched to develop and validate a multiagency response system and national model plan to help mobilize local, state, and federal resources for marine terrorism incidents. The MTR’s goals include increasing preparedness, identifying gaps in emergency response capabilities, and planning for timely restoration of trade. The project generated a response plan and a field guide for how to integrate responses for a range of issues, such as public safety, response coordination, recovery, and crime scene management. Stakeholders plan to incorporate existing response plans, such as the ACP, as annexes to the MTR. According to the FBI official involved with the MTR planning process, the MTR serves as an effective linkage between the spill and terrorism response sections of the National Response Plan.

Under the SAFE Port Act of 2006, DHS must develop interagency operational centers by fall 2009 for port security at all high-priority ports. The Coast Guard and the FBI are among the agencies that will be represented at these operational centers, as will other public and private sector stakeholders who would be adversely affected by a terrorist attack.
These centers may also include stakeholders who would be involved in a joint spill and terrorism response. Integration may be improved through the daily interaction of all these stakeholders. In April 2006 testimony before the House Homeland Security Committee, DHS’s Deputy Secretary stated that physically connecting the various agencies involved is important, and the Port of New York and New Jersey’s Manager of Port Security voiced support for the development of joint operation centers in key U.S. ports. The economic consequences of a terrorist attack on a tanker could be significant, particularly if one or more ports are closed. Currently, guidance in the Maritime Infrastructure Recovery Plan suggests that ports develop priorities for bringing vessels into port after a closure. Additionally, AMSPs must include a section on crisis management and recovery to ensure the continuity of port operations. At the time of our review, there was no national-level guidance for use by local ports. We identified some ports that, on their own initiative, were incorporating economic recovery considerations into their port-level plans, which could benefit other ports seeking to develop their own plans for mitigating the economic consequences of an attack. The SAFE Port Act requires the Secretary of Homeland Security to develop protocols for how maritime trade will be reestablished after a terrorist attack. These protocols must include appropriate factors—related to public health, national security, and economic need—that can be used to set priorities for vessels and cargo entering the port after a closure. While the act does not expressly require the development of port-level plans for facilitating the resumption of trade after an incident, DHS could consider developing guidance for ports to use to develop plans for mitigating economic consequences. 
Ports could face challenges in marshaling resources to improve port response capabilities, including obtaining or sharing needed marine firefighting equipment and training, other training, and interoperable communication systems that allow emergency responders to talk to each other to effectively coordinate their efforts. The ports we visited varied considerably in their ability to combat marine fires. Some ports had large fireboats that are designed to deal with fires on tankers, as well as firefighters trained to conduct shipboard firefighting operations. In contrast, other energy commodity ports relied on land-based firefighting companies; these companies told us that they did not have the training and/or the equipment to fight marine fires. See figure 17 for two examples of marine firefighting response. While some local ports may not be well equipped to handle marine fires, companies operating tankers are required to provide for marine firefighting and salvage capabilities under the Oil Pollution Act of 1990. However, we identified several limitations associated with these requirements:

Timeliness of response not spelled out. OPA 90 does not specify how soon after an event either marine firefighting or salvage must occur. Under a Coast Guard rule proposed in 2002, and not yet issued as final, contracted marine firefighting resources generally would have to be provided within 8 hours after notification of an event, while salvage operations generally would have to begin within 16 hours. Even if this rule were in force, it might not be timely enough to prevent the vessel from sinking.

Extent of planning for salvage varies widely. Salvage is important for marine firefighting because a ship may sink from an attack, may be deliberately sunk to control the resulting fire, or may be accidentally sunk by firefighters who are unfamiliar with the ship stability issues inherent in the marine firefighting environment.
In addition to the OPA 90 requirement, the SAFE Port Act of 2006 requires the development of salvage response plans to supplement Area Maritime Security Plans. While all ACPs for the ports we visited contain sections on salvage, we found that the plans varied widely in detailing salvage responses. A 2003 National Transportation Safety Board workshop identified potential shortfalls in local salvage planning and/or capabilities as an issue that needed to be addressed. One reason identified for capability shortfalls was that locally available salvage resources may sometimes be lacking. For ports that lack marine firefighting or salvage capabilities, we identified other avenues for obtaining resources to enhance these capabilities. However, these avenues carry limitations, mainly related to the speed with which resources could be deployed on site.

Mutual aid agreements. Some port community members have mutual aid agreements in place to provide assistance in emergencies. These agreements can be industry-to-industry, municipal-to-municipal, industry-to-municipal, or municipal-to-industry. However, these agreements can have inherent delays in response time if needed resources are located some distance away or require considerable time for redeployment. For example, one refinery noted in the ship fire procedures section of its site emergency manual that responders must evaluate whether to call the local fire department and request fireboat assistance, because that resource would take 45 minutes to arrive. If the refinery needs to call for additional assistance from a nearby fire department’s fireboats, the delay could be several hours, according to state fire officials.

National Oil Spill Response Resource Inventory. Each Coast Guard Captain of the Port has emergency contracting authority to obtain needed resources.
The National Strike Force’s Response Resource Inventory lists public and private organizations that can provide these needed spill response resources. The Coast Guard is to review these organizations’ resources at least every 3 years to keep an up-to-date resource list. Again, in some cases there would be a delay in getting these needed resources to the incident location. In addition to the differences in the availability of marine firefighting equipment, we found that access to marine firefighting training, which is highly specialized and different from land-based firefighting, can be limited because of distance from a training center or lack of resources. While a range of locations provide firefighter response training for energy commodity fires in the marine environment, these facilities are limited and are sometimes not located near a firefighting response organization that is seeking this training. Some local emergency responders told us they have not received shipboard firefighting training, which is even more specialized than general marine firefighting, and many of the responders we contacted identified the need for additional training. At one port we visited, fire department officials stated that the firefighters had not received this training but would board a burning vessel. See figure 18 for an example of firefighters training to combat an aviation fuel fire. We also found differences in training for federally established procedures outlining coordination—known as the incident command system (ICS)—for responding to any incident, including terrorism. Some emergency responders identified a lack of experience and training on this system as a potential concern for effectively coordinating and leading a response to an attack. The Coast Guard and fire departments are familiar with ICS because they were using it before September 11, 2001, but law enforcement does not have equivalent experience with it.
At the ports we visited, the local Coast Guard and firefighting responders identified themselves as generally compliant with ICS training requirements. Although the FBI would have jurisdictional responsibility for leading the multiagency response to a terrorist attack on a tanker, FBI personnel did not have to comply with ICS training requirements until December 31, 2006. At the ports we visited, officials identified the lack of fully interoperable communications as an ongoing issue, as did many of the after-action reports we reviewed. Spill and terrorism responders may have difficulty coordinating their emergency response if their communications systems are not interoperable—that is, one agency’s equipment may not be able to communicate with another’s. For example, according to local emergency planners, during one port exercise in 2006 the responders used their cell phones because of interoperability problems. This workaround may be adequate during an exercise, an FBI official noted, but responders may not be able to rely on the cell phone communications network during an actual event. While interoperability is a problem for emergency responders throughout the nation, responders in the marine environment face additional challenges. These include the need for additional equipment on or near ships so that radio signals can get through to the ship’s hold, as well as marine band radios for operating on water. Response organizations have some options to work around the problem of interoperability. For example, the FBI can use a range of equipment to coordinate the signals of all the various responding agencies’ communications equipment, but it takes some time to make this equipment operational because the equipment has to be brought to the site, and each responding organization has to provide a radio to the same location for the workaround system to function. 
The Coast Guard also has communications equipment for interoperability stored in locations around the nation, but again, there would be a delay in getting this equipment to the site of an incident. For ports that may be facing resource shortfalls, finding ways to pay for improvements and enhancements is an issue. One potential funding source is DHS’s Port Security Grant Program. In the past, most DHS grants awarded to ports were for terrorism prevention and detection projects (such as fences, cameras, and security systems), rather than for response and recovery projects, according to DHS officials. For some states that contain ports we visited, officials who oversee grant resource distribution also told us that only a limited number of post-incident response project applications, such as marine firefighting assets or shipboard firefighter training, have received grant funding. This emphasis on prevention and detection is changing. Recent changes in the grant program are more likely to result in consideration of response and recovery projects, according to DHS officials. They told us that the DHS Port Security Grant Program is undergoing a fundamental shift from a facility security focus to a more comprehensive approach to managing risk within ports. The Office of Grants and Training, within the Preparedness Directorate, is working with the Coast Guard to develop an integrated, risk-based decision-making process for allocating grant funds for each port area. This shift in strategy recognizes that port security entails not only prevention and detection activities but also response and recovery capabilities. Plans for fiscal year 2007 grant guidance will place more emphasis on post-incident response projects, according to DHS officials. The SAFE Port Act of 2006 likewise emphasizes a risk-based approach for port security grants. 
To make effective judgments about such projects, performance measures are needed to quantitatively determine the spill and terrorism resources that should be available. Such measures help decide the extent to which a given resource is needed to effectively conduct a response within a given time period. At the time of our review, DHS was surveying available emergency response capabilities within a given port, according to officials from DHS’s Office of Infrastructure Protection. In September 2006, the New York City Fire Department Chief of Counterterrorism and Emergency Preparedness questioned whether the nation is prepared for an emergency and called for performance measures that emphasized (1) capability (What can we do?), (2) capacity (How much can we do?), (3) proficiency (How well can we perform?), and (4) deployment (How quickly can we deploy capabilities?). As we have previously reported, in the absence of comparable standards for emergency responder performance, it is difficult to assess whether grant resources will be directed effectively to reduce risk. Without such performance measures, the federal government would not be able to conduct an analysis, based on reducing overall risk, that could be used to set priorities for acquiring needed response resources. Performance measures are critical for setting priorities to effectively allocate federal funds. The Captain of the Port may assist local authorities in reviewing the adequacy of the port’s overall marine firefighting and salvage capability. Such qualitative reviews assess a range of factors related to the nature of operations within the port. However, these assessments cannot set priorities for addressing these shortfalls because they do not have quantitative performance measures that would provide a way to compare one shortfall against another to determine such priorities. Other related assessments face the same priority-setting issues. 
A recent qualitative advisory report for siting a potential future LNG facility illustrates this problem. The assessment identified the need to send firefighters to specialized fire schools on an annual basis to become trained in fighting LNG fires, as well as to provide local firefighters with additional training on hazardous materials and confined space rescue. The assessment also identified a range of equipment procurement needs, including additional fireboats capable of mitigating a large LNG spill on water as well as dry chemicals and foam caches for extinguishing any resulting fire. While all these shortfalls may need to be addressed, the assessments do not provide a road map for setting federal funding priorities. The ship-based supply chain for energy commodities remains threatened and vulnerable, and appropriate security throughout the chain is essential to ensure safe and efficient delivery. The threats are especially strong internationally, where the United States faces limitations in ensuring that facilities in foreign ports are meeting security standards and in protecting shipments in international waters. Domestically, the nexus for strengthening security efforts rests with the U.S. Coast Guard, which has primary responsibility for security actions in U.S. ports and waterways. Despite considerable efforts to protect ports and the energy traffic in them, the level of protection is not where the Coast Guard believes it should be. At some ports Coast Guard units are not meeting their own levels of required security activities. Growing demand for Coast Guard resources requires that the Coast Guard take action on several fronts. In adjusting security standards to take into account its limited resources, the Coast Guard needs to assure itself and other stakeholders that its adjustments are based on a careful assessment of risk. 
This process has begun with the Coast Guard’s ongoing assessment of risks associated with all CDC commodities, and since this assessment is already under way, we do not see a need to make a recommendation in this case. The results of that study, and of any comparative analysis that includes hazardous materials not on the CDC list, will be important in a careful and dispassionate analysis for ensuring that available resources are deployed in such a way that commodities receive protection commensurate with the relative risks involved. This is especially important with the expected growth in LNG imports. Similarly, we believe that the results of the risk analyses stemming from use of the Maritime Security Risk Assessment Model will be important in determining how field units can best make use of security resources at their ports. With the ability to compare different targets and different levels of protection offered by security stakeholders, the model should allow the Coast Guard to take a more complete accounting of the various risks at U.S. ports. These two efforts are vital inputs that are needed to ensure an accurate reflection of security risks to tankers and the ports that receive them. Local Coast Guard units have been active in preparing for the coming growth in LNG shipments, engaging with local law enforcement agencies as a means to augment Coast Guard resources. The assistance the Coast Guard already receives from state and local law enforcement is vital for many units as they try to meet security activity requirements with limited resources. Coast Guard headquarters, however, needs to do more to help these local efforts. More specifically, it needs to begin centralized planning for how to address resource shortfalls across many locations. As LNG facilities continue to multiply, the resulting increase in workload will affect some Coast Guard units but not others, necessitating a centralized response as well as a port-specific one.
It is important for the Coast Guard to begin this centralized planning soon, when attention can also be paid to assessing the options for partnering with state or local law enforcement agencies to ensure appropriate security. This broader planning is important for ensuring a proper distribution of resources to best meet the Coast Guard’s diverse responsibilities. In the event of a successful attack on an energy commodity tanker, ports would need to provide an effective, integrated response to protect public safety and the environment, conduct a terrorism investigation, and restore operations in a timely manner. Consequently, clearly defined and understood roles and responsibilities for all stakeholders who would need to respond are needed to ensure an effective response. Operational plans for the response, among the various levels of government involved, should be explicitly linked. As we have reported previously, it is essential that these roles and responsibilities be clearly communicated and understood. Furthermore, while we recognize that ports may have exercise priorities other than responding to a terrorist attack on a tanker, we believe that combined spill and terrorism response exercises should be considered and pursued in ports that are considered to be at risk. In addition, national-level guidance has generally suggested that ports plan for mitigating the economic consequences of an attack. In implementing the post-incident recovery portions of the SAFE Port Act, DHS has an opportunity to provide specific guidance for how ports could plan for lessening potentially significant economic consequences, particularly if an attack results in a port closure. Finally, DHS has just begun to focus more on providing funding for response resources through the Port Security Grant program. However, DHS cannot be assured that it will appropriately target funding to the projects that most reduce overall risk because it has not developed quantitative performance measures. 
These measures would allow DHS to set priorities for funding on the basis of reducing overall risk. To make effective judgments about such projects, performance measures are needed to quantitatively determine the spill and terrorism resources that should be available.

We recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following actions:

- Develop a national resource allocation plan that will balance the need to meet new LNG security responsibilities with other existing security responsibilities and other Coast Guard missions. This plan needs to encompass goals and objectives, timelines, impacts on other missions, roles of private sector operators, and use of existing state and local agency capacity.
- Develop national-level guidance that ports can use to plan for helping to mitigate economic consequences, particularly in the case of port closures.

We also recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard and that the Attorney General direct the Director of the Federal Bureau of Investigation to work together to take the following two actions:

- At the national level, help ensure that a detailed operational plan has been developed that integrates the different spill and terrorism response sections of the National Response Plan.
- At the local level, help ensure that spill and terrorism response activities are integrated for the best possible response by maximizing the integration of spill and terrorism response planning and exercises at ports that receive energy commodities where attacks on tankers pose a significant threat.

We recommend that the Secretary of Homeland Security work with federal, state, and local stakeholders to develop explicit performance measures for emergency response capabilities and use them in risk-based analyses to set priorities for acquiring needed response resources. 
We provided a draft of this report to the Departments of Defense, State, Justice, and Homeland Security, including the Coast Guard, for their review and comment. These departments provided formal written comments, except for the Department of State, which provided oral comments. The Department of Defense, in its written comments, concurred with our recommendations. The Department of Justice, through the FBI, and the Department of Homeland Security generally concurred with our recommendations and provided specific comments on the recommendations, which are detailed below. Regarding our recommendation that the Coast Guard develop a national resource allocation plan that takes into account new LNG security responsibilities along with its other mission demands, DHS generally concurred. It stated, however, that while it agrees with the need to address resource demands based on forecasted increases in LNG imports, LNG is one of many Certain Dangerous Cargoes that add risk to the maritime environment, and the Coast Guard would address the risk from CDCs as a whole. We agree that there are other dangerous cargoes and that it is logical for the Coast Guard to review them holistically in targeting its resources to where the risks are greatest. On the basis of its comments, the Coast Guard plans to examine the risk caused by dangerous commodities and to take a number of steps to allocate resources. We will monitor the Coast Guard’s actions to see if these actions, collectively or in combination with a plan, allow it to optimally allocate its limited resources to meet growing security requirements along with its various other mission needs. Such a plan is important to ensure the best distribution of resources to meet the Coast Guard’s diverse responsibilities. Regarding our recommendation to develop national-level guidance to help ports plan how to mitigate economic consequences, particularly in the case of port closures, DHS generally concurred. 
It stated that its experience from Hurricane Katrina showed that disruptions to the maritime transportation system can have significant economic impacts and that these impacts need to be considered during recovery actions. It also stated that the Coast Guard, in partnership with CBP, is currently engaged in a broad effort to improve maritime recovery planning. While information on this effort was not provided to us during our review, according to its comments, the Coast Guard seems to recognize the problem and is taking action to address the basis of our concern. Regarding our recommendation to develop a detailed national operational plan that integrates the spill and terrorism sections of the National Response Plan, both DHS and the FBI generally concurred. Both stated, however, that the NRP itself already serves as the basis for integrating such response planning, and the FBI did not concur with the need to develop a separate operational plan. As we have noted in prior reports, effective planning and coordination require the development of detailed operational plans for response. While the NRP serves as a strategy-level doctrinal document, it is not an operational plan. We remain concerned that an intentional attack on an energy commodity tanker in a U.S. port may not be met with the best possible response without such a plan to address the specific circumstance in which both the spill and terrorism response sections of the NRP must be integrated and implemented simultaneously. Without a detailed operational plan for this situation, effective and efficient law enforcement investigation and environmental consequence mitigation may be hindered. As we have recently reported, the implementation of the NRP following Hurricane Katrina identified concerns with coordination within and between federal government entities using the NRP. 
Further, the October 2005 draft version of the MOTR called for DHS and DOJ to develop specific, detailed supporting operational plans for their responsibilities, in close consultation with other departments and agencies. However, this requirement was dropped from the October 2006 final version of the MOTR. As a result, no detailed operational plans exist for the situation described in the response section of this report. We believe our recommendation will help fill the guidance gap between doctrine and port-level operations. Regarding our recommendation to maximize terrorism and spill response planning and exercises at the local level for the best possible response, DHS generally concurred and the FBI concurred. DHS said that while these efforts must be coordinated, they need not be an amalgamation. It stated that there are opportunities for this coordination at the local committees that are responsible for planning terrorism and spill response and that, because the Coast Guard serves as chair for both committees, coordination already occurs. In its comments, the FBI listed exercises that combined terrorism and spill response. It also stated that local Maritime Liaison Agents were specifically directed to engage agency partners to ensure integration of the FBI response. While these actions are beneficial for increased integration, there is no direct link between the actual local terrorism plan and spill response plan. Also, because terrorism response plans have distribution limited to those who need to know, many nonsecurity stakeholders, particularly in the spill response community, would not have access to these plans in an emergency, creating the possibility that these stakeholders could take actions that hinder the terrorism response. 
Regarding our recommendation that the Secretary of Homeland Security work with federal, state, and local stakeholders to develop explicit performance measures for emergency response capabilities, DHS responded that it was taking the recommendation under advisement and was exploring approaches to address our recommendation. We will follow up with DHS later to get its formal position on this recommendation. All of the respondents provided technical comments that we incorporated into the report as appropriate. Written comments from DHS are reproduced in appendix V, written comments from FBI are reproduced in appendix VI, and written comments from the Department of Defense are reproduced in appendix VII. As arranged with your office, unless you publicly announce its contents earlier, we plan on no further distribution of this report until 30 days after its issue date. At that time we will send copies of this report to the Secretary of Homeland Security, the Commandant of the U.S. Coast Guard, and the Attorney General. We will also make copies available to others at no charge at GAO’s Web site at http://www.gao.gov. This report was prepared by two teams within GAO, each of which concentrated on particular aspects of the assignment. If you or your staffs have any questions regarding (1) the types of threats to tankers carrying energy commodities and (2) the measures being taken to protect tankers and the challenges federal agencies face in making these actions effective, please call Stephen L. Caldwell at (202) 512-9610, or [email protected]. For questions regarding (1) the potential consequences of a successful attack on tankers or energy infrastructure or (2) the plans in place and the potential challenges in responding to an attack, please call Mark Gaffigan at (202) 512-3841, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IX. The objectives of this report were to (1) determine the types of terrorist threats to tankers carrying energy commodities and the potential consequences of a successful attack; (2) describe what measures are being taken both internationally and domestically to protect these tankers, and what challenges, if any, federal agencies face in making these actions effective; and (3) if a terrorist attack succeeds despite these protective measures, describe what plans are in place to respond and discuss the potential challenges federal agencies may face in responding to a future attack. To determine the types of terrorist threats to tankers carrying energy commodities, we conducted interviews with maritime intelligence officials from the U.S. Coast Guard and Navy at the National Maritime Intelligence Center. We also met with Coast Guard and Customs and Border Protection officials at headquarters and in the field responsible for port and vessel security to determine their views about maritime terrorism related to energy tankers and infrastructure. During site visits to domestic ports, we also interviewed operators of petroleum waterside facilities and tankers to determine their understanding of the threat environment. We also met with shipping and vessel management companies to discuss their views of the threats they face at foreign loading ports and while in transit to the United States. To gain an international perspective on threats to tankers and loading facilities, we conducted interviews with officials from international maritime organizations, international shipping and petroleum trade associations, vessel operators, vessel insurers, and private security and risk management organizations. We also reviewed classified intelligence documents, including port threat assessments, and government directives related to maritime security. 
Continuing with our first objective, to describe the potential public safety, environmental, and economic consequences of a successful terrorist attack on a waterside energy facility or tanker, we met with officials from the Department of Energy, the Environmental Protection Agency, the U.S. Maritime Administration, the Coast Guard, and the Federal Energy Regulatory Commission. In addition, we conducted a panel study with academic and industry experts to specifically determine the consequences of an attack on a liquefied natural gas (LNG) tanker. We also visited major petroleum, LNG, and liquefied petroleum gas terminals to discuss possible consequences of attacks at these locations. We also analyzed U.S. government data on imports of petroleum and other energy commodities into the United States and on the ports receiving those imports. Finally, we reviewed published information, such as studies and scholarly articles, to determine the environmental and public health and safety consequences of a terrorist attack on a petroleum waterside facility or tanker. To describe measures that are being taken to protect these tankers, and what challenges, if any, federal agencies face in making these actions effective, we interviewed a variety of foreign and domestic government officials and private industry representatives. To determine the actions taken in foreign nations, we visited four countries. The selection criteria for our overseas site visits were the amount of energy commodities exported to the United States and the opportunity to learn about maritime anti-terrorism best practices. In the countries we visited, we conducted interviews with government officials responsible for maritime security activities and petroleum waterside facility and tanker operators. 
We also obtained information from the Coast Guard, international maritime organizations, tanker operators, vessel management companies, and insurers to understand port and vessel security practices and procedures overseas and while tankers are in transit to the United States. To determine the actions taken domestically, we met with officials in the Departments of Homeland Security, Defense, State, Energy, Transportation, and Justice; private sector facility and vessel operators; and state and local officials dealing with homeland security, emergency response, and law enforcement. We also conducted site visits to a nonprobability sample of petroleum and liquefied gas import and export facilities in the United States. During our site visits we observed security practices and conducted interviews with representatives of federal agencies that oversee the security of the energy facilities, as well as facility security officers and relevant local and state law enforcement officials. The information obtained from these site visits cannot be generalized to all petroleum and liquefied gas import and export facilities nationwide. We also reviewed government and industry documents and data sources relevant to domestic actions taken by agencies and companies to prevent terrorist attacks. To establish criteria for evaluating the Coast Guard’s ability to mitigate the risk of maritime terrorism, we obtained 9 months of Operation Neptune Shield (ONS) Scorecard security performance data (the Coast Guard’s performance measurement tool for tracking performance in meeting security activities at the nation’s most strategically important ports) from select Coast Guard field units, covering the months of November 2005 through July 2006. We chose to review scorecard data for ports that the U.S. Maritime Administration identified as being top ports for receiving energy commodity tankers. 
We calculated the ONS 9-month average of both the monthly activity requirement attainment percentages and the share of workload conducted by other government agencies. In conducting this work, we met with Coast Guard headquarters personnel on several occasions to further our understanding. We also asked Coast Guard officials responsible for the scorecard data what steps they took to ensure the reliability of the data and determined that the data were sufficiently accurate for our purposes. To describe what plans are in place for responding to a terrorist attack, should one occur despite protective measures, and discuss the challenges federal agencies may face in responding, we conducted interviews with officials from the Departments of Homeland Security and Justice and the Environmental Protection Agency, as well as officials representing port authorities, state and local offices of public safety and emergency management, oil and gas facilities, and first responders, including police and fire departments. These interviews were conducted to identify spill, terrorism, and economic response plans and priorities; mechanisms for response coordination; access to resources; training availability; types of exercises conducted; potential communications challenges; performance metrics; and information-sharing systems. During our site visits, we observed port operations and the working relationships between some government and private stakeholders. To assess the integration of national and local spill and terrorism response plans, we gathered and reviewed identified plans. Finally, we interviewed emergency response officials and reviewed after-action reports to identify best practices and lessons learned as a result of emergency response exercises and incidents. We conducted our work from April 2005 to February 2007 in accordance with generally accepted government auditing standards. 
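The 9-month scorecard averaging described above is arithmetically simple. The sketch below illustrates the calculation; the monthly figures are hypothetical placeholders, since the actual ONS scorecard data are not reproduced in this report.

```python
# Illustrative sketch of the 9-month Operation Neptune Shield (ONS)
# scorecard averaging described in the methodology. The monthly values
# below are hypothetical; the real scorecard data are not public here.

# For one notional port unit, November 2005 through July 2006 (9 months):
# monthly activity-requirement attainment (percent), and the share of
# that security workload performed by other government agencies (percent).
attainment_pct = [72, 68, 75, 80, 77, 74, 69, 71, 78]
other_agency_share_pct = [20, 25, 18, 22, 30, 27, 24, 21, 26]

def nine_month_average(monthly_values):
    """Average a series of monthly percentages over the review period."""
    return sum(monthly_values) / len(monthly_values)

avg_attainment = nine_month_average(attainment_pct)
avg_other_share = nine_month_average(other_agency_share_pct)

print(f"Average attainment: {avg_attainment:.1f}%")
print(f"Average share performed by other agencies: {avg_other_share:.1f}%")
```

The same averaging would be repeated for each port unit reviewed, giving one attainment figure and one workload-share figure per port for the 9-month window.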
Crude oil: Used to produce a wide array of petroleum products, including gasoline, diesel and jet fuels, heating oil, lubricants, asphalt, plastics, and many other products used for their energy or chemical content. Crude oils range from very light (high in gasoline) to very heavy (high in residual oils). Sour crude is high in sulfur content. Sweet crude is low in sulfur and therefore often more valuable than other kinds.

Motor gasoline: A complex mixture of relatively volatile hydrocarbons with or without small quantities of additives, blended to form a fuel suitable for use in spark-ignition engines. Motor gasoline includes conventional gasoline; all types of oxygenated gasoline, including gasohol; and reformulated gasoline, but excludes aviation gasoline.

Jet fuel: A refined petroleum product used in jet aircraft engines. Kerosene-type jet fuel is used for commercial and military turbojet and turboprop aircraft engines. Naphtha-type jet fuel is used primarily for military turbojet and turboprop aircraft engines because it has a lower freeze point than other aviation fuels and meets engine requirements at high altitudes and speeds.

Liquefied natural gas (LNG): A natural gas that has been cooled to minus 260 degrees Fahrenheit to a liquid state so that it can be transported. Consists almost entirely of methane (85-95 percent) along with small concentrations of ethane, propane, butane, and trace amounts of nitrogen. Mainly used as fuel for electricity generation, home heating, industrial manufacturing, and, to a lesser extent, motor vehicles.

Liquefied petroleum gas (LPG): A group of hydrocarbons, such as propane and butane, derived mainly as a byproduct of oilfield production and crude oil refining processes. The vast majority of LPG traded internationally consists of propane and butane cargo. LPG has a variety of agricultural, household, petrochemical, and, to a lesser extent, vehicle fuel applications.

Selected attacks on energy tankers and infrastructure include the following:

Nigeria: Militants attacked an energy facility and abducted foreign oil workers in the oil-rich Niger delta. The Movement for the Emancipation of the Niger Delta is responsible for a wave of militant attacks in Nigeria.

Saudi Arabia: Two cars packed with explosives tried to attack a major oil processing facility in the country's eastern province. The al Qaeda suicide attackers were killed along with two Saudi guards.

Iraq: Closely timed suicide boat attacks on northern Persian Gulf oil terminals left two Navy sailors and one Coast Guardsman dead and five others injured.

Southeast Asia: The Free Aceh Movement claimed responsibility for hijacking the M/V Penrider, a fully laden tanker shipping fuel oil. Three hostages were eventually released following a ransom payment.

Piracy: Ten pirates boarded a tanker from a speedboat, took the helm, altered the speed, disabled the ship's radio, and steered the vessel for an hour. The pirates left with cash and abducted the captain and first officer.

Yemen: A small boat filled with explosives rammed the side of the French-flagged oil tanker Limburg as it was approaching the Ash Shihr Terminal several miles off the coast. The suicide attack killed one crew member, and 90,000 barrels of oil spilled.

Risk management is a systematic approach for analyzing risk and deciding how best to address it. Because resources are limited and cannot eliminate all risks, careful choices need to be made in deciding which actions yield the greatest benefit. Figure 19 depicts a risk management framework that is our synthesis of government requirements and prevailing best practices previously reported. To be effective, this process must be repeated when threats or conditions change to incorporate any new information to adjust and revise the assessments and actions. Setting strategic goals, objectives, and constraints is a key first step in implementing a risk management approach and helps to ensure that management decisions are focused on achieving a strategic purpose. 
These decisions should take place in the context of an agency’s strategic plan that includes goals and objectives that are clear, concise, and measurable. Risk assessment, a critical step in the approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of risk. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application often involves assessing three key elements—threat, criticality, and vulnerability:

- A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities.
- A criticality or consequence assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes are relatively more important to protect from attack.
- A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses.

Information from these three assessments contributes to an overall risk assessment that characterizes risks on a scale such as high, medium, or low and provides input for evaluating alternatives and management prioritization of security initiatives. The next two steps involve deciding what mitigation measures to adopt. Alternatives evaluation considers what actions may be needed to address identified risks, the associated costs of taking these actions, and any resulting benefits. This information is provided to agency management to aid in completing the next step—selecting alternative actions best suited to the unique needs of the organization. 
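As a rough illustration of how the three assessment elements can feed an overall high/medium/low characterization, the sketch below combines hypothetical threat, vulnerability, and consequence scores into a relative risk rating. The 1-5 scale, the multiplicative combination, the rating thresholds, and the example assets are all illustrative assumptions, not the Coast Guard's actual Maritime Security Risk Assessment Model.

```python
# Illustrative sketch of characterizing risk from the three assessment
# elements described above. The 1-5 scoring scale, the multiplicative
# combination, and the example assets are assumptions for illustration,
# not the actual Maritime Security Risk Assessment Model.

def risk_score(threat, vulnerability, consequence):
    """Combine element scores (each 1-5) into a relative risk score (1-125)."""
    return threat * vulnerability * consequence

def characterize(score):
    """Map a combined score onto the high/medium/low scale mentioned above."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

# Hypothetical assets with (threat, vulnerability, consequence) scores.
assets = {
    "LNG terminal": (4, 3, 5),
    "Refined-products pier": (3, 3, 3),
    "Small marina": (2, 2, 1),
}

# Rank assets by relative risk, highest first, to inform prioritization.
for name, (t, v, c) in sorted(assets.items(),
                              key=lambda kv: risk_score(*kv[1]),
                              reverse=True):
    s = risk_score(t, v, c)
    print(f"{name}: score {s} ({characterize(s)})")
```

The value of even a simple scheme like this is the ranking it produces: it gives decision makers a common basis for comparing dissimilar assets when evaluating alternatives in the next step of the framework.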
The final step in the approach involves implementing the selected actions and evaluating the extent to which they mitigate risk. This involves developing criteria for monitoring the performance of these actions and follow-up to ensure that these actions are effective and reflect evolving risk. Risk management has received widespread support from Congress, the President, and the Secretary of Homeland Security as a tool that can help set priorities and inform decisions about mitigating risks. In addition to the contacts named above, Jonathan Bachman, Jason Berman, Steven Calvo, Jonathan Carver, Frances Cook, Frank Chase Cook, Amy Higgins, David Lysy, Jean McSween, Erica Miles, Jobenia Odum, Josh Ormond, Janice Poling, Franklin Rusco, Peter Singer, Carol Shulman, Stan Stenerson, Barbara Timmerman, James Turkett, Jim Wells, and Margaret Wrightson made key contributions to this report. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. Maritime Security: One Year Later: A Progress Report on the SAFE Port Act. GAO-08-171T. Washington, D.C.: October 16, 2007. Maritime Security: The SAFE Port Act and Efforts to Secure Our Nation’s Seaports. GAO-08-86T. Washington, D.C.: October 4, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1081T. Washington, D.C.: September 6, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. 
Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Maritime Security: Public Safety Consequences of a Terrorist Attack on a Tanker Carrying Liquefied Natural Gas Need Clarification. GAO-07-316. Washington, D.C.: February 23, 2007. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Coast Guard: Non-Homeland Security Performance Measures Are Generally Sound, but Opportunities for Improvement Exist. GAO-06-816. Washington, D.C.: August 16, 2006. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006. Maritime Security: Information Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Energy Security: Issues Related to Potential Reductions in Venezuelan Oil Production. GAO-06-668. Washington, D.C.: June 27, 2006. Coast Guard: Observations on Agency Performance, Operations, and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. 
Homeland Security: DHS Is Taking Steps to Enhance Security at Chemical Facilities, but Additional Authority Is Needed. GAO-06-150. Washington, D.C.: January 27, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 2005. Border Security: Strengthened Visa Process Would Benefit from Additional Management Actions by State and DHS. GAO-05-859. Washington, D.C.: September 13, 2005. Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005. Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. 
Washington, D.C.: April 7, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing Its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.

U.S. energy needs rest heavily on ship-based imports. Tankers bring 55 percent of the nation's crude oil supply, as well as liquefied gases and refined products like jet fuel. This supply chain is potentially vulnerable in many places here and abroad, as borne out by several successful overseas attacks on ships and facilities. GAO's review addressed (1) the types of threats to tankers and the potential consequences of a successful attack, (2) measures taken to protect tankers and challenges federal agencies face in making these actions effective, and (3) plans in place for responding to a successful attack and potential challenges stakeholders face in responding. GAO's review spanned several foreign and domestic ports and involved multiple steps to analyze data and gather opinions from agencies and stakeholders. 
The supply chain faces three main types of threats--suicide attacks such as explosive-laden boats, "standoff" attacks with weapons launched from a distance, and armed assaults. Highly combustible commodities such as liquefied gases have the potential to catch fire or, in a more unlikely scenario, explode, posing a threat to public safety. Attacks could also have environmental consequences, and attacks that disrupt the supply chain could have a severe economic impact. Much is occurring, internationally and domestically, to protect tankers and facilities, but significant challenges remain. Overseas, despite international agreements calling for certain protective steps, substantial disparities exist in implementation. The United States faces limitations in helping to increase compliance, as well as limitations in ensuring safe passage on vulnerable transport routes. Domestically, units of the Coast Guard, the lead federal agency for maritime security, report insufficient resources to meet their own self-imposed security standards, such as escorting ships carrying liquefied natural gas. Some units' workloads are likely to grow as new liquefied natural gas facilities are added. Coast Guard headquarters has not developed plans for shifting resources among units. Multiple response plans are in place to address an attack, but stakeholders face three main challenges in making them work. First, plans for responding to a spill and to a terrorist threat are generally separate from each other, and ports have rarely exercised these plans simultaneously to see if they work effectively together. Second, ports generally lack plans for dealing with economic issues, such as prioritizing the movement of vessels after a port reopens. The President's maritime security strategy calls for such plans. Third, some ports report difficulty in securing response resources to carry out planned actions.
Federal port security grants have generally been directed at preventing attacks, not responding to them, but a more comprehensive risk-based approach is being developed. Decisions about the need for more response capabilities are hindered, however, by a lack of performance measures tying resource needs to effectiveness in response.
Section 861 of the NDAA for FY2008 directed the Secretary of Defense, the Secretary of State, and the USAID Administrator to sign an MOU related to contracting in Iraq and Afghanistan. The law specified a number of issues to be covered in the MOU, including the identification of each agency’s roles and responsibilities for matters relating to contracting in Iraq and Afghanistan, responsibility for establishing procedures for the movement of contractor personnel in the two countries, responsibility for collecting and referring information related to violations of the Uniform Code of Military Justice (UCMJ) or the Military Extraterritorial Jurisdiction Act (MEJA), and identification of common databases to serve as repositories of information on contracts and contractor personnel. The NDAA for FY2008 requires the databases to track at a minimum: for each contract, a brief description of the contract, its total value, and whether it was awarded competitively; and for contractor personnel working under contracts in Iraq or Afghanistan, total number employed, total number performing security functions, and total number who have been killed or wounded. DOD, State, and USAID signed the MOU in July 2008. The agencies agreed that SPOT, a Web-based system initially designed and used by DOD, would be the system of record for the statutorily required contract and contractor personnel information. The MOU specified that SPOT would include information on DOD, State, and USAID contracts with more than 14 days of performance in Iraq or Afghanistan or valued at more than the simplified acquisition threshold, which the MOU stated was $100,000, as well as information on the personnel working under those contracts. In contrast, the NDAA for FY2008 established a 14-day threshold for inclusion in the database but did not specify a minimum dollar value. As agreed in the MOU, DOD is responsible for all maintenance and upgrades to the SPOT database.
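The minimum data elements the NDAA for FY2008 requires the databases to track, as listed above, can be pictured as two record types. This is an illustrative Python sketch only; the class and field names are our own, not SPOT's actual schema.

```python
from dataclasses import dataclass

# Illustrative sketch of the NDAA-required minimum data elements.
# Class and field names are invented for illustration, not SPOT's schema.

@dataclass
class ContractRecord:
    description: str             # brief description of the contract
    total_value: float           # total contract value
    awarded_competitively: bool  # whether it was awarded competitively

@dataclass
class PersonnelSummary:
    total_employed: int                # total contractor personnel
    performing_security_functions: int
    killed_or_wounded: int
```

For example, the agency-reported totals cited later in this report would populate a `PersonnelSummary(226_475, 27_603, ...)` record.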
The agencies further agreed to negotiate funding arrangements for any agency-unique requirements and for specialized training requirements. Each agency is to ensure that data elements related to contractor personnel, such as the number of personnel employed on each contract in Iraq or Afghanistan, are entered into SPOT and to require its contractors to enter that information accurately. Information entered into SPOT goes beyond a simple count of contractor personnel: the system is designed to track individuals by name and record information such as the contracts they are working under, deployment dates, and next of kin. Data elements, such as contract value and whether it was awarded competitively, are to be imported into SPOT from FPDS-NG, the federal government’s system for tracking information on contracting actions. While implementation of SPOT is still under way, DOD, State, and USAID’s criteria for deciding which contractor personnel to enter into the system differed from what was agreed to in the MOU and varied by country. This has resulted in not all contractor personnel being entered into SPOT as agreed to in the MOU. Further, SPOT currently does not have the capability to track all of the required contract information or readily generate reports on the total number of killed or wounded contractor personnel. For the majority of our review period, DOD, State, and USAID were phasing in the MOU requirement to use SPOT to track information on contracts and the personnel working on them in Iraq and Afghanistan. In January 2007, DOD designated SPOT as its primary system for collecting data on contractor personnel deployed with U.S. forces and directed contractor firms to enter personnel data for contracts performed in Iraq and Afghanistan. State started systematically entering information for both Iraq and Afghanistan into SPOT in November 2008. In January 2009, USAID began requiring contractors in Iraq to enter personnel data into SPOT.
However, USAID has not yet imposed a similar requirement on its contractors in Afghanistan and has no time frame for doing so. In implementing SPOT, DOD’s, State’s, and USAID’s criteria for determining which contractor personnel are entered into SPOT varied and were not consistent with those contained in the MOU, as the following illustrate. Regarding contractor personnel in Iraq, DOD, State, and USAID officials stated that the primary factor for deciding to enter contractor personnel into SPOT was whether a contractor needed a SPOT-generated letter of authorization (LOA). Contractor personnel need SPOT-generated LOAs to, among other things, enter Iraq, receive military identification cards, travel on U.S. military aircraft, or, for security contractors, receive approval to carry weapons. However, not all contractor personnel, particularly local nationals, in Iraq need LOAs and agency officials informed us that such personnel were not being entered into SPOT. In contrast, DOD officials informed us that individuals needing LOAs were entered into SPOT even if their contracts did not meet the MOU’s 14-day or $100,000 thresholds. For Afghanistan, DOD offices varied in their treatment of which contractor personnel should be entered into SPOT. Officials with one contracting office stated that the need for an LOA determined whether someone was entered into SPOT. As in Iraq, since local nationals generally do not need LOAs, they are not being entered into SPOT. In contrast, DOD officials with another contracting office stated that they follow DOD’s 2007 guidance on the use of SPOT. According to the guidance, contractor personnel working on contracts in Iraq and Afghanistan with more than 30 days of performance and valued over $25,000 are to be entered into SPOT—as opposed to the MOU threshold of 14 days of performance or valued over $100,000. Agency officials have raised questions about the need to enter detailed information into SPOT on all contractor personnel. 
Some DOD officials we spoke with questioned the need to individually track all contractor personnel as opposed to their total numbers given the cost of collecting these detailed data compared to the benefit of having this information. Similarly, USAID officials questioned the need to enter detailed information as agreed to because personnel working on its contracts in Afghanistan generally do not live or work in close proximity to U.S. government personnel and typically do not receive support services from the U.S. government. USAID officials also cited security concerns as one factor affecting their decision on who should be entered into SPOT. USAID officials explained that they have held off entering Iraqi or Afghan nationals into SPOT because identifying local nationals who work with the U.S. government by name could put those individuals in danger should the system be compromised. To help address this concern, DOD officials said that they have begun developing a classified version of SPOT. However, USAID officials told us the agency would most likely not be able to use a classified system due to limited access to classified computers. Because of the varying criteria on who should be entered into the system, the information in SPOT does not present an accurate picture of the total number of contractor personnel in Iraq and Afghanistan. For example, officials from all three agencies expressed confidence that the SPOT data were relatively complete for contractor personnel who need an LOA in Iraq. Conversely, agency officials acknowledged that SPOT does not fully reflect the number of local nationals working on their contracts. Agency officials further explained that ensuring that information on local nationals is in SPOT is challenging because their numbers tend to fluctuate due to the use of day laborers and because local firms do not always keep track of the individuals working on their projects. 
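The inconsistent entry criteria described above (the MOU's test of more than 14 days of performance or a value over $100,000, versus DOD's 2007 guidance of more than 30 days of performance and a value over $25,000) can be expressed as simple predicates. This is an illustrative sketch, not agency code; the function and parameter names are ours.

```python
# Illustrative sketch (not agency code) of the differing SPOT entry
# criteria described above. Names are invented for illustration.

def meets_mou_criteria(days_of_performance: int, contract_value: float) -> bool:
    """MOU test: more than 14 days of performance in Iraq or Afghanistan,
    OR value above the $100,000 threshold stated in the MOU."""
    return days_of_performance > 14 or contract_value > 100_000

def meets_dod_2007_guidance(days_of_performance: int, contract_value: float) -> bool:
    """DOD's 2007 guidance: more than 30 days of performance AND value
    over $25,000."""
    return days_of_performance > 30 and contract_value > 25_000

# A contract with 20 days of performance and a $50,000 value falls under
# the MOU criteria but not the 2007 guidance, so which contracts get
# entered depends on which rule an office follows.
print(meets_mou_criteria(20, 50_000))       # True
print(meets_dod_2007_guidance(20, 50_000))  # False
```

Because one test is an "or" over lower thresholds and the other an "and" over different thresholds, neither rule's set of qualifying contracts contains the other's, which is one source of the inconsistent SPOT coverage described above.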
DOD officials also explained that they have had to develop workarounds to deal with the fact that SPOT requires a first and last name to be entered for each individual along with a birth date and unique identification number. The officials noted that many Afghan laborers have only one name, do not know their birth dates, and lack identification numbers. SPOT currently lacks the capability to track all of the contract data elements as agreed to in the MOU. While the MOU specifies that contract values, competition information, and descriptions of the services being provided would be pulled into SPOT from FPDS-NG, this capability is not expected to be available until 2010. In the interim, the DOD officials overseeing SPOT’s development told us that SPOT users can manually enter competition information and descriptions, but there is no requirement for them to do so. Since SPOT is not designed to let users enter contract dollar values, the DOD officials stated that SPOT and FPDS-NG are being periodically merged to identify contract values. Even when the direct link is established, pulling FPDS-NG data into SPOT may present challenges because of how data are entered into SPOT. First, information from the two systems can only be merged if the contract has been entered into SPOT. If no contractor personnel working on a particular contract have been entered, then the contract will not appear in SPOT and its information cannot be linked with the information in FPDS-NG. Second, while contract numbers are the unique identifiers that will be used to match records in SPOT to those in FPDS-NG, SPOT users are not required to enter contract numbers in a standardized manner. In our review of SPOT data, we determined that at least 12 percent of the contracts had invalid contract numbers and, therefore, could not be matched to records in FPDS-NG. Additionally, contract numbers may not be sufficient to identify unique contracts.
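The matching problem described above can be pictured with a few invented records: SPOT and FPDS-NG can only be linked when their contract-number keys agree, so free-form or invalid entries defeat the join. All identifiers and dollar figures below are made up for illustration.

```python
# Hypothetical records (not actual SPOT or FPDS-NG data) illustrating
# why non-standardized contract numbers break record matching.

spot = {
    "W91GY0-08-D-0001": {"personnel": 140},
    "w91gy0 08 d 0002": {"personnel": 75},   # same scheme, free-form entry
    "UNKNOWN-123": {"personnel": 12},        # invalid number, unmatchable
}
fpds = {
    "W91GY0-08-D-0001": {"value": 5_000_000, "competed": True},
    "W91GY0-08-D-0002": {"value": 2_500_000, "competed": False},
}

def normalize(number: str) -> str:
    """One possible normalization: uppercase, keep only alphanumerics."""
    return "".join(ch for ch in number.upper() if ch.isalnum())

fpds_by_key = {normalize(k): v for k, v in fpds.items()}
matched = {}
for k in spot:
    rec = fpds_by_key.get(normalize(k))
    if rec is not None:
        matched[k] = rec

print(len(matched))  # 2 of 3 records link; the invalid number never will
```

Normalization recovers matches lost to formatting differences, but, as the 12 percent figure above suggests, a genuinely invalid contract number cannot be linked by any amount of cleanup.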
Specific orders placed on task order contracts are identified through a combination of the contract number and task order number. However, SPOT users are not required to enter task order numbers. For example, one SPOT entry only contained the contract number without an order number. In reviewing FPDS-NG data, we determined that DOD had placed 12 different orders—ranging from a few thousand dollars to over $129 million—against that contract. Based on the information in SPOT, DOD would not be able to determine which order’s value and competition information should be imported from FPDS-NG. SPOT, as currently designed, also lacks the capability to readily generate reports on the number of killed or wounded contractor personnel. SPOT was upgraded in January 2009 to fulfill the NDAA for FY2008 requirement to track such information. Contractors can now update the status of their personnel in the system, including whether they have been killed or wounded, while agencies can run queries to identify the number of personnel with a current status of killed or wounded. However, the standard queries can only generate a list of personnel currently identified as killed or wounded and cannot be used to identify individuals who previously had the status of killed or wounded and whose records have become inactive or whose injured status changed when they returned to work. For example, if an individual has an injured status today and a query were run, that individual would be included in the report. If that individual then returned to work, the status would change and that individual would not appear on any subsequent injury reports, with the agencies having no means of determining whether the individual was ever injured. DOD, State, and USAID reported to us that there were 226,475 contractor personnel, including 27,603 performing security functions, in Iraq and Afghanistan as of the second quarter in fiscal year 2009. 
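The reporting gap in the killed-or-wounded queries discussed above comes down to querying each person's current status rather than an event history: a status change erases any record that the person was ever injured. A minimal sketch with invented data:

```python
# Invented data illustrating the current-status vs. event-history gap
# described above; "p1".."p3" are hypothetical personnel identifiers.

current_status = {"p1": "active", "p2": "wounded", "p3": "active"}
# p3 was wounded earlier and has since returned to work.

events = [
    ("p2", "wounded"),
    ("p3", "wounded"),
    ("p3", "active"),   # return to work hides p3 from status queries
]

# A current-status query (what the standard SPOT queries resemble, per
# the description above) finds only p2.
currently_wounded = sorted(p for p, s in current_status.items() if s == "wounded")

# An event history preserves every casualty report, so p3 still appears.
ever_wounded = sorted({p for p, s in events if s == "wounded"})

print(currently_wounded)  # ['p2']
print(ever_wounded)       # ['p2', 'p3']
```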
Over the period of our review, DOD reported significantly more contractors than State and USAID, most of whom were working in Iraq. For example, as of the second quarter in fiscal year 2009, DOD reported over 200,000 contractor personnel while State and USAID reported almost 9,000 and over 16,500, respectively. However, due to limitations with the reported data, we determined the data reported by the agencies should not be used to identify trends or draw conclusions about the number of contractor personnel in either country. Specifically, we found that personnel information reported by the three agencies was incomplete and, for DOD, additional factors raise questions about the reported numbers’ reliability. Further, the agencies could not verify whether the reported data were accurate or complete, although they indicated that the data for certain types of contractors, such as those providing security functions, were more complete than other data, such as those for local nationals.

DOD Contractor Personnel

According to DOD officials, the most comprehensive information on the number of DOD contractor personnel in Iraq and Afghanistan comes from the U.S. Central Command’s (CENTCOM) quarterly census. CENTCOM initiated its quarterly census of contractor personnel in June 2007 as an interim measure until SPOT is fully implemented. The census relies on contractor firms to report their personnel data to DOD components, which then aggregate the data and report them to CENTCOM at the end of each quarter. As shown in table 1, DOD’s reported number of contractor personnel for our review period ranged from 200,111 to 231,698, with approximately 7 percent performing security functions over the entire period, on average. DOD officials acknowledge that the census numbers represent only a rough approximation of the actual number of contractor personnel that worked in either country.
Specifically, these officials told us that because of how the data were collected and reported by the various DOD components, it was difficult to compile and obtain an accurate count of contractor personnel. We determined that over the course of our review period the following data issues existed. Contractor personnel information was sometimes incomplete. Most notably, an Army-wide review of fiscal year 2008 third quarter census data determined that the U.S. Army Corps of Engineers did not include approximately 26,000 Afghan nationals working on contracts. However, information on these contractors was included in subsequent censuses. As a result, comparing third quarter and fourth quarter data would incorrectly suggest that there was an increase in the number of contractors in Afghanistan, when in fact the increase is attributable to more accurate counting of personnel. Contractor personnel were being double counted. For example, the system used to record contractor personnel numbers for the Joint Contracting Command-Iraq/Afghanistan was found to have duplicates. As a result, DOD reported a 10 percent decrease in personnel in Iraq in the first quarter of fiscal year 2009 and a 5 percent decrease in contractor personnel in Afghanistan in the second quarter of fiscal year 2009 when duplicates were removed. The process used to collect data changed. For example, a 3 percent decrease in personnel numbers reported in the first quarter of fiscal year 2009 compared to the previous quarter was attributed to the Joint Contracting Command-Iraq/Afghanistan’s decision to begin using a monthly data call to contractors to collect personnel numbers. Data submitted by the DOD components were often of poor quality or inaccurate, which created challenges for CENTCOM to compile quarterly totals. 
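The double counting noted above can be pictured as overlapping submissions from two components; deduplicating by a personnel identifier shrinks the naive aggregate, much as DOD's reported totals dropped when duplicates were removed. The identifiers below are invented:

```python
# Invented submissions from two hypothetical DOD components, showing how
# the same individual counted twice inflates a naive aggregate.

component_a = ["c-101", "c-102", "c-103", "c-104"]
component_b = ["c-103", "c-104", "c-105"]   # c-103 and c-104 appear twice

naive_total = len(component_a) + len(component_b)
deduped_total = len(set(component_a) | set(component_b))

print(naive_total, deduped_total)  # 7 5
```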
During our review of quarterly census data submissions, we identified a DOD component in Afghanistan that provided invalid contract numbers for about 30 percent of its contracts in the second quarter for fiscal year 2009. Also, it was not possible to determine for some submissions how many contractors were working in a specific country. In such cases, the CENTCOM official responsible for the census told us he would either seek clarification from the DOD component that provided the data or use his judgment to determine the correct personnel numbers. In response to our request for information on its contractor personnel in Iraq and Afghanistan, State officials informed us that prior to fiscal year 2009 the department did not systematically track contractor personnel. Instead, State bureaus conducted periodic surveys of their contractors; however, each bureau’s survey covered different time periods. Based on these surveys, which at least one bureau supplemented with SPOT data, State reported that 8,971 contractor personnel, the majority of whom performed security functions, worked on contracts in Iraq and Afghanistan during the first half of fiscal year 2009. Only one bureau provided comparable information for fiscal year 2008, reporting 3,514 personnel working on its contracts in Iraq and Afghanistan over the course of the year. Even relying on a combination of periodic surveys and SPOT, which State implemented in fiscal year 2009, it appears that State underreported its contractor personnel numbers. Specifically, in our analysis of State contract and personnel data, we identified a number of contracts with performance in Iraq or Afghanistan for which contractor personnel numbers were not reported. 
For example, although State provided obligation data on a $3 million contract for operation and maintenance services in Iraq as well as a $5.6 million contract for support services in Afghanistan, information on the number of personnel working on these contracts was not contained in the agency’s periodic surveys or the SPOT data we received. For the personnel numbers reported to us, USAID relied entirely on periodic surveys of its contractors. USAID provided contractor personnel numbers for both Iraq and Afghanistan for all of fiscal year 2008 and the first half of fiscal year 2009. The agency reported that 16,697 personnel, including 5,097 performing security functions, worked on its contracts in Iraq and Afghanistan during the first half of fiscal year 2009. USAID relied on the results of surveys sent to its contractors in Iraq and Afghanistan to respond to our request for contractor personnel information. However, this information appeared to be incomplete. Specifically, agency officials acknowledged the periodic surveys most likely underreported the total number of contractor personnel. For example, an official in Afghanistan informed us that if a USAID contractor firm did not respond to a survey for personnel information, which is sometimes the case since there is no contractual requirement to do so, then personnel working for that firm were not included in the reported numbers. Our analysis of USAID personnel and contract data also indicates that USAID’s numbers are incomplete. Specifically, USAID provided us with personnel data for about 83 percent of its contracts that were active during the period of our review and had performance in Iraq or Afghanistan. We identified a number of contracts for which contractor personnel information was not provided, including contracts to refurbish a hydroelectric power plant and to develop small and medium enterprises in Afghanistan worth at least $6 million and $91 million, respectively. 
DOD, State, and USAID could not verify the accuracy or completeness of the contractor personnel data they provided to us, and officials acknowledged that they are likely undercounting the actual number of contractors working in Iraq and Afghanistan. Officials from the three agencies stated they lack the resources to verify the information being reported by their contractors, their primary source of data. Officials we met with indicated this is particularly true for contracts that involve work at remote sites, where security conditions make it difficult for U.S. government officials to regularly visit. However, the agency officials stated that personnel information on certain types of contractors is likely more reliable than others. In particular, officials from DOD, State, and USAID told us that the personnel numbers provided for their private security contractors are the most accurate and reliable. This is due in part to the increased scrutiny these contractors receive. Conversely, these same officials told us obtaining accurate information on local nationals is especially difficult. For example, one DOD official told us some local national contractors hesitate or simply refuse to submit information on their personnel because of safety concerns, among others. Further, the number of local nationals working on a particular contract on a daily basis can vary greatly depending on the type of work being performed. Despite the limitations we identified with the agencies’ use of surveys, the survey data were more complete than the data in SPOT for our review period. For example, as shown in table 4, in the second quarter fiscal year 2009 census, DOD reported 83,506 more contractor personnel in Iraq and Afghanistan than were entered into SPOT. An even smaller portion of USAID’s contractor personnel were entered into SPOT because the agency did not enter any personnel for any contracts in Afghanistan and was generally not entering Iraqis into the system. 
While the difference between SPOT and the surveys was smaller for State, there still were a number of contracts for which personnel information was available from State’s surveys but was not in SPOT. Although USAID, State, and DOD are required to collect data on the total number of contractor personnel who have been killed or wounded while working on contracts in Iraq and Afghanistan, only USAID and State tracked this information during our review period. USAID reported 59 contractor personnel were killed and 61 wounded during fiscal year 2008 and the first half of fiscal year 2009, while State reported that 5 of its contractors were killed and 98 more were wounded (see table 5). These data were based on reports submitted by contractors and then tracked by the agencies. In tracking this information, USAID and State noted in some cases, but not all, whether the death or injury was the result of a hostile action or an accident. However, due to the lack of other available and reliable sources, we could not independently verify whether USAID’s and State’s data were accurate. DOD officials informed us that their department continued to lack a system for tracking information in a manner that would allow the department to provide us with reliable data on killed or wounded contractor personnel. Although DOD did not maintain departmentwide data, some individual components within the department received reports on killed or wounded contractor personnel. However, the components did not consistently track these reports in a readily accessible or comprehensive manner. For example, officials with the Defense Contract Management Agency in Iraq and the Joint Contracting Command – Iraq/Afghanistan explained that they received reports when contractor personnel were killed or wounded, but this information was not recorded in a manner that made it readily retrievable. 
In addition, an Army Corps of Engineers official in Afghanistan told us that he tracked data on contractor illnesses and injuries resulting from workplace accidents but did not track data on contractor personnel killed or wounded as a result of hostile incidents. Absent DOD-wide data and as was the case for our prior report, DOD officials referred us to Defense Base Act (DBA) case data, which are maintained by the Department of Labor, as a means of obtaining information on killed and wounded contractor personnel. Labor’s DBA case data do not provide an appropriate basis for determining the number of contractor personnel killed or wounded in Iraq and Afghanistan while working on DOD, State, or USAID contracts. Under the NDAA for FY2008, Labor—unlike DOD, State, and USAID—has no responsibilities for tracking killed or wounded contractor personnel, and as such, its data were not designed to do so. Instead, Labor maintains data on DBA cases to fulfill its responsibilities for overseeing DBA claims by providing workers’ compensation protection to contractor personnel killed or injured while working on U.S. government contracts overseas, including those in Iraq and Afghanistan. After analyzing Labor’s DBA data and case files, we determined that DBA data are not a good proxy for determining the number of killed and wounded contractor personnel. This is, in part, because, as Labor officials explained, not all deaths and injuries reported under DBA would be regarded as contractors killed or wounded within the context of the NDAA for FY2008. Many nonhostile-related deaths and injuries, such as strains, sprains, and cases arising from auto accidents and other common occupational injuries, are compensable under DBA and are routinely reported to Labor. 
In addition, during our file reviews, we noted that many cases, particularly those submitted for injuries, were for medical conditions, such as pregnancy, cancer, and appendicitis, determined not to be related to the individual’s employment in Iraq or Afghanistan, and compensation claims for many of these cases were denied because the conditions were not work-related. While employers must notify Labor of all work-related contractor deaths and injuries resulting in time lost from work, one Labor official told us that some employers report all medical-related conditions, regardless of their severity and the nature of the incidents that caused them. In addition, some contractor deaths and injuries may not be reported to Labor as required. In particular, Labor officials have indicated that deaths and injuries to local and third-country contractors may be underreported. Additionally, because Labor does not track cases by agency or contract, DBA data cannot be analyzed to determine how many cases involved contractor personnel working specifically on DOD, State, or USAID contracts. As a result, the data may include cases for contractor personnel working for agencies other than DOD, State, and USAID. During our review of 150 DBA case files, we noted that the files did not always contain contract information and did not consistently identify the contracting agency. While we identified 103 case files for personnel working on DOD or State contracts, we did not identify any files for USAID contractor personnel. In addition, 1 case file specified an agency other than DOD, State, or USAID, while 46 files did not specify which agency the contractor worked for. Despite their limitations for determining the number of contractor personnel killed or wounded, Labor’s DBA case data provide insight into contractor personnel deaths and injuries in Iraq and Afghanistan.
According to Labor, there were 11,804 DBA cases, including 218 cases reporting contractor deaths, which resulted from incidents that occurred in Iraq and Afghanistan during fiscal year 2008 and the first half of fiscal year 2009. As shown in table 6, overall both the total number of DBA cases and the number of death cases decreased from fiscal year 2007 to fiscal year 2008, though the number of death cases in Afghanistan increased. Based on our review of 150 randomly selected DBA case files, we estimated that about 11 percent of the deaths and injuries reported to Labor for incidents that occurred in fiscal year 2008 resulted from hostile actions. Only 16 of the 150 files we reviewed were for cases related to hostile actions. Further, about one-third of the 11,586 DBA injury cases that occurred during our review period resulted in the affected contractor losing time from work. For example, we reviewed a case in which a contractor lost time from work after receiving multiple injuries when an ammunition pallet fell and wedged him against the side of a container, while another contractor suffered fractures and spinal injuries caused by an improvised explosive device and small arms fire. DOD, State, and USAID reported obligating nearly $39 billion on 84,719 contracts with performance in Iraq and Afghanistan during fiscal year 2008 and the first half of fiscal year 2009 (see fig. 1 for obligation data). DOD accounted for the vast majority of both the contracts and obligations. Approximately two-thirds of the total number of contracts and obligations were for performance in Iraq. Task orders were the most common contract vehicle that the agencies used during our review period and accounted for most of the obligations. A relatively small number of task orders accounted for a large portion of each agency’s obligations. For example, during our review period, DOD obligated more than $6.5 billion on two task orders that provide food, housing, and other services for U.S. 
military personnel, while more than a third of State’s obligations were on three task orders for police training and criminal justice programs in Iraq and Afghanistan. See appendix II for detailed information on each agency’s Iraq and Afghanistan contracts and obligations during our review period. The NDAA for FY2008 mandated that we identify the total number and value of all contracts, defined to include prime contracts, task or delivery orders, and subcontracts at any tier. While we obtained data on prime contracts and orders, DOD, State, and USAID were unable to provide data on the number or value of individual subcontracts. Contract files may contain information on subcontracts, but none of the agencies systematically tracked this information. The value of subcontracts is captured in the total value of the prime contract, but the agencies were unable to provide us with data on what portion of the total contract value went to subcontractors. Of the almost 85,000 contracts, including task and delivery orders, which were active during our review period, 97 percent were awarded during fiscal year 2008 and the first half of fiscal year 2009. However, more than a third of the funds obligated during our review period were on contracts originally awarded before fiscal year 2008. There were some variations between the agencies, as shown in figure 2. For example, most of USAID’s obligations were on contracts awarded prior to fiscal year 2008. In contrast, most of State’s active contracts were awarded during our period of review, but more than half the obligations were on a small portion of previously awarded contracts. DOD, State, and USAID reported that they used competitive procedures to award nearly all contracts awarded in our review period, with the exclusion of task and delivery orders.
Generally, contracts should be awarded on the basis of full and open competition. The agencies reported that most of their new contracts were awarded using full and open competition, but in some cases the agencies reported a contract as competed without indicating whether full and open or limited competition occurred. The agencies reported that approximately 3 percent of contracts awarded during our period of review, accounting for 29 percent of the obligations, were not competed (see fig. 3). Most of the 1,143 contracts reported to us as not competed had relatively small obligations during our review period. Approximately 90 percent of them had obligations of less than $100,000 and 80 percent had obligations of less than $25,000. In contrast, only 27 of the 1,143 contracts reported as not competed had over $1 million in obligations. These 27 contracts accounted for 99 percent of obligations for contracts that were not competed. The law authorizes agencies to use limited competition in certain situations. There may be circumstances under which full and open competition would be impracticable, such as when contracts need to be awarded quickly to respond to urgent and compelling needs or when there is only one source for the required product or service. In such cases, agencies may award contracts without providing for full and open competition (e.g., using limited competition or on a sole-source basis) if the proposed approach is appropriately justified, approved, and documented. Similarly, simplified acquisition procedures allow for limited competition when awarding certain contracts, and the use of these procedures is determined based on dollar thresholds contained in the Federal Acquisition Regulation (FAR). These dollar thresholds vary depending on where and for what purpose the contract was awarded and performed, its dollar value, and the contracting method used.
Additionally, contracts valued below the micropurchase threshold, which is $25,000 for contracts awarded and performed outside the United States in support of contingency operations, may be awarded without soliciting competitive quotations if the authorized purchase official considers the price to be reasonable. To determine the circumstances in which the agencies awarded contracts using other than full and open competition, we reviewed 79 DOD and State contracts awarded in fiscal year 2008 that had more than $100,000 in obligations during our review period and were reported as not competed or for which no competition information was provided. During our review, we discovered that 8 of these had actually been awarded after full and open competition and 14 had been awarded after a limited competition (i.e., they were not sole-source awards). Of the 71 files we reviewed that were not awarded under full and open competition, the most common justification for limiting competition or awarding a sole-source contract was that only one source could provide the good or service being acquired. In some of these cases, the incumbent contractor was awarded the new contract. For example, State awarded a sole-source contract for communication equipment in Iraq because only one company offered radios that were compatible with State’s existing communication network. The second most common reason for limiting competition was DOD’s enhanced authority to acquire products and services from Iraq and Afghanistan. Congress granted DOD this authority, which allows DOD to limit competition or provide preferences for products and services from Iraq or Afghanistan, to provide a stable source of jobs and employment in the two countries. According to DOD contracting officials in Iraq and Afghanistan, they are increasing their use of this authority.
However, officials in Afghanistan explained that in doing so they generally have some level of competition among local firms as opposed to making a sole-source award. They explained that limited competitions are being conducted not only to ensure better prices and products but also to help instill Western business practices and develop local business capacity. Competition requirements generally do not apply to the process of issuing task and delivery orders. However, where there were multiple awardees under the underlying contract, the FAR requires the contracting officer in most instances to provide each awardee a fair opportunity to be considered for each order exceeding $3,000. The agencies reported that 99 percent of the orders issued during our review period were competed. Congress has directed DOD, State, and USAID to track specific information regarding contractor personnel and contracts with performance in Iraq and Afghanistan. Such data are a starting point for providing decision makers with a clearer understanding of the extent to which they rely on contractors and for facilitating oversight to improve planning and better account for costs. Implementing SPOT, as agreed to in the MOU, has the potential of providing the agencies and Congress with data on contracts, contractor personnel, and those personnel who have been killed or wounded. However, the agencies’ implementation of SPOT currently falls short of that potential. Specifically, there is a lack of consistency as to which contractor personnel are entered into SPOT. Notwithstanding the MOU, some agency officials have questioned the need or feasibility of entering detailed information on individual contractor personnel into SPOT beyond the requirements of the NDAA for FY2008 or the MOU.
Furthermore, SPOT does not currently have the capability to accurately import contract data, and its report-generating capabilities limit the agencies’ access to information that has been entered, particularly with respect to killed or wounded contractor personnel. Until SPOT is fully implemented, the agencies will continue to rely on multiple alternative sources of data, which are also unreliable and incomplete, for information related to contractor personnel and contracts in Iraq and Afghanistan. As a result, the agencies and Congress will continue to be without reliable information on contracts and contractor personnel to help improve oversight and decision making at a critical juncture as agencies draw down their efforts in Iraq and expand them in Afghanistan. To ensure that the agencies and Congress have reliable information on contracts and contractor personnel in Iraq and Afghanistan, we recommend that the Secretaries of Defense and State and the USAID Administrator jointly develop and execute a plan with associated time frames for their continued implementation of the NDAA for FY2008 requirements, specifically ensuring that the agencies’ criteria for entering contracts and contractor personnel into SPOT are consistent with the NDAA for FY2008 and with the agencies’ respective information needs for overseeing contracts and contractor personnel; establishing uniform requirements on how contract numbers are to be entered into SPOT so that contract information can accurately be pulled from FPDS-NG as agreed to in the MOU; and revising SPOT’s reporting capabilities to ensure that they fulfill statutory requirements and agency information needs, such as those related to contractor personnel killed or wounded. In developing and executing this plan, the agencies may need to revisit their MOU to ensure consistency between the plan and what has previously been agreed to in the MOU. We requested comments on a draft of this report from DOD, State, and USAID.
In its written comments, DOD did not agree with our recommendation that the agencies jointly develop and execute a plan for continued implementation of the NDAA for FY2008. According to DOD, the current MOU, existing regulations, and ongoing coordination among the agencies should be sufficient to meet legislative mandates. DOD noted that additional direction beyond the implementation of the MOU may require statutory action. DOD further explained that it is planning upgrades to SPOT that may address some of the issues we identified, particularly related to the entry of contract numbers and reporting features. State, in its written comments, also disagreed with the need for the agencies to develop and execute a plan to address the issues we identified. Nevertheless, State acknowledged that the agencies need to continue meeting to review their progress in complying with the NDAA for FY2008, revisit the MOU, address issues to ensure consistency in meeting the MOU criteria, and discuss SPOT’s future reporting capability. Similarly, USAID’s written comments did not address our overarching recommendation for the agencies to develop and implement a plan or indicate whether USAID agreed with the specific issues to be included in it, but USAID noted that it plans to continue meeting regularly with DOD and State officials concerning the NDAA for FY2008 and the existing MOU. We agree that coordination among the three agencies is critical, but given the findings in this report, coordination alone is not sufficient. Instead, the agencies need to take action to resolve the issues we identified in their implementation of SPOT. In their comments, the agencies recognized the importance of having reliable information on contracts and contractor personnel and acknowledged that corrective measures are needed. However, the agencies did not explain in their comments how they plan to translate their coordination efforts and upgrades into actions to resolve the issues we identified.
By jointly developing and executing a plan with time frames, the three agencies can identify the concrete steps they need to take and assess their progress in ensuring that the data in SPOT are sufficiently reliable to fulfill the requirements of the NDAA for FY2008 and their respective agency needs. Further, to the extent that the steps necessary to implement the MOU and the recommended plan are consistent with the NDAA for FY2008, no additional statutory action would be required. DOD’s, State’s, and USAID’s comments, along with our supplemental responses, are reprinted in appendixes III, IV, and V, respectively. Additionally, we provided a draft of this report to Labor for its review and comment. Labor provided technical comments that we incorporated into the final report as appropriate. We are sending copies of this report to the Secretary of Defense, the Secretary of State, the Administrator of the U.S. Agency for International Development, the Secretary of Labor, and interested congressional committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Section 863 of the National Defense Authorization Act for Fiscal Year 2008 directs GAO to review and report on matters relating to Department of Defense (DOD), Department of State, and U.S. Agency for International Development (USAID) contracts in Iraq and Afghanistan.
In response to this mandate, we analyzed agency-reported data for fiscal year 2008 and the first half of fiscal year 2009 regarding (1) the status of the agencies’ implementation of the Synchronized Predeployment and Operational Tracker (SPOT) database, (2) the number of contractor personnel, including those performing security functions, working on DOD, State, and USAID contracts with performance in Iraq and Afghanistan, (3) the number of personnel killed or wounded, and (4) the number and value of contracts that were active and awarded during our period of review and the extent of competition for new contract awards. To address our first objective, we reviewed DOD, State, and USAID’s July 2008 MOU relating to contracting in Iraq and Afghanistan and interviewed DOD, State, and USAID officials responsible for implementing SPOT regarding the current and planned capabilities of the system. We also interviewed agency officials who use SPOT, including officials in Iraq and Afghanistan, to determine the criteria the agencies use to determine what information is entered into SPOT. We reviewed agency guidance and policy documents regarding the use of SPOT and took training courses designed for government and contractor personnel who expect to use the system. We then compared the information we collected on the use and capabilities of SPOT to the requirements identified in the agencies’ MOU to determine the extent to which SPOT fulfilled the terms of the MOU. To address our second objective, we obtained data from DOD, State, and USAID on the number of U.S. nationals, third-country nationals, and local nationals working on contracts with performance in Iraq or Afghanistan in fiscal year 2008 and/or the first half of fiscal year 2009. These data included individuals reported to be performing security functions. DOD reported data from the U.S. Central Command’s quarterly census and SPOT for both fiscal year 2008 and the first half of fiscal year 2009. 
Of the two sources, DOD officials said that the quarterly census was the more complete source of information on contractor personnel. Given that and the limitations we identified with SPOT, we used the quarterly census data to develop our DOD-related findings for this objective. State reported data gathered from periodic surveys of its contractors for fiscal year 2008. For the first half of fiscal year 2009, State reported contractor personnel information gathered from SPOT as well as through surveys. USAID reported data gathered from periodic surveys of its contractors for fiscal year 2008 and the first half of fiscal year 2009. USAID also reported SPOT data for some contracts with performance in Iraq for the first half of fiscal year 2009. We compared these data to the list of contracts we compiled to address our objective on the number and value of agency contracts. Furthermore, we interviewed agency officials regarding their methods for collecting data to determine the number of contractor personnel, including those providing security functions, in Iraq and Afghanistan. We also assessed the completeness of the SPOT data that we received from each agency by comparing them to data from other sources, such as the agency surveys. Based on our analyses and discussions with agency officials, we concluded that the agency-reported data should not be used to draw conclusions about the actual number of contractor personnel in Iraq or Afghanistan for any given time period or trends in the number of contractor personnel over time. However, we are presenting the reported data along with their limitations as they establish a minimum number of contractor personnel during our period of review.
USAID provided us with information on deaths and injuries it had compiled from its implementing partners, including contractors. Similarly, State provided data on contractors who were killed or wounded based on reports from its contractors, which were compiled by department personnel. Due to the lack of other available and reliable data sources, we could not independently verify whether USAID’s and State’s data were accurate. Nevertheless, we are presenting them as they provide insight into the number of contractor personnel who were killed or wounded during our period of review. After informing us that they did not have a reliable system for tracking killed or wounded personnel, DOD officials referred us to the Department of Labor’s data on Defense Base Act (DBA) cases. We analyzed data from Labor on DBA cases arising from incidents that occurred in Iraq and Afghanistan in fiscal year 2008 or the first half of fiscal year 2009. We obtained similar DBA data from Labor for our previous report, for which we determined that the data were sufficiently reliable, when presented with appropriate caveats, for providing insight into the number of contractor personnel killed or wounded. As a result, we did not reassess the reliability of the data we received for this report. We also selected a random two-stage cluster sample of 150 DBA case files from a population of 2,500 case files submitted to Labor’s 10 district offices for incidents that occurred during fiscal year 2008 and resulted in the affected contractor losing time from work. Labor provided us with DBA case data on all incidents that occurred in fiscal year 2008 through February 26, 2009. Because there may be a lag between when an incident occurred and when Labor was notified, we limited our sample to cases arising from incidents that occurred in fiscal year 2008. As a result, the findings from our file review are generalizable only to fiscal year 2008 cases.
Labor provided us with a second data set for fiscal year 2008 and the first half of fiscal year 2009 as of July 9, 2009, which included cases that were in the first data set. The second data set included an additional 367 cases resulting from incidents that occurred in fiscal year 2008 that were not in the population from which we drew our sample due to a lag in when Labor was notified of the incidents. Because these additional cases were within the scope of our review, we included them in the total number of DBA cases presented in objective three; however, these cases were not included in the population of cases from which we drew our random sample. The first stage of our sample selection consisted of 5 clusters, selected randomly with replacement, which came from 4 of the 10 Labor district offices. In the second stage, we randomly selected 30 files from each cluster. Thus, our final sample consisted of 150 DBA case files. We reviewed these files to determine the circumstances of the incident resulting in the death or injury, whether the incident was hostile or nonhostile, and the severity of the contractor’s injury, where applicable. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that the confidence interval in this report will include the true value in the study population.
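The two-stage design described above (5 clusters drawn with replacement from the district offices, then 30 case files drawn from each selected cluster) can be sketched in code. This is a simplified illustration under assumed data, not GAO's actual sampling procedure; the district names and case identifiers are hypothetical.

```python
import random

def two_stage_cluster_sample(clusters, num_clusters=5, files_per_cluster=30, seed=1):
    """Illustrative two-stage cluster sample: draw clusters randomly with
    replacement, then draw files without replacement within each cluster."""
    rng = random.Random(seed)
    # Stage 1: select clusters randomly, with replacement.
    chosen = rng.choices(list(clusters), k=num_clusters)
    sample = []
    for name in chosen:
        # Stage 2: select files without replacement within the cluster.
        sample.extend(rng.sample(clusters[name], files_per_cluster))
    return sample

# Hypothetical population: 10 district offices, 250 case files each.
population = {f"district_{i}": [f"case_{i}_{j}" for j in range(250)]
              for i in range(10)}
sample = two_stage_cluster_sample(population)
print(len(sample))  # 150
```

Because clusters are drawn with replacement, a district can appear more than once in stage 1, which is why the resulting estimates require design-based variance formulas rather than simple random-sample formulas.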
To address our fourth objective, we obtained data from DOD, State, and USAID on the number of active and awarded contracts with performance in Iraq and Afghanistan during fiscal year 2008 and the first half of fiscal year 2009, the amount of funds obligated on those contracts during our review period, and the extent to which new contracts were competitively awarded. We also interviewed agency officials to discuss the reported contract data. The agencies provided data from the Federal Procurement Data System – Next Generation (FPDS-NG), agency-specific databases, and manually compiled lists of obligations and deobligations. We determined that the data each agency reported were sufficiently reliable to determine the minimum number of active and awarded contracts and obligation amounts, as well as the extent of competition, based on prior reliability assessments, interviews with agency officials, and verification of some reported data compared to information in contract files. We took steps to standardize the agency-reported data and removed duplicates and contracts that did not have obligations or deobligations during our review period. DOD provided us with 32 separate data sets, State provided 7, and USAID provided 9. The reported data included multiple numbering conventions for each agency. We reformatted each data set and combined them to create a single, uniform list of contracts, orders, and modifications for each agency. We excluded the base contracts under which task and delivery orders were issued. This was done, in part, because such contracts do not have obligations associated with them as the obligations are incurred with the issuance of each order. We also excluded grants, cooperative agreements, and other contract vehicles such as leases, sales contracts, and notices of intent to purchase as these instruments do not include performance by contractor personnel in Iraq or Afghanistan.
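The consolidation steps described above (reformatting each agency's data sets, combining them into one uniform list, and removing duplicates and records without obligation activity) might look roughly like the following sketch. The field names, normalization rule, and records are hypothetical and are not the agencies' actual data formats.

```python
def consolidate(datasets):
    """Combine reported data sets into one uniform contract list,
    dropping duplicates and records with no obligation activity."""
    seen = set()
    combined = []
    for records in datasets:
        for rec in records:
            # Standardize the contract number format (illustrative rule).
            key = rec["contract_no"].replace("-", "").upper()
            if key in seen:
                continue  # duplicate reported in another data set
            if rec.get("obligations", 0) == 0:
                continue  # no obligations or deobligations in the period
            seen.add(key)
            combined.append({"contract_no": key,
                             "obligations": rec["obligations"]})
    return combined

# Hypothetical data sets from two reporting systems.
ds1 = [{"contract_no": "W91-001", "obligations": 500_000},
       {"contract_no": "W91-002", "obligations": 0}]
ds2 = [{"contract_no": "w91001", "obligations": 500_000},  # duplicate of W91-001
       {"contract_no": "S-AQM-003", "obligations": 250_000}]
print(consolidate([ds1, ds2]))
```

In this sketch the duplicate and the zero-obligation record are dropped, leaving two distinct contracts; a real consolidation would also have to reconcile the agencies' differing numbering conventions before keys can be compared.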
For all contracts within our scope, we summed the reported obligations for each contract and order for fiscal year 2008 and the first half of fiscal year 2009. Some contracts had obligations in both fiscal year 2008 and the first half of fiscal year 2009, so the number of active contracts for the entire 18-month period was lower than the combined number of contracts that were active in each fiscal year. We reviewed contract files to identify the justification cited by the agencies for not awarding the contract using full and open competition for a subset of DOD and State contracts awarded in fiscal year 2008 that were reported as not competed and that had total obligations during our review period greater than $100,000. We did not review the files for all contracts that met our criteria, in part, due to the location of some of the files. For example, while we reviewed files located in Baghdad, Camp Victory, Kabul, and Bagram Air Base, we did not review files for contracts located in other areas of Iraq and Afghanistan. In total, we reviewed information on 68 DOD contracts and 11 State contracts. At the time of our contract file reviews, USAID had not reported any new contracts with obligations over $100,000 as not competed. After our file reviews were completed, USAID provided us with additional data, including data on two contracts with obligations over $100,000 that were not awarded competitively. Due to when we received these data, we did not review these two contracts. However, we reviewed 12 other USAID contracts to verify the contract information reported to us. We conducted this performance audit from November 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
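The per-contract obligation totals described above, in which obligations (and deobligations, recorded as negative amounts) are summed for each contract across fiscal year 2008 and the first half of fiscal year 2009, could be computed along these lines. The contract numbers and dollar amounts are invented for illustration.

```python
from collections import defaultdict

def total_obligations(obligation_records):
    """Sum obligations per contract and track the periods each was active in."""
    totals = defaultdict(float)
    periods = defaultdict(set)
    for contract_no, period, amount in obligation_records:
        totals[contract_no] += amount       # deobligations enter as negatives
        periods[contract_no].add(period)
    return totals, periods

records = [
    ("C-100", "FY2008", 1_000_000),
    ("C-100", "FY2009H1", 250_000),   # same contract, active in both periods
    ("C-200", "FY2008", -50_000),     # a deobligation
]
totals, periods = total_obligations(records)
print(totals["C-100"])  # 1250000.0
```

Note that counting active contracts per period and adding the counts (here 2 + 1 = 3) overstates the distinct contracts for the full 18 months (here 2), which is why the combined fiscal-year counts exceed the period total reported above.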
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 7 shows all DOD contracts, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2008 and the first half of fiscal year 2009. For last year’s review, DOD reported obligating $18,996 million on 37,559 contracts in fiscal year 2007. Table 8 provides information on the number of contracts awarded by DOD and associated obligations made during our review period. The majority of DOD’s active contracts were awarded during our review period and 70 percent of DOD’s obligations were made on the new contract awards. Table 9 shows competition information for the DOD contracts (excluding task and delivery orders) that were awarded during our review period. DOD reported that 97 percent of its contracts were competed, including 33,143 (93 percent) that were awarded using full and open competition. For 74 contracts, DOD either provided no competition information or what was provided was not sufficient to determine whether the contract was competed. As shown in table 10, most of the DOD contracts reported as awarded without competition had relatively small obligations during our review period. Table 11 shows all State contracts, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2008 and the first half of fiscal year 2009. For last year’s review, State reported obligating $1,550.4 million on 773 contracts in fiscal year 2007. Table 12 provides information on the number of contracts awarded by State and associated obligations made during our review period. The majority of State’s active contracts were awarded during our review period and 46 percent of State’s obligations were made on the new contract awards. 
Table 13 shows competition information for the State contracts (excluding task and delivery orders) that were awarded during our review period. State reported that 70 percent of its contracts were competed, including 358 (47 percent) that were awarded using full and open competition. For 10 contracts, State either provided no competition information or what was provided was not sufficient to determine whether the contract was competed. As shown in table 14, most of the State contracts reported as awarded without competition had relatively small obligations during our review period. Table 15 shows all USAID contracts, along with the associated obligations, reported to us as active in Iraq or Afghanistan during fiscal year 2008 and the first half of fiscal year 2009. For last year’s review, USAID reported obligating $1,194.8 million on 190 contracts in fiscal year 2007. Table 16 provides information on the number of contracts awarded and associated obligations made during our review period. The majority of USAID active contracts were awarded prior to our review period and obligations on these previously awarded contracts accounted for nearly 79 percent of USAID’s obligations during fiscal year 2008 and the first half of fiscal year 2009. Table 17 shows competition information for the USAID contracts (excluding task and delivery orders) that were awarded during our review period. USAID reported that 90 percent of its contracts were competed, including 126 (82 percent) that were awarded using full and open competition. For 3 contracts, USAID either provided no competition information or what was provided was not sufficient to determine whether the contract was competed. As shown in table 18, there were only 13 contracts that USAID reported as awarded without competition and none had obligations greater than $1 million during our review period. The following are GAO’s supplemental comments on the Department of Defense's letter dated September 28, 2009. 1. 
DOD cites the number of contractor personnel in SPOT for the entire CENTCOM area of responsibility, which extends beyond Iraq and Afghanistan. Consistent with our mandate, we report 117,301 DOD contractor personnel identified in SPOT as being in Iraq or Afghanistan as of March 31, 2009. However, we did not use SPOT as our primary data source for contractor personnel data. We found that the quarterly census was a more comprehensive source—containing approximately 84,000 personnel more than SPOT as of March 31, 2009, for Iraq and Afghanistan. 2. In signing the MOU, DOD agreed to track contractor personnel in Iraq or Afghanistan if their contract exceeds 14 days in duration or $100,000 in value. As described in its comments, however, DOD’s regulations contain different thresholds on which contractors should be entered into SPOT. In practice, we found that the need for an LOA—rather than the thresholds in the MOU or DOD’s regulations—served as the primary determinant of whether a contractor was entered in SPOT. These variations reinforce our finding and recommendation that the agencies ensure they have consistent criteria—both in policy and practice—on which contractor personnel are entered into SPOT. 3. DOD’s comments recognize the need to develop a standardized contract field in SPOT. However, any effort to create a standardized field needs to involve DOD, State, and USAID to ensure consistency with their contract numbering systems and a common understanding of how data must be entered into the system. Further, each agency must ensure that the way contract and task order numbers are entered into SPOT is identical to how those numbers are entered into FPDS-NG. 4. Our report recognizes that SPOT was upgraded in January 2009 to track contractor personnel who have been killed or wounded.
As discussed in the report, however, this upgrade does not provide agencies with the capability to readily generate reports on the total number of contractor personnel killed or wounded within a given timeframe; instead, the current capability is limited to generating a report of personnel identified as killed or wounded on the day the report is generated. DOD does not specify in its comments whether or how the planned November 2009 upgrade would address this reporting limitation. Also, it is not clear from DOD’s comments whether this planned upgrade will apply to both the unclassified and classified versions of SPOT. State’s comments suggest that based on information it received from DOD, the improved reporting features will be limited to the classified version. Department of State Comments on GAO Draft Report CONTINGENCY CONTRACTING: DOD, State, and USAID Continue to Face Challenges in Tracking Contractor Personnel and Contracts in Iraq and Afghanistan (GAO-10-01, GAO Code 120790) The Department of State appreciates the opportunity to review the Government Accountability Office (GAO) draft report titled, “Contingency Contracting: DOD, State and USAID Continue to Face Challenges in Tracking Contractor Personnel and Contracts in Iraq and Afghanistan.” Recommendation: To ensure that the agencies and Congress have reliable information on contracts and contractor personnel in Iraq and Afghanistan, we recommend that the Secretaries of Defense and State and the Administrator of USAID jointly develop and execute a plan with associated timeframes for their continued implementation of the NDAA for FY2008 requirements, specifically Ensuring that the agencies’ criteria for entering contracts and contractor personnel into the Synchronized Predeployment and Operational Tracker (SPOT) are consistent with the National Defense Authorization Act (NDAA) for FY2008 and with the agencies’ respective information needs for overseeing contracts and contractor personnel; Establishing
uniform requirements on how contract numbers are to be entered into SPOT so that contract information can accurately be pulled from FPDS-NG as agreed to in the MOU; and Revising SPOT’s reporting capabilities to ensure that they fulfill statutory requirements and agency information needs, such as those related to contractor personnel killed or wounded. In developing and executing this plan, the agencies may need to revisit their MOU to ensure consistency between the plan and what has previously been agreed to in the MOU. Response: The Bureau of Administration (A) has the lead on agency implementation of SPOT and the NDAA. We acknowledge the importance of reliable information on contracts and contractor personnel in Iraq and Afghanistan reported jointly with the Secretary of Defense (DOD) and the Administrator of USAID to implement NDAA FY2008 requirements. We agree that the agencies need to continue to meet to review progress and intent of the MOU to comply with NDAA FY2008, but do not agree with the recommendation that a new plan needs to be developed. We do agree that the current MOU needs to be revisited as well as some issues to ensure consistency meeting the criteria it already contains as specified in NDAA 2008, section 861. “regarding contractor personnel in Iraq, DOD, State, and USAID officials stated that the primary factor for deciding to enter contractor personnel into SPOT was whether a contractor needed a SPOT-generated letter of authorization (LOA). Contractor personnel need SPOT-generated LOAs to, among other things, enter Iraq, receive military identification cards, travel on U.S. military aircraft, or, for security contractors, receive approval to carry weapons. However, not all contractor personnel, particularly local nationals, in Iraq need LOAs and agency officials informed us that such personnel were not being entered into SPOT. 
In contrast, DOD officials informed us that individuals needing LOAs were entered into SPOT even if their contracts did not meet the MOU’s 14 day or $100,000 thresholds.” State personnel advised the GAO during an interview that company administrators were told verbally and in writing to enter all United States citizens, Third Country Nationals, and Local Nationals into SPOT. Due to security concerns about entering data on Local Nationals, company administrators were given a blind identity scheme to aid in accounting for the information entered. We continue to urge that actual information be entered on all Local Nationals because SPOT would be used for NDAA 1248 repatriation requests. Also, the MOU signed by the three agencies stipulates that contracts under the simplified acquisition threshold of $100,000 and 14 working days would not be entered into SPOT (Section VII B). State personnel advised the GAO during the interview that the Department lacked the resources to enter every acquisition into SPOT and supported the higher threshold. However, there may be confusion because an earlier Section II A of the MOU only states “longer than 14 days”. We agree that the three agencies need to discuss this issue to determine one standard. “while contract numbers are the unique identifiers that will be used to match records in SPOT to those in FPDS-NG, SPOT users are not required to enter contract numbers in a standardized manner. In our review of SPOT data, we identified that at least 12 percent of the contracts had invalid contract numbers and, therefore, could not be matched to records in FPDS-NG.” When implementing SPOT, State used the configuration guidance, which complies with FPDS-NG, given by DOD to enter all contract numbers. The user guide posted on the Department’s intranet was shared with GAO; the business rules in it state the configuration to be used when entering a contract number into SPOT.
We contacted DOD on September 9, 2009, and they informed us they are already working on a standardized configuration. Recently, DOD conducted user acceptance testing for implementation of enhanced reporting in SPOT, but we were told it would only be on its secure network. However, all the information input to date into SPOT is on an unclassified network. The agencies need to discuss future reporting capability for the unclassified version of SPOT. The following are GAO’s supplemental comments on the Department of State’s letter dated September 24, 2009. 1. Notwithstanding State’s guidance to contractors, we found that not all contractor personnel are being entered into SPOT as required. In practice, we found that the need for an LOA is the primary determinant of whether or not contractor personnel are entered into SPOT. For example, a State contracting officer informed us that Iraqis working on his contracts are not in SPOT because they do not need LOAs, which is not consistent with State’s guidance, the MOU criteria, or the NDAA for FY2008. 2. As reflected in our recommendation, we agree that the agencies need to determine a single standard on which contracts should be entered into SPOT. This is not only due to State’s observation regarding inconsistencies in the MOU, but also due to the inconsistencies we found between the MOU and the NDAA for FY2008 and the varying criteria being used by the agencies. Until there is a single agreed-upon standard, both in guidance and in practice, the agencies will continue to track data differently and, as a result, the data for all three agencies will be incomplete. 3. Our finding pertained to how data are actually being entered into SPOT, which as we report allows users to enter invalid contract numbers and does not require the entry of task order numbers. For example, we found that none of State’s task orders in SPOT provided both the contract and task order numbers.
If such data entry issues are not resolved in the near future, then the planned connection with FPDS-NG may present challenges and prevent contract data from being accurately imported into SPOT. The following are GAO’s supplemental comments on USAID’s letter dated September 22, 2009. 1. While building off the lessons learned in Iraq has merit, we note that USAID does not provide a time frame for when it will begin requiring contractors in Afghanistan to use SPOT to fulfill the requirements of the NDAA for FY2008 and what it agreed to in the MOU. 2. Our report explains that the need for the LOA—as opposed to what was agreed to in the MOU or contained in the NDAA for FY2008—has become the primary factor for determining which contractor personnel are entered into SPOT. USAID’s comment that it will explore SPOT’s functionality to track personnel who do not need LOAs is consistent with our recommendation that the agencies work together to ensure that the requirements of the NDAA for FY2008 and their respective information needs are fulfilled. 3. While USAID has a standard contract numbering system, the issue we identified pertains to how SPOT allows contract and task order numbers to be entered inconsistently. The agencies need to work together to ensure that contract and task order numbers are entered into SPOT so that data can be accurately pulled from FPDS-NG. 4. While DOD is responsible for maintaining and upgrading SPOT, the three agencies have a shared responsibility to ensure that the database they agreed to use in their MOU fulfills the requirements of the NDAA for FY2008. Rather than deferring to DOD as the system owner to manage SPOT’s development, USAID should work with the other agencies to identify and agree on their information and reporting needs and ensure that the necessary upgrades are made to SPOT. John Hutton, (202) 512-4841 or [email protected]. In addition to the contact above, Johana R. Ayers, Assistant Director; Noah Bleicher; E.
Brandon Booth; Justin Fisher; Art James, Jr.; Christopher Kunitz; Jean McSween; Alise Nacson; Jason Pogacnik; Karen Thornton; Gabriele Tonsil; and Robert Swierczek made key contributions to this report.

The Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID) have relied extensively on contractors to provide a range of services in Iraq and Afghanistan, but as GAO has previously reported, the agencies have faced challenges in obtaining sufficient information to plan and manage their use of contractors. As directed by the National Defense Authorization Act for Fiscal Year (FY) 2008, GAO analyzed DOD, State, and USAID data for Iraq and Afghanistan for FY 2008 and the first half of FY 2009 on the (1) status of agency efforts to track information on contracts and contractor personnel; (2) number of contractor personnel; (3) number of killed and wounded contractors; and (4) number and value of contracts and extent to which they were awarded competitively. GAO reviewed selected contracts and compared personnel data to other available sources to assess the reliability of agency-reported data. In response to a statutory requirement to increase contractor oversight, DOD, State, and USAID agreed to use the Synchronized Predeployment and Operational Tracker (SPOT) system to track information on contracts and contractor personnel in Iraq and Afghanistan. With the exception of USAID in Afghanistan, the agencies are in the process of implementing the system and require contractor personnel in both countries to be entered into SPOT. However, the agencies use differing criteria to decide which personnel are entered, resulting in some personnel not being entered into the system as required. Some agency officials also questioned the need to track detailed information on all contractor personnel, particularly local nationals.
Further, SPOT currently lacks the capability to track all required data elements, such as contract dollar value and the number of personnel killed and wounded. As a result, the agencies rely on other sources for contract and contractor personnel information, such as periodic surveys of contractors. DOD, State, and USAID reported nearly 226,500 contractor personnel, including about 28,000 performing security functions, in Iraq and Afghanistan, as of the second quarter of FY 2009. However, due to their limitations, the reported data should not be used to identify trends or draw conclusions about contractor personnel numbers. Specifically, we found that the data reported by the three agencies were incomplete. For example, in one quarterly contractor survey DOD did not include 26,000 personnel in Afghanistan, and USAID did not provide personnel data for a $91 million contract. The agencies depend on contractors to report personnel numbers and acknowledge that they cannot validate the reported information. USAID and State reported that 64 of their contractors had been killed and 159 wounded in Iraq and Afghanistan during our review period. DOD officials told us they continue to lack a system to reliably track killed or wounded contractor personnel and referred us to the Department of Labor's Defense Base Act (DBA) case data for this information. However, because DBA is a workers' compensation program, Labor's data include cases such as those resulting from occupational injuries and do not provide an appropriate basis for determining how many contractor personnel were killed or wounded while working on DOD, State, or USAID contracts in Iraq or Afghanistan. Nevertheless, the data provide insights into contractor casualties. According to Labor, 11,804 DBA cases were filed for contractors killed or injured in Iraq and Afghanistan during our review period, including 218 deaths.
Based on our review of 150 randomly selected cases, we estimate that 11 percent of all FY 2008 DBA cases for the two countries resulted from hostile actions. DOD, State, and USAID reported obligating $38.6 billion on nearly 85,000 contracts in Iraq and Afghanistan during our review period. DOD accounted for more than 90 percent of the contracts and obligations. The agencies reported that 97 percent of the contracts awarded during our review period, accounting for nearly 71 percent of obligations, were competed.
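The sampling estimate above, 11 percent of FY 2008 DBA cases attributed to hostile actions based on a review of 150 randomly selected cases, can be illustrated with a standard normal-approximation (Wald) confidence interval for a proportion. This is a generic statistical sketch, not GAO's actual estimation methodology; the count of 17 hostile-action cases (17/150, about 11.3 percent) and the 95-percent confidence level are illustrative assumptions.

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a sample proportion.

    z = 1.96 corresponds to a 95-percent confidence level.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative only: 17 of 150 sampled cases classified as hostile-action cases.
p, lo, hi = proportion_ci(17, 150)
print(f"point estimate: {p:.1%}, 95% CI: [{lo:.1%}, {hi:.1%}]")
# point estimate: 11.3%, 95% CI: [6.3%, 16.4%]
```

A sample of 150 cases yields a fairly wide interval around the 11-percent point estimate, which is why such estimates are typically reported with sampling error in mind.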
Health insurance helps children obtain health care. Children without health insurance are less likely to have routine doctor visits, seek care for injuries, and have a regular source of medical care. Their families are more likely to take them to a clinic or emergency room (ER) rather than a private physician or health maintenance organization (HMO). Children without health insurance are also less likely to be appropriately immunized—an important step in preventing childhood illnesses. During the 1980s, employment-based health insurance—the most common source of health coverage for Americans—decreased. By 1993, more than 39 million Americans lacked any type of health insurance. Almost one-quarter of these people were children, despite the relative affordability of providing insurance for children. Uninsured children are generally children of lower-income workers. Lower-income workers are less likely than higher-income workers to have health insurance for their families because they are less likely to work for a firm that offers insurance for their families. Even if such insurance is offered, it may be too costly for lower-income workers to purchase. In 1993, 61 percent of uninsured children were in families with at least one parent who worked full time for the entire year the child was uninsured. About 57 percent of uninsured children had family income at or below 150 percent of the federal poverty level. Recognizing the need to provide insurance for children, the federal government and the states expanded children’s eligibility for Medicaid, a jointly funded federal/state entitlement program. Beginning in 1986, the Congress passed a series of Medicaid-expansion laws that required states to provide coverage to certain children and pregnant women and gave states the option to expand eligibility further. Many states opted to use this approach instead of funding their own programs, because expanding Medicaid allowed them to get matching federal funds. 
As of April 1995, 37 states and the District of Columbia had expanded coverage for infants or children beyond federal requirements. In addition to these expansions, between 1991 and August 1995, five states implemented Medicaid demonstration waivers, some of which included coverage expansions to some uninsured children. Between 1989 and 1993, Medicaid expanded from covering 14 percent of U.S. children (8.9 million) to 20 percent (13.7 million). Nevertheless, many uninsured children remain ineligible for Medicaid. Beginning in 1985, states and private entities began to fund programs that provided insurance for children who were ineligible for or not enrolled in Medicaid and did not have private or comparable insurance coverage. The programs we visited varied in several respects, but all were limited in how many children they could cover by the size of their budgets, which depended on their funding sources. Every state had substantially more uninsured children than children enrolled in one of these programs. Almost all of these programs have had to restrict enrollment and develop waiting lists of children who could not enroll because of insufficient funding. To target their funding, most programs restricted enrollment to low-income, uninsured children not enrolled in Medicaid. In 1995, 31 states had either a publicly or privately funded program that provided health insurance coverage for children. (See app. I for a list of these states.) Fourteen states had publicly funded programs that provided insurance for children, which generally relied heavily on state funding. In 1994, these programs enrolled from 39 to 98,538 children and had budgets ranging from about $240,000 to about $71.5 million. In addition to state-level efforts, the private sector developed voluntary insurance programs supported through philanthropic funding. The best known of these are the Caring Programs, sponsored by 24 Blue Cross/Blue Shield organizations in 22 states.
The Caring Programs, which served more than 41,000 children in 1994, ranged in size from 400 to almost 6,000 enrolled children and had budgets from $100,000 to $4.3 million. The four state- and two privately funded programs that we visited varied in enrollments and funding sources. They provided insurance coverage to between 5,532 and 104,248 children under set yearly budgets. Much of the state programs’ funding came from state general revenues, cigarette or tobacco taxes, or health care provider taxes; counties; and foundations and other private-sector entities. The private programs each received funding from Blue Cross/Blue Shield and from private individuals and organizations. The programs’ costs, covered services, and premium subsidies also varied. Moreover, four of the programs operated statewide, but Florida Healthy Kids and the Western Pennsylvania Caring Program for Children operated only in certain counties. (See table 1.) Unlike state Medicaid programs, which operate as open-ended federal/state entitlements, all the programs we reviewed operated within limited and fixed budgets. These budgets did not allow them to cover most of the uninsured children in their states. The private program budgets were limited by the amount that could be raised by corporate donors, such as Blue Cross/Blue Shield, and individual donors. The state-funded programs had larger budgets, but they, too, were limited by the amount of funding states were willing to devote to insuring children. All the states in which these programs operated had more uninsured children than children enrolled in the programs. For example, New York’s Child Health Plus Program represented a substantial investment for the state in children’s health coverage—$55 million—and it had the largest enrollment: 104,248. But in 1993, New York State had almost half a million uninsured children. Other programs could only cover a small fraction of their uninsured.
For example, Alabama had 156,000 uninsured children in 1993, and its Caring Program covered 5,922 in 1995—only about 3 percent. MinnesotaCare had the highest ratio of enrolled children in 1995 to uninsured children in 1993: 44,689 to 76,517, or 58 percent. Lack of funding forced all the programs we visited (except Minnesota’s) to restrict enrollment at times and to relegate children who applied for the program to waiting lists. According to child advocates and officials of these programs, restricting enrollment and developing waiting lists undermine program credibility. In addition, Florida has been unable to start its Healthy Kids Program in many interested counties because the program has lacked funding. The programs we visited limited program eligibility to cover children most in need of insurance. Generally, they tried to cover low-income, uninsured children not enrolled in Medicaid in order not to duplicate existing public coverage. Four programs limited eligibility to families on the basis of their income, although each program’s income eligibility differed. All six were designed to complement Medicaid coverage for children, since none enrolled children who had Medicaid coverage and most tried to steer possibly eligible children to Medicaid first. Four programs required children to be uninsured, although two allowed children with limited and noncomparable coverage to enroll. (See fig. 1.) Notes to figure 1: The income eligibility thresholds shown in the figure include 275 percent, 185 percent, and 235 percent of the federal poverty level (FPL). All eligible children in a family must be enrolled. Enrollment in other health insurance is allowed as long as the coverage is not equivalent to the coverage offered under the Florida Healthy Kids Program or New York’s Child Health Plus Program. Children must also be enrolled in the National School Lunch Program. Children whose family incomes are between 150 and 275 percent of FPL cannot have had insurance for the 4 months before applying for MinnesotaCare and cannot have had access to employer-paid insurance for the 18 months before applying. The maximum eligible age will increase by 1 year each year on October 1, until the maximum age of 17 is reached in 1996. Two programs—New York’s Child Health Plus and Florida’s Healthy Kids—covered uninsured children at any income level as long as their families paid the full premium costs. These two programs also extended coverage to insured children if their health insurance was not comparable to what the programs offered. In western Pennsylvania, state- and privately funded programs developed eligibility criteria to minimize duplication of coverage. The three children’s health insurance programs in western Pennsylvania—Medicaid, the state-funded Children’s Health Insurance Program, and the privately funded Western Pennsylvania Caring Program for Children—in combination provided coverage to children under 6 with family income at or below 235 percent of FPL and to children from 6 to 19 with family income below 185 percent of FPL. The Western Pennsylvania Caring Program for Children changed its eligibility criteria after the Children’s Health Insurance Program was developed to complement its coverage and provide coverage for children that it did not cover. (See fig. 2.) Health insurance costs for individuals were partially dependent on the costs of covered medical services, but other factors influenced costs as well. Some programs covered inpatient care and other expensive services, while others chose to limit or exclude expensive services. Moreover, the premium costs per child were similar in some of the programs that covered inpatient care and other expensive services and in some that limited such services or did not cover them.
In addition to limiting services, state and private programs used other strategies to manage costs, such as sharing costs with patients and using competitive bidding and managed care. One factor that, contrary to program administrators' expectations, did not significantly increase costs was excessive use of health services. Indeed, program children’s use of services was similar to that of privately insured children. The state and private programs’ benefit packages varied from providing only primary and preventive care and emergency and accident services to providing a comprehensive range of benefits, including inpatient services. Costs to provide coverage for children varied from $20 to $70.60 per month, partly because of the kinds of services covered and the limitations on those services. All programs provided a core set of services that program officials cited as most important for most children. These services included primary and preventive services—such as well-child visits, immunization, outpatient surgery, outpatient physician services, and diagnostic testing—and outpatient emergency services. In addition, most programs offered other benefits, such as mental health services, vision and hearing care, and prescription drugs. Three of the state- and privately financed programs also provided some dental services. Officials from several state and private programs noted that they would like to provide more benefits—such as dental care, which some cited as a critical preventive service—but did not want to increase the cost of their program. (See fig. 3.) The Alabama Caring Program for Children, which covered outpatient care only, provided the fewest services and was the least expensive per child—$20 per month. The other programs reported average per-child costs ranging from $46.50 to $70.60 per month, and some provided more benefits than others.
Florida ($46.50) and Minnesota ($53) covered many services, including inpatient and outpatient treatment, prescription drugs, and physical therapy. Minnesota also covered dental care and inpatient and outpatient substance abuse treatment. In contrast, New York’s Child Health Plus Program ($54.71), Pennsylvania’s Children’s Health Insurance Program ($62.60), and the Western Pennsylvania Caring Program ($70.60) were more expensive, yet they provided either limited or no inpatient care. The programs that did not provide inpatient services or provided only limited inpatient services often relied on Medicaid to meet enrolled children’s needs. According to officials from two of these programs, the families of children who needed hospitalization could qualify for Medicaid services through medically needy spenddown provisions because of the cost of the care. Under spenddown, the cost of expensive services, such as hospitalization, is deducted from family income to determine the child’s Medicaid eligibility. Pennsylvania’s Children’s Health Insurance Program was planned to shift the costs of inpatient care to Medicaid when possible. The program covers 3 days of hospitalization, after which families are required to apply for Medicaid. For families whose children cannot qualify for Medicaid through spenddown, the Children’s Health Insurance Program covers up to 90 inpatient days per year. In addition to limiting benefits, most programs added some patient cost-sharing provisions. However, they generally kept premiums and copayments minimal, especially for families in the lowest income ranges. None of the programs required deductibles. Because most children enrolled were from the lowest income brackets, families did not generally have to contribute much for their children’s care. (See table 2.) Cost-sharing provisions varied by program. Family premium payments priced on a sliding scale based on family income as a percent of FPL were required by the four state-funded programs. 
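Sliding-scale premiums of the kind just described can be sketched as a simple bracket lookup keyed to family income as a percent of FPL. The brackets and dollar amounts below are hypothetical illustrations only; none of the four programs' actual schedules are reproduced, and the $70.60 full premium is borrowed from the per-child cost figures cited earlier purely as a placeholder.

```python
# Hypothetical sliding-scale premium schedule: each entry maps an upper bound
# on family income (as a percent of the federal poverty level) to a monthly
# per-child premium. The numbers are illustrative, not any program's real rates.
HYPOTHETICAL_SCHEDULE = [
    (100, 0.00),   # at or below 100% of FPL: premium fully subsidized
    (150, 10.00),
    (200, 40.00),
    (235, 55.00),
]
FULL_PREMIUM = 70.60  # above the top bracket, the family pays the full cost

def monthly_premium(income_pct_fpl: float) -> float:
    """Return the monthly per-child premium for a given family income level."""
    for upper_bound, premium in HYPOTHETICAL_SCHEDULE:
        if income_pct_fpl <= upper_bound:
            return premium
    return FULL_PREMIUM

print(monthly_premium(90))   # 0.0  -- lowest income range pays no premium
print(monthly_premium(200))  # 40.0
print(monthly_premium(300))  # 70.6 -- family pays the full premium
```

A real schedule would also encode the program-specific rules noted in the text, such as waiving the premium entirely for families in the lowest income range or, as in Florida and New York, allowing any family to enroll at the full premium.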
Copayments for some services were required by three of the four state-funded programs and the two Caring programs. However, two state-funded programs did not require families in the lowest income range to pay any portion of the premium. All of the state-funded programs expected families with income that exceeded the lowest income level to pay a portion of their child’s premium. However, the size of the premium varied for families with similar incomes. For example, a family with an income at 200 percent of FPL would pay at least $43 per month in Florida to enroll one child and at least $39.75 in Pennsylvania, but only $2.08 in New York. Some advocates expressed concern that premium contributions were too high for lower-income families in Pennsylvania and Minnesota and that high premium contributions discouraged these families from enrolling their children. Florida’s Healthy Kids executive director also commented that the price of insurance affects enrollment. When the Healthy Kids premium dropped below $50, she reported, the number of enrollees who paid the full premium increased. Two state-funded programs, those in Florida and New York, covered children at any income level as long as families with income over a specified level paid the full premium cost. Although this approach enabled those programs to help any uninsured child, relatively few children were enrolled when families had to pay all of the premium. New York targeted its outreach to lower-income families, which might explain why so few full-premium children enrolled, according to one New York program official. Florida marketed its program to all children attending public schools and still had low enrollment of children paying the full premium. Most programs did not require program participants to contribute a copayment for most services. When programs did require copayments, they were generally $10 or less and applied to those services listed in table 2, such as prescription drugs and vision care. 
None of the programs allowed providers to charge copayments for primary and preventive services, except Alabama’s Caring Program for Children, which asked, but could not require, physicians to waive the copayments they normally charged Blue Cross/Blue Shield patients. Most of the state and privately funded programs we visited were increasing their use of managed care, which is a strategy widely followed by private companies to constrain health care costs. Many of the programs enrolled some of their children in HMOs, and most were trying to increase their use of HMOs. In addition, three of the state-funded programs paid insurers fixed, lump-sum payments to cover needed health services, which placed risk with the insurer rather than the program. Of these three state-funded programs, two used competitive bidding to choose their insurers. (See fig. 4.) Minnesota paid Medicaid-certified providers on a fee-for-service basis, but the program plans to transition to managed care in 1996. The other programs covered children using private provider networks, HMOs, gatekeeper/case managers, or some combination of these. Alabama’s Caring Program for Children was the only program using network providers exclusively, and Florida’s program was the only one using HMOs exclusively. Pennsylvania’s Children’s Health Insurance Program and the Western Pennsylvania Caring Program for Children enrolled children in HMOs whenever available. At least 80 percent of the enrollees in New York’s Child Health Plus Program, Pennsylvania’s Children’s Health Insurance Program, and the Western Pennsylvania Caring Program for Children were in HMOs. These three programs expect to increase their use of HMOs for program children’s care. All of the state programs except Minnesota paid insurers a fixed, per child, per month payment, which shifted risk from the public payers to the insurers. 
The insurers or managed care organizations were then responsible for providing or contracting for all covered health services. Florida and Pennsylvania used a competitive process to select insurers and set rates; New York had a selection process that was not competitive. In Florida, Healthy Kids contracted with one HMO organization selected through competitive bidding for each county or group of counties. In Pennsylvania’s Children’s Health Insurance Program, if more than one insurer bid, the contract was awarded to the lowest qualified bidder, but other qualified bidders could provide services in the same area at the lowest bidder’s price. In New York, all insurers who met specified program qualifications during the selection process were permitted to participate, but the state had approval rights over the premiums they charged. The insurers and developers of most of the programs we visited had expected that children enrolling would be less healthy than children with private insurance and would, therefore, use services more frequently. In addition, since all the programs covered preexisting health conditions, the programs were expected to attract families with ill children who could not get other insurance coverage. Programs like Pennsylvania’s Children’s Health Insurance Program and the Florida Healthy Kids Program negotiated prices for their premiums assuming that the programs would attract children who would be more costly to serve than privately insured children. In addition, Alabama required families to enroll all of their eligible children in its program, which kept families from enrolling only sick children and assisted in health promotion. However, according to managers from all the programs, the children served were not significantly sicker and did not use services more than privately insured children. 
New York’s Child Health Plus Program officials found through a survey that most of the children enrolled in the program did so after they lost private insurance coverage. Alabama and Florida reported a slight increase in the use of services due to initial demand, but that soon stabilized. The lower-than-anticipated use of services led to cost savings for Pennsylvania’s Children’s Health Insurance Program and the Florida Healthy Kids Program: a rebate of $1.3 million for the Pennsylvania program and a 21-percent decrease in premiums for Healthy Kids in Volusia County. All the programs were designed to facilitate implementation and provider and patient participation. Most state- and privately funded programs relied on private insurers or nonprofits for many administrative functions and used their physician networks. The programs used existing billing systems and generally had reimbursement levels that approximated market rates—factors that were attractive to providers. The programs guaranteed access to a provider network, used simple enrollment procedures, and in many ways appeared similar to private insurance, which helped the programs avoid the stigma of welfare. Families surveyed by their programs were satisfied with the programs and with the health care their children received. To some degree, all the programs we visited used administrative systems already in place when designing and implementing their programs. While Minnesota employed state Medicaid structures for administrative functions, the other programs employed nonprofit corporations or private insurers to perform key administrative functions. (For more detail on specific programs, see app. II.) For example, in both the state-funded New York Child Health Plus and the Pennsylvania Children’s Health Insurance programs, the state agencies exercised general program oversight, but most administrative functions were performed by private insurance plans under contract to the state.
In addition to assuming responsibility for paying providers (and assuming risk for the costs involved), the insurers processed applications and determined eligibility. Each enrollee signed up with one of the insurers, which used its existing network of providers or HMOs to serve program patients. The nonprofit Florida Healthy Kids Corporation (FHKC) managed the Healthy Kids program using schools, HMOs, and contractors to provide some administrative services. It contracted with an HMO in each county to provide program services to enrollees in that county and with other entities to provide application processing, eligibility determination, premium billing and collection, technical assistance, and program evaluation. The schools also provided some administrative services, such as distributing enrollment applications and forwarding computerized data for eligibility determination. All the programs we visited used existing billing systems and provider networks, generally through private or nonprofit insurers. Contracting with an existing network of providers facilitated program implementation, since enrollees could be served as soon as a contract was signed. New York’s Child Health Plus Program required contracted insurers to have an existing network of providers in place to enable them to reach program children in every part of the state. Program officials said that this requirement made the program “just another line of business” for the insurers. In addition, physicians did not have to adapt to new, or to significantly change existing, operating processes to serve program children, which increased their willingness to participate in these programs. In Alabama, for example, all Caring Program providers filed claims electronically through Blue Cross/Blue Shield’s existing claims system. According to a program official, most providers will accept lower payment rates and new patients if their routine billing and payment processes are not disrupted. 
One Alabama physician noted that quick reimbursement and lack of “red tape” contributed to her willingness to serve program children. Similarly, two providers cited use of Minnesota’s Medical Assistance program billing structure as contributing to MinnesotaCare’s success, because the physicians did not have to adjust to new operating procedures and hospitals could more easily participate in the program. State and private programs have developed various methods for ensuring provider and insurer participation. Most of the programs chose to reimburse providers at close to market rates to ensure provider participation. In addition, some of the programs required physicians to accept the set rates as a condition of caring for other, more lucrative patients. With the exception of MinnesotaCare, the state and private programs we visited chose to pay providers at rates other than Medicaid’s. Because many state Medicaid programs have paid below market rates for services, these programs have had difficulty maintaining an adequate provider network. Some studies have indicated that Medicaid patients have more difficulty accessing health care than non-Medicaid patients. MinnesotaCare paid Medicaid rates, and some providers complained that MinnesotaCare’s reimbursement was about 50 to 60 percent of their normal billing rates. The other programs paid rates intended to approximate market rates. The Alabama and Western Pennsylvania Caring Programs reimbursed physicians according to the rate schedules used by their respective Blue Cross/Blue Shield organizations, although for Alabama the rates paid for treating program participants were some of Blue Cross/Blue Shield’s lowest. As an additional incentive, physicians in several programs were required to treat program patients if they wished to treat other, sometimes more lucrative patients.
For example, physicians participating in the Blue Cross/Blue Shield provider networks of Alabama and Western Pennsylvania could not refuse to serve Caring Program patients unless they withdrew from the Blue Cross/Blue Shield provider network entirely. Similarly, insurers in New York’s Child Health Plus Program required their participating physicians to treat program patients as a condition of treating the insurers’ private patients. And in Minnesota, physicians could not participate in the more lucrative state and local government employee health benefit program unless they also participated in the state’s health assistance programs, which included MinnesotaCare. All six programs gave children access to a network of providers. In two programs, more than one insurer covered some parts of the state, so families had a choice of networks. Patients in Minnesota’s program had access to providers that participated in Medicaid. Through state mandate, MinnesotaCare is ensured a large network; in 1995, the program had 24,000 primary care providers for 48,000 enrolled children. Patients enrolled in all the programs except Minnesota’s had guaranteed access to at least one and sometimes two established provider networks or HMOs through private insurers. New York’s Child Health Plus Program had 15 insurers that together covered the entire state. A few areas in the state were covered by more than one insurer, and patients were allowed to select between insurers. Three of the four regions covered by Pennsylvania’s Children’s Health Insurance Program were served by at least two insurers, which increased families’ choices. The Alabama Caring Program for Children used the existing Blue Cross/Blue Shield network, which covered most physicians in the state. The Western Pennsylvania Caring Program for Children used either the Blue Cross/Blue Shield HMO or the Blue Cross/Blue Shield network, which included more than 12,000 physicians. 
Florida’s Healthy Kids Program enrollees were limited to providers in a single HMO per county, but the program required that children be no more than a 20-minute car ride from a provider, except in the most rural areas. The enrollment procedures for the state- and privately funded programs were relatively simple. Some programs were flexible about eligibility documentation, proceeding instead on the basis of trust. Simplified enrollment procedures and flexible eligibility documentation requirements minimized enrollment barriers and thus encouraged program participation. All the state and private programs used a simple mail-in enrollment form (often one page long) and did not require face-to-face interviews. In addition, New York’s Child Health Plus Program directed applicants to program insurers who provided telephone assistance in completing the forms. MinnesotaCare, which also used a mail-in application, asked follow-up questions by phone. Florida’s Healthy Kids Program allowed parents to obtain and submit one-page applications through the schools. Some programs were more flexible than others about documenting enrollment information, such as income. For example, the Alabama Caring Program for Children allowed an “honor system,” on the theory that applicants were truthful about their incomes, and New York’s Child Health Plus Program allowed a self-declaration of income if applicants were unable to produce any other verification. Pennsylvania’s two programs required applicants to verify income, and the Florida Healthy Kids Program relied on the National School Lunch Program to verify applicants’ income. Most of the programs did not apply resource tests, which also simplified eligibility determination. Minnesota and Alabama staff reported finding that families generally reported information honestly and accurately when applying for their programs.
Program officials generally agreed that for families to use the program, they must not feel stigmatized, a problem that often exists with welfare recipients. Program staff stressed that it was important to preserve the families’ dignity at all times. To avoid the stigma of welfare, the state and private programs tried to resemble private insurance as much as possible. In addition to generally using private insurers’ networks and simplified administrative processes that did not require face-to-face interviews at welfare offices for eligibility determinations, the programs used other strategies to preserve families’ dignity. Some of these were modest. For example, all programs using private insurers issued enrollees insurance membership cards that were similar to cards issued for the insurers’ commercial programs. Families generally reported being very satisfied in the five programs that assessed patient satisfaction. For example, 97 percent of respondents in a 1993-94 survey of Florida’s Healthy Kids families were either “very satisfied” or “satisfied” with the care provided for their children. More families with children in the Healthy Kids Program were satisfied with their care than families of children in any of the four comparison groups—Medicaid, private insurance, other insurance, or uninsured. A separate study of Healthy Kids families found that higher percentages of program families than of nonprogram families were “very satisfied” with the benefits available to their children, their doctor’s availability, waiting times in the doctor’s office, and the amount they had to pay at the time of an office visit. As another example, a 1989 survey of participants in Minnesota’s Children’s Health Plan, predecessor to MinnesotaCare, found that over 80 percent rated the program either a 9 or a 10 on a scale of 1 to 10, with 10 being excellent. 
All the programs we visited sought to reduce unmet medical needs and to encourage the appropriate use of primary and preventive care services. Several of the programs have begun to evaluate whether their programs are achieving these goals. Although some programs are finding that access and appropriate use of medical services have increased, several have found that use of preventive services is still below desired levels. Program staff have increased their efforts to educate parents about the importance of preventive care for children. (See app. III.) If enacted, legislation to change the Medicaid program to a block grant would give states greater flexibility to redesign their Medicaid programs, but it would also limit federal funding. To accommodate these changes, states would need to make difficult choices when structuring their Medicaid programs. While the programs we visited differed from Medicaid, they exemplified the choices states and private-sector organizations have made when using their own resources to provide health coverage to uninsured children. Most notably, the state- and privately funded programs we visited covered some children who would not otherwise have been covered; complemented existing Medicaid coverage; kept per child costs to a minimum; provided preventive and primary care services—the services children are most likely to need; offered a wide network of providers; required families to share part of the cost; used HMOs frequently to manage children’s health care; and used existing administrative systems of state, nonprofit, and private organizations. Despite these state and private efforts, many children remained uninsured. In addition, eligible children sometimes had to wait to enroll. Further, programs did not always cover services routinely available to children insured through private insurance or Medicaid. 
In the future, the responsibility for ensuring health care coverage for children may fall more directly on the states; their local communities, including private-sector providers and nonprofit organizations; and children’s families. The programs we visited appear to have succeeded in bringing together these groups and individuals to expand children’s access to health care. Their program experience could prove instructive for other states and the Congress. Although this report does not focus on agency activities, we discussed its contents with responsible officials at HCFA, who had no comments. We also discussed its contents with officials in the programs we visited. Program officials’ comments were generally limited to specific technical corrections, which we incorporated in this report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others on request. Please contact me at (202) 512-6806 if you or your staff have any questions. This report was prepared under the direction of Sally Jaggar, Mark Nadel, and Rose Marie Martinez by Sheila Avruch and others. Other staff who contributed to this report are named in appendix IV. Forty-five states and the District of Columbia have expanded Medicaid beyond federal requirements to cover infants or children, have implemented a Medicaid waiver, or have developed privately funded or state programs to insure children. Some states have more than one type of program to add coverage for children. (See fig. I.1.) [Figure I.1, a map of states’ programs covering children, appears here.] Figure note: State-funded programs include those classified by the National Governors’ Association as both public and public/private, except Rhode Island and Tennessee, which have Medicaid 1115 waivers.
The primary source of funding for most of these programs was state financing. The 1115 Medicaid waiver transferred children up to age 21 and pregnant women from MinnesotaCare to Medicaid. The other adult MinnesotaCare participants remained in the state-funded MinnesotaCare program. Medicaid funds the children’s services for Washington’s Basic Health Plan. State- and privately funded programs vary in size: some cover fewer than 100 children, and others cover up to 99,000. (See table I.1.) [Table I.1 appears here; its columns are Enrollment and Budget (in millions), and its rows include Missouri (2 programs), Pennsylvania (western), and Pennsylvania (southeastern).] This appendix provides programmatic and administrative details about the six programs we visited, presented alphabetically by state. Each description includes a background section, which highlights the history of the program, and a section on program structure and operations, which includes information on administration; funding; eligibility; enrollment; covered services and costs; insurer payment and provider networks used; and publicity, outreach, and marketing. The Alabama Caring Program for Children is a statewide, privately funded program that was created in 1988 by Blue Cross/Blue Shield of Alabama. It provides primary care services to enrolled children using Blue Cross/Blue Shield providers. Enrollees are children from low-income, working families who do not have insurance through an employer, yet whose income is not low enough to qualify them for Medicaid. The nonprofit Alabama Child Caring Foundation administers the program, including determining eligibility, enrolling children, publicizing the program, collecting donations, and fundraising. Blue Cross/Blue Shield staff process claims and pay providers. All administrative services, including Foundation staff salaries, are donated by Blue Cross/Blue Shield.
The program does not underwrite insurance; instead it contracts with Blue Cross/Blue Shield for claims and payment services. The program also uses Blue Cross/Blue Shield’s provider network to deliver health care services. The Caring Program is funded entirely through the philanthropic donations of businesses, churches, foundations, civic/service organizations, and individuals. Blue Cross/Blue Shield matches all contributions dollar for dollar. The program’s budget for 1994 was approximately $1.7 million, and the estimated budget for 1995 is approximately $2 million. Children may enroll if they (1) are under age 19, unmarried, and have an annual family income under $9,500; (2) are full-time students (unless they are under school age or have completed grade 12); (3) are Alabama residents; and (4) are ineligible for Medicaid or other insurance. Additionally, families must enroll all of their eligible children in the program. Foundation staff refer children potentially eligible for Medicaid to the Medicaid bureau before they can enroll in the Caring Program. As of July 31, 1995, 5,922 children were enrolled in the program. Since the available funding was not sufficient to provide coverage for all eligible applicants, 1,766 eligible children were on a waiting list. The Foundation generally responds to an application within one day of receipt, but the average waiting time for enrollment is 18 to 24 months, because the program does not have the funding to enroll children as soon as they are determined eligible. While the program is statewide, donors can designate their funding for particular counties, so children in some counties spend less time waiting to enroll. The Alabama Child Caring Foundation determines eligibility and enrolls children using a simple, one-page, 12-question form that can be mailed to the Foundation. There are no income verification requirements.
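The Caring Program’s four enrollment criteria amount to a simple rule check. The sketch below is illustrative only; the function and field names are ours, not the Foundation’s, and the inputs are assumed to come from the one-page application:

```python
# Illustrative sketch of the Alabama Caring Program enrollment criteria
# described above. Names and structure are hypothetical.

def caring_program_eligible(age, married, family_income, full_time_student,
                            under_school_age, completed_grade_12,
                            alabama_resident, has_medicaid_or_other_insurance):
    """Return True if a child meets the four stated enrollment criteria."""
    # (1) Under 19, unmarried, family income under $9,500.
    if age >= 19 or married or family_income >= 9_500:
        return False
    # (2) Full-time student, unless under school age or finished grade 12.
    if not (full_time_student or under_school_age or completed_grade_12):
        return False
    # (3) Alabama resident; (4) ineligible for Medicaid or other insurance.
    return alabama_resident and not has_medicaid_or_other_insurance
```

Note that passing this check did not guarantee immediate coverage: as the report states, eligible children could still wait 18 to 24 months for funding.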
Once a child is admitted, the Foundation staff send the child a benefits handbook, a Blue Cross/Blue Shield identification card, and a list of participating providers. The program covers primary and preventive outpatient services (well-child visits, immunizations, outpatient physician services, outpatient surgery, and diagnostic tests). It also covers emergency and accident care. It does not cover prescription drugs or inpatient, vision, hearing, or dental care (except in one county, which has a specially funded pilot program for dental care). Program officials told us that benefits have not been expanded further because funding is limited and many children are currently waiting to join the program. The program does not have any pre-existing condition exclusions. The average monthly cost per child is $20, which does not include any administrative expenses. Program participants pay no premiums or deductibles. They may be required to pay a $5 copayment for some outpatient services. However, program and Blue Cross/Blue Shield officials have asked providers to waive the copayment in order not to discourage program participants from obtaining care, and most providers have complied. The Foundation pays for covered services provided by physicians in the Blue Cross/Blue Shield of Alabama provider network, to which most physicians in the state belong, according to the program’s executive director. Physicians cannot refuse to treat Alabama Caring Program patients without dropping out of the network. Providers are reimbursed based on the existing Blue Cross/Blue Shield fee-for-service rate schedule, and claims are processed through the Blue Cross/Blue Shield billing system. Claims are paid within 5 days. 
Program officials use a variety of methods for publicizing the program, including public service announcements on TV and radio, free advertising in newspapers, distribution of brochures and flyers, and contacts with providers, advocacy organizations, churches, schools, and corporate donors. Blue Cross/Blue Shield developed radio and TV public service announcements using some of Alabama’s college football coaches. These announcements have been very successful at publicizing the program. Since the program is currently wait-listing applicants because it lacks the resources to enroll them, it has focused the past year’s efforts principally on fund-raising rather than on outreach. The Florida Healthy Kids Program is a school enrollment-based program created through the July 1990 Healthy Kids Corporation Act. Its goal is to provide every child access to quality health care by uniting children with accessible, local, comprehensive health care providers. The program was initially funded in Volusia County as a HCFA demonstration project that also received state, county, and private funds. It is currently available in seven Florida counties, with 13 other counties waiting to join. Uninsured children of any income level attending school in a participating county can join, but only children with family income at or below 185 percent of FPL will have their premiums partially subsidized. The nonprofit Florida Healthy Kids Corporation (FHKC) has overall administrative responsibility for the program. FHKC contracts with others for processing applications, determining eligibility, billing and collecting premiums, providing technical assistance, and evaluating the program. County schools and their boards help inform parents about the program, disseminate enrollment applications, and provide monthly computerized data, which are used for eligibility redeterminations. In addition to the board that oversees FHKC, each county has its own board to direct Healthy Kids activities. 
The program was initially funded by federal and state funding, including a HCFA demonstration grant, which ended in February 1995, and family premium payments. The program is currently funded by state general revenue funds, a county ad valorem tax for children’s services, other county funds, health district tax funds, county school board funds, and premium payments by parents. The program’s total funding for 1994 was approximately $8.8 million, and the amount budgeted for 1995 was $13.1 million. A child may enroll in Healthy Kids if he or she is (1) 5 through 19 years of age or a 3- to 4-year-old sibling of an enrollee, (2) actively attending school, (3) uninsured, and (4) not enrolled in the Medicaid program. In some counties, children must prove they are not eligible for Medicaid before being enrolled in Healthy Kids; in others, not being enrolled in Medicaid is enough. Children must participate in the National School Lunch Program to get their premiums subsidized, since income eligibility for subsidized premiums is determined through the School Lunch Program eligibility process. FHKC redetermines eligibility monthly by having a contractor compare computerized records for the Healthy Kids, School Lunch, and Medicaid programs. The Healthy Kids Program provides school enrollment-based health insurance. The children obtain health coverage in the form of group insurance policies provided through the school districts, rather than through employers. By using school districts, the program can tap into existing communication systems with parents to market the program and enroll children. School officials distribute and collect applications during the open enrollment period at the beginning of the school year. Additional enrollment periods are available to children who transfer to other schools. An FHKC contractor determines eligibility and then FHKC sends a list of eligible children to the responsible HMO. 
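The monthly redetermination described above is essentially a comparison of three computerized record sets. As a minimal sketch, assuming each program’s records can be reduced to a set of child identifiers (an invented simplification, not FHKC’s actual data layout):

```python
# Hypothetical sketch of FHKC's monthly eligibility redetermination:
# a contractor compares Healthy Kids records against School Lunch and
# Medicaid records. The set-of-IDs representation is an assumption.

def redetermine(healthy_kids_ids, school_lunch_ids, medicaid_ids):
    """Split current enrollees into (subsidized, unsubsidized, dropped)."""
    dropped = healthy_kids_ids & medicaid_ids       # now enrolled in Medicaid
    remaining = healthy_kids_ids - dropped
    subsidized = remaining & school_lunch_ids       # income verified via School Lunch
    unsubsidized = remaining - subsidized           # enrolled at full premium
    return subsidized, unsubsidized, dropped
```

The design choice worth noting is that Healthy Kids did no income verification of its own: the School Lunch match is what determines the subsidy.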
The HMO sends the family new member information and a membership card and requests that the family select a primary care provider. If the family does not respond within 90 days, the HMO will send a follow-up letter and call the family to encourage them to use well-child services. As of July 1, 1995, 6,602 children were enrolled in Volusia County, and 15,254 children were enrolled in the Healthy Kids Program statewide. Statewide, 87 percent of enrolled children had family income at or below 185 percent FPL and thus had their premiums partially or fully subsidized; in Volusia County, 93 percent were in that category. The Volusia County program offers a wide range of services including primary and preventive outpatient services (well-child visits, immunizations, outpatient physician services, outpatient surgery, and diagnostic tests), emergency and accident care, hospitalization, and related inpatient physician services. It also includes physical, speech, and occupational therapy (limited to 15 inpatient days per contract year and 24 outpatient treatment sessions within a 60-day period per episode of illness or injury). In addition, it covers prescription drugs, vision care (corrective lenses limited to one pair every 2 years unless prescription or head size changes), hearing care, home health care, ambulance services, durable medical equipment and prosthetic devices, family planning, chiropractic care (limited to six visits in 6 months), and podiatric care (limited to 2 visits per month). Mental health services are included, but limited to 15 inpatient days per contract year and 20 outpatient visits per contract year, with a lifetime maximum expenditure of $20,000. Substance abuse services are provided for pregnant teens only. Healthy Kids covers newborn care, skilled nursing facility services limited to 100 days per contract year, and transplant services. Covered services and limitations may vary by county. For example, dental care is available in some counties. 
There are no preexisting condition exclusions. The average cost per month to provide health services to children in Volusia County was $46.50, which reflects the total premium payment to the HMO. Program officials estimated that, in addition, the program averages administrative costs of about $1.50 per month per child. The amount that parents pay toward premiums differs by county and income category. Since September 1, 1995, all counties have required parents to pay some share of their children’s premium. Before that time, Volusia County did not require poor parents to pay any share of their children’s premiums, while other counties required parents to pay $5 per month for children in the lowest income group (at or below 130 percent of FPL). Starting in September 1995 in Volusia County, families with income at or below 100 percent of FPL paid $15 per child per month; those with income between 101 and 130 percent of FPL paid $20; those with income between 131 and 185 percent of FPL paid $27; and those with income above 185 percent of FPL paid $48. Families pay no deductibles. Some services require a small copayment. Prescription drugs and optometrist refractions both have $3 copayments, mental health outpatient visits have a $5 copayment, and prescription eyeglass lenses and nonauthorized ER visits have $10 copayments. The program pays a capitated monthly fee to HMOs to cover enrolled children’s health care services. To choose HMOs, FHKC sends out requests for proposals and then contracts with one HMO per county. The HMOs are required to provide hospitalization and specialist services as needed and ensure that children live no more than 20 minutes by car from a provider except in the most rural areas. To meet these requirements, the HMO in Volusia County contracted with some private doctors in the western, more rural, part of the county.
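The post-September-1995 Volusia County premium schedule described above can be expressed as a simple income-bracket lookup. The function name is ours; the thresholds and dollar amounts are the ones the report states:

```python
# The Volusia County per-child monthly premium schedule (effective
# September 1995) described above, by family income as a percent of
# the federal poverty level (FPL). Function name is illustrative.

def volusia_monthly_premium(fpl_percent):
    """Monthly premium per child, in dollars."""
    if fpl_percent <= 100:
        return 15
    elif fpl_percent <= 130:
        return 20
    elif fpl_percent <= 185:
        return 27
    else:
        return 48      # families above 185 percent of FPL are unsubsidized
```

Against the $46.50 HMO capitation plus roughly $1.50 in administrative cost, even the top $48 tier is close to self-supporting, while lower tiers are subsidized.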
The HMO contracts with FHKC allow FHKC to pay for services that patients need and then bill the HMO for the services if the HMO is not providing adequate service. FHKC monitors waiting time and patient complaints to measure access. The Florida Healthy Kids Program uses numerous methods to reach its target audience, including paid and public service radio and television ads, brochures and flyers, a video, and presentations by FHKC and HMO staff. In 1994, FHKC spent less than $5,000 on advertising. However, most advertising is donated by HMOs, school districts, county boards, and others. For example, in Volusia County, one large fast-food restaurant used tray liners publicizing the program. According to program officials, the school district and the county board have been creative and effective at developing advertising strategies tailored to their community. The program has specifically targeted teens, and African-American and migrant children, who were not joining the program at expected rates. The program used high school coaches and shop teachers to speak for the program and school dial-up message systems and direct mail to reach teens’ parents. The program has also worked through churches to reach African Americans and through migrant crew chiefs to reach migrant families. MinnesotaCare was established by the state legislature in 1992 to both expand and replace Minnesota’s Children’s Health Plan. The Children’s Health Plan, the first statewide, state-funded program providing health insurance coverage for uninsured children of low-income families, began in July 1988. When MinnesotaCare was implemented in October 1992, it broadened the Children’s Health Plan’s eligibility criteria by allowing parents and siblings living in the same households as qualifying children to enroll, and it subsequently expanded to include adults without children. 
In April 1995, Minnesota received HCFA approval of a Section 1115 Medicaid waiver that integrated a segment of the MinnesotaCare population—children and pregnant women—into the state’s Medicaid program effective July 1, 1995. The waiver allows the state to receive federal Medicaid funding for children and pregnant women in MinnesotaCare, but leaves children and pregnant women subject to MinnesotaCare rules regarding eligibility, enrollment, and cost-sharing. The state hopes to develop uniform eligibility, enrollment, and other criteria for MinnesotaCare, Medicaid, and the state’s General Assistance Medical Care program. The program is administered by the state’s Department of Human Services, the same agency that administers the state’s Medicaid program. However, MinnesotaCare has a separate office within that agency, as well as its own director and 89 dedicated staff. Other Department of Human Services staff perform some duties related to the program as well. The portion of MinnesotaCare that covers children and pregnant women is financed by a combination of federal Medicaid funding, state funds, and enrollee premium contributions. The remainder of MinnesotaCare is financed entirely by state funds and enrollee premiums, which is how the whole program was financed before the waiver. The state finances its share of the program through a 2-percent provider tax. Previous state funding sources have included the state’s general funds and a 1-percent cigarette tax. Program costs (including administrative costs and program expenditures) were $36.6 million in fiscal year 1994, and are budgeted at $93.9 million for fiscal year 1995. 
Children under age 21, as well as their parents and dependent siblings if they reside in the same household, may enroll in MinnesotaCare if (1) their family income does not exceed 275 percent of the FPL, (2) they are permanent Minnesota residents, (3) they have had no other health insurance for the preceding 4 months, and (4) they could not get employer-subsidized insurance for the preceding 18 months. (The last two requirements do not apply to children in families with incomes that do not exceed 150 percent of FPL.) Single adults and families without children may enroll if their household incomes do not exceed 125 percent of FPL and they satisfy the other requirements. If funding is available, the state may increase the upper income limit for single adults and families without children to 135 percent of FPL in October 1995 and to 150 percent of FPL in October 1996. The current eligibility criteria for MinnesotaCare are much broader than the criteria used when the program that preceded it, the Children’s Health Plan, was first implemented in 1988. At that time, the only eligible group was children aged 1 through 8 with family incomes up to 185 percent of FPL. As of July 1995, MinnesotaCare had 88,123 enrollees. Of this total, 50.7 percent were children, 43.2 percent were adults in households with children, and 6.1 percent were adults in households without children. MinnesotaCare has a mail-in enrollment and recertification process. A staff of 44 eligibility representatives reviews the applications and follows up with applicants by telephone or mail on an as-needed basis to verify residency status, income, and availability of other health insurance. Since May 1994, enrollees have received a generic identification card that the state uses for all its state-supported health care programs.
MinnesotaCare provides children with a comprehensive benefit package that includes primary and preventive outpatient care (including well-child visits, immunizations, diagnostic testing, outpatient physician services, and outpatient surgery); emergency and accident care; physical, occupational, and speech therapy; prescription drugs; inpatient hospital and psychiatric care; mental health and chemical dependency services; vision, hearing, and dental care; home health care; durable medical equipment and prosthetic devices; podiatry; chiropractic services; family planning; case management; Christian Science sanitoriums; daycare/school examinations; day treatment; hospice care; intermediate care facilities for the mentally retarded; nurse anesthetists, private duty nursing, nursing facility services, and in-home nursing services; orthodontia; personal care; public health clinic visits; speech, hearing, and language disorders treatment; and medical transportation. Adults receive a similar benefit package, but a few services (such as nonpreventive dental care) are not covered, and some others are subject to service limitations or copayments. The benefit package has expanded considerably since the inception of the Children’s Health Plan in 1988, when only children’s outpatient services were covered. The average monthly cost per child for MinnesotaCare is $53, excluding administrative costs. Enrollees pay a monthly premium that is determined using a sliding scale linked to income level and household size and that ranges from 1.5 to 8.8 percent of gross family income. However, a reduced premium of $48 per year is payable for children in families with incomes that do not exceed 150 percent of FPL. According to state officials, about two-thirds of all children in the program fall into this category. The program has no deductibles or copayments for children’s services, but adults are required to contribute copayments for certain services.
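The premium rules above combine a sliding scale with a flat low-income override. The sketch below is illustrative: the report gives only the endpoints of the sliding scale (1.5 to 8.8 percent of gross family income), not the actual bracket table, so the scale rate here is an input rather than something the code derives:

```python
# Illustrative sketch of the MinnesotaCare child premium rules stated
# above. The household's sliding-scale rate is assumed to be supplied;
# the report does not publish the program's bracket table.

def child_annual_premium(gross_income, fpl_percent, scale_rate):
    """Annual premium for a child, in dollars."""
    if fpl_percent <= 150:
        return 48          # flat reduced premium for low-income families
    # The stated scale runs from 1.5 to 8.8 percent of gross income.
    assert 0.015 <= scale_rate <= 0.088, "rate outside the stated scale"
    return gross_income * scale_rate
```

Since about two-thirds of enrolled children fell into the 150-percent-of-FPL category, most families paid the flat $48 per year.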
MinnesotaCare gives enrollees access to the same providers who participate in the state’s Medicaid program. It pays providers on a fee-for-service basis using the Medicaid fee structure. Minnesota requires providers to either take part in the state’s health assistance programs or forgo participating in the more lucrative state and local government employee health benefit programs. Under the terms of the Medicaid waiver, most of the children in Medicaid, including those transferred into Medicaid from MinnesotaCare, will eventually be enrolled in HMOs. Program participants who live in areas where there is a scarcity of HMOs will continue to be served by fee-for-service providers. The state plans to award the first HMO contracts in 1996. Program officials use a variety of methods to publicize the program, including paid radio advertisements, radio and television public service announcements, listings in community-based service agency publications, and brochures and flyers. Officials also rely on contacts with hospitals, doctors’ offices, advocacy organizations, public clinics, and schools, among others. Child Health Plus is a statewide, state-funded program created by the New York State legislature in 1990 to insure children. The program’s goals are to provide low-income children with comprehensive outpatient health care services, increase children’s access to primary and preventive health care, and improve participating children’s health status. The program is open to children at all income levels, but only children in families with gross income below 222 percent of FPL receive a subsidy. The New York State Department of Health has overall responsibility for administering the program, while the State Insurance Department approves participating insurance companies’ premiums and reviews subscriber contracts. The State Insurance Department works with the Department of Health to define “equivalent health insurance” to determine which children can join the program. 
The program uses private insurers to perform many administrative functions, including processing applications, determining eligibility, collecting premiums, paying providers, engaging in marketing and outreach, and monitoring quality assurance. The program also contracts out marketing and outreach activities to two nonprofit organizations. The program is funded by enrollee premiums and New York’s Bad Debt and Charity Care pool, which is raised by an assessment on hospitals. The amount appropriated to Child Health Plus limits the number of children who may be enrolled. Child Health Plus received $55 million from the Bad Debt and Charity Care pool during 1994 and has been budgeted $76.5 million for 1995. Children may enroll in Child Health Plus if they (1) are under age 15 and born on or after June 1, 1980; (2) do not have “equivalent insurance”; (3) are New York State residents (even if they are not legally in the United States); and (4) are not enrolled in Medicaid. The maximum eligible age has been increased since the start of the program from under 13 to under 15. Child Health Plus is targeted primarily at low-income children. According to a survey of program applicants completed in 1993, most children joined after losing private insurance coverage or Medicaid. Their average family income was $16,000. As of July 1995, the program had 104,248 enrollees, of whom 99.6 percent were subsidized (under 222 percent of FPL). Insurers process enrollment using a simple one-page application, which can be submitted by mail, without a face-to-face interview. The program is flexible about the documents needed to prove eligibility. For example, income can be proved by means of employer attestations or, as a last resort, a self-declaration form. Under the “presumptive eligibility” procedure, families lacking needed documentation but whose children appear eligible can have their children covered for up to 60 days while they complete the application process. 
Unlike Medicaid, which counts net income and also uses a resource test, the Child Health Plus Program counts gross income and omits a resource test, which expedites eligibility determination. The program covers primary and preventive care (including well-child care, in accordance with American Academy of Pediatrics guidelines, immunizations, outpatient treatment of illness and injury, diagnostic tests, and outpatient surgery); emergency care; prescription drugs; outpatient treatment for alcoholism and substance abuse; short-term physical and occupational therapy; radiation therapy; chemotherapy; and dialysis. It does not cover inpatient care, including inpatient mental health care; dental care (except when necessary to treat a medical condition); or speech therapy. In 1994, the average monthly per patient cost was $54.71. The monthly per child premiums paid to insurers ranged from $36 to $66.50, reflecting geographical and other differences among the insurers. In addition, the Department of Health incurred $0.80 per patient per month in administrative costs. Most children’s families pay little for coverage and services. Families with gross incomes below 160 percent of FPL pay no premium (almost 87 percent of enrolled children). Families with incomes between 160 and 222 percent of FPL pay $25 per child per year up to a maximum of $100 for the entire family (13 percent of enrolled children) toward the premium cost. Families with incomes above 222 percent of FPL pay the entire premium (0.4 percent of enrolled children). Families pay no deductibles. Families may have copayments of $35 for inappropriate ER use (or may have these claims denied) and may have copayments of up to $3 for each pharmacy prescription, depending on the insurer. The program pays a capitated monthly fee per child to insurers to cover enrolled children’s health care services. The participating 15 nonprofit insurers joined by submitting bids in response to a 1990 request for proposal. 
To join the program, insurers had to have an existing network of providers in place, with a sufficient number of board-certified physicians. Child Health Plus Program enrollees are given access to the same physicians as plan members with private insurance. The 15 insurers together cover the entire state. Children must enroll with an insurer responsible for the area in which they live. Certain areas fall within more than one insurer’s service area, so enrollees residing in those areas have a choice of insurer. Of the 15 insurers, 12 are managed care plans and 3 are indemnity plans. As of December 1993, 80 percent of the enrollees were enrolled in managed care plans. The state contracted with two nonprofit organizations to provide marketing and outreach services to the program; insurers also provide such services. Both the contractors and insurers work through community-based organizations that serve low-income populations, such as churches, clinics, and schools. Both make presentations and distribute brochures and posters. One of the contractors also arranges “enrollment events” and operates a hotline for New York City. The other contractor operates a statewide hotline for the program. In addition, the Department of Health supports outreach. For example, the staff worked with the State Education Department to send an informational letter about the Child Health Plus Program to every school district superintendent. The Department of Health also provides a toll-free referral hotline that is used to publicize the program and refer callers to participating insurers. Pennsylvania’s Children’s Health Insurance Program is a statewide, state-funded program established by the Children’s Health Care Act of 1992 to provide free or subsidized health care coverage to uninsured, non-Medicaid-eligible children. It is modeled on Western Pennsylvania’s Caring Program for Children. The Children’s Health Insurance Management Team has overall responsibility for the program. 
The team prepares budgets, executes contracts with insurers, approves rates, and coordinates enrollment outreach activities. It contracts with insurers or their designates to handle many other administrative functions, including processing applications, determining eligibility, collecting premiums, paying providers, and engaging in outreach. The state funds the Children’s Health Insurance Program through a 2-cent-per-pack cigarette tax and parental premium contributions. Some insurers pay the parents’ portion of the premium. The state expended approximately $9.4 million on the program during the fiscal year July 1993 through June 1994, and approximately $28 million is budgeted for fiscal year 1995. Children may enroll in the Children’s Health Insurance Program if they are (1) under age 6 with family income at or below 235 percent of FPL, or age 6 through 15 with family income at or below 185 percent of FPL; (2) Pennsylvania residents for at least 30 days (except for newborns); and (3) not eligible for Medicaid or other insurance. Children who might be eligible for Medicaid must apply to Medicaid before they can be enrolled in the Children’s Health Insurance Program. Children’s Health Insurance Program participants are annually reassessed for eligibility. If during annual eligibility reassessment an enrolled child appears Medicaid-eligible, the Children’s Health Insurance Program will continue to cover the child for up to 60 days while the Medicaid bureau determines the child’s eligibility. As of July 1, 1995, 49,634 children were enrolled in the Children’s Health Insurance Program. Ninety-seven percent of these children have income at or below 185 percent of FPL and have their premium fully subsidized. In May 1995, approximately 1,504 children were on waiting lists across the state. Children on the waiting lists may participate in the program by paying an at-cost premium. 
Insurers determine eligibility and process enrollment, which can be completed entirely by mail. Enrollment procedures vary somewhat among insurers. Families who pay part of the premium must remit the first payment before enrollment. The Children’s Health Insurance Program covers primary and preventive health care (including well-child visits, immunizations, diagnostic testing, outpatient physician services, and outpatient surgery); emergency and accident care; physical, speech, and occupational therapy; vision care (limited to one pair of corrective lenses every 6 months and one pair of frames every 12 months); and hearing, dental, home health, and prescription drug services. An enrolled child who cannot qualify for benefits under the Medical Assistance spenddown provisions is eligible for a maximum of 90 days of inpatient services for each calendar year, which includes inpatient mental health services. Also covered are transplant services, ultrasound and nuclear medicine, and allergy testing. The original benefit package was established legislatively, but some benefits have been added, such as inpatient and outpatient mental health services in 1994. The program does not exclude coverage for any preexisting condition. The average monthly cost per child is approximately $63, which includes both premiums and some administrative costs. The program limits insurers’ administrative cost reimbursement to no more than 7.5 percent of submitted invoices. Premium rates vary by insurer and region—from $57.77 to $64.25 per child per month for fully subsidized children, and from $67.30 to $83.52 per child per month for partially subsidized children. Most children’s families pay nothing for coverage, and the remainder are partially subsidized. Families with income at or below 185 percent of FPL (97 percent of enrolled children in July 1995) do not pay any share of their children’s premiums. 
The state pays half the premium for children with family income between 185 percent and 235 percent of FPL. Some insurers subsidize the remaining half; otherwise parents must pay that share of the premium. The program requires no deductibles, and the only copayment ($5) is for prescription drugs. The Children’s Health Insurance Program pays a capitated monthly fee per child to insurers to cover enrolled children’s health care services. Insurers were selected through a competitive bid process, but the nonprofit insurers were legislatively required to bid. Insurers who did not have the lowest winning bid for a premium rate could participate if they were willing to match that rate. Currently, four nonprofit insurers and one for-profit insurer give families a choice of insurers in three of the four regions. About 80 percent of program children are enrolled in HMOs, and the rest are in preferred provider networks. Children are automatically enrolled in HMOs, where available. If a county changes to HMO service, children may remain in the preferred provider network plan until their recertification, but then they are automatically transferred to the HMO. Each of the insurers is responsible for program publicity and outreach. The insurers use a variety of publicity and outreach approaches, including paid radio, TV, and newspaper ads; distribution of brochures and flyers; and contacts with hospitals, doctors, advocacy organizations, churches, and schools. Insurers must develop an outreach plan and are required to contribute at least 2.5 percent of the total amount they bill the program as an in-kind outreach contribution. The Department of Health and the Insurance Department also conduct some outreach. The Western Pennsylvania Caring Program for Children was the nation’s first private-sector initiative to provide primary care health coverage to uninsured, low-income children who could not qualify for Medicaid. 
In 1984, a group of Presbyterian ministers from a local Pittsburgh church became concerned that many children were losing employment-based health care coverage as the local steel mills closed. The group approached Blue Cross of Western Pennsylvania and Pennsylvania Blue Shield, which agreed to help provide health coverage for the children. Together they developed the Caring Program, which enrolled its first child in June 1985. The Caring Program has changed its eligibility standards and its benefits since then to complement changes in Medicaid eligibility and the introduction of the Pennsylvania Children’s Health Insurance Program, a state-financed children’s health insurance program that was partially modeled on the Caring Program and introduced in 1993. The Western Pennsylvania Caring Foundation, Inc., a nonprofit organization set up by Blue Cross/Blue Shield, administers the Caring Program. The Foundation conducts enrollment and eligibility determination, care coordination for children with special health care needs, and outreach. Blue Cross/Blue Shield provides claims processing, retrieval, and legal services to the program. The Caring Program is financed by tax-deductible donations made by local foundations, religious organizations, civic groups, labor unions, corporations, schools, and individuals. Community contributions provide significant financial support for the program—for example, for fiscal year 1994, Pittsburgh area communities donated $870,000. However, the Caring Program’s major donor is Blue Cross/Blue Shield, which donates $2 for every $1 contributed by other donors and also covers all administrative costs, including Foundation staff salaries. In fiscal year 1994, Blue Cross/Blue Shield contributed about $4 million. When the program began, it enrolled uninsured children not eligible for Medicaid from birth to 19 years of age with total family income no greater than 100 percent of FPL. 
But as more public coverage options became available for some of the younger children through Medicaid and the Children’s Health Insurance Program, the Caring Program changed its eligibility rules to provide services to older children and to complement rather than compete with the Children’s Health Insurance Program and Medicaid coverage. Children may enroll in the Caring Program if they (1) are age 16 to 19 with total family income no greater than 185 percent of FPL, (2) are attending school, (3) have resided in Pennsylvania for the past 30 days, and (4) are uninsured and ineligible for Medicaid. Applicants who appear eligible for but are not receiving Medicaid must apply for Medicaid and be denied before being enrolled in the Caring Program. On average, the Caring Foundation refers about 300 to 400 applicants each month to Medicaid. The Foundation recertifies eligibility annually on the family’s enrollment date. At that time, if a child appears to have become Medicaid-eligible, the Caring Program provides temporary coverage while the child’s Medicaid eligibility is determined. As of July 1, 1995, 5,532 children were enrolled in the program. To enroll, families fill out a simple, one-page application, which is processed by the Foundation. If approved, the family is mailed an acceptance letter and a provider directory. Children covered by a fee-for-service system get active coverage as of the beginning of the month, but those covered by an HMO must choose a provider to activate their coverage. All participants receive an enrollment card that is practically identical to that used by any other Blue Cross/Blue Shield plan member. Enrollees must enroll in HMOs if Blue Cross/Blue Shield is operating an HMO in their county. The Caring Program’s initial benefit package was very limited, including only doctor office visits, immunizations, diagnostic testing, emergency care, and outpatient surgery. 
The program developers would have preferred to provide a more comprehensive package at that time, but since the program’s funding depended entirely on charitable donations, limiting benefits allowed the program to serve more children. When the Children’s Health Insurance Program began, the Caring Program wanted to provide the same set of covered services. In 1993 and again in 1994, the Caring Program expanded its services, adding dental, hearing, and vision care; prescription drugs; limited hospitalization; and mental health services. As under the Children’s Health Insurance Program, families of children who are hospitalized must apply for Medicaid coverage after 3 days. For families who do not qualify for Medicaid, the Caring Program will pay for up to 90 days per year. Currently, the program covers primary and preventive health care (including well-child visits, immunizations, diagnostic testing, outpatient physician services, and outpatient surgery); emergency and accident care; physical, speech, and occupational therapy; vision care (limited to one pair of corrective lenses every 6 months and one pair of frames every 12 months); hearing, dental, and home health care; and prescription drug services. An enrolled child who cannot qualify for benefits under the Medical Assistance spenddown provisions is eligible for a maximum of 90 days of inpatient services for each calendar year, which includes inpatient mental health services. Also covered are transplant services, ultrasound and nuclear medicine, and allergy testing. The program does not exclude coverage for any preexisting medical conditions. The average cost for services is now $70.60 per child per month. This does not include any administrative expenses, which are donated by Blue Cross/Blue Shield. In 1985, the more limited benefit package cost about $13 per child. The program requires little cost-sharing from families: Families do not have to pay any share of their children’s premiums. 
The program requires no deductibles, but does have a $5 copayment for prescription drugs. The Foundation pays providers through Blue Cross/Blue Shield. Children are enrolled in HMOs in 16 counties and in indemnity plans in 13 counties. HMOs are paid on a capitated basis, while network and other physicians are paid on a fee-for-service basis. Children may go to doctors outside the Blue Cross/Blue Shield network, but if they do the families are responsible for any charges beyond the rate Blue Cross/Blue Shield would normally pay for services. Few children use doctors who are not Blue Cross/Blue Shield providers. The Caring Program publicizes itself in various ways, from Mister Rogers television spots to bus billboards to grassroots efforts in every county. In addition, its fundraising efforts help make it known to churches and other community groups who can help outreach to families. Three outreach specialists work in 29 counties to locate sponsors for the children, make presentations, distribute applications, and help families enroll. In 1993, more than 100 schools and several major corporations helped the program raise funds and publicize its services. Currently, several chain stores have distributed flyers and hung posters to inform shoppers about the Caring Program. In addition, members of the Pittsburgh Steelers football team have made speeches to community groups, donated prizes to fundraisers, and hosted kick-off luncheons and victory parties as incentives for schools raising funds for the program. The Western Pennsylvania Caring Foundation spent about $370,000 in fiscal year 1994 for outreach and publicity for both the Caring Program and the Children’s Health Insurance Program. Limited available evaluations show that the six programs we visited have improved children’s access to and use of health care. 
The programs increased the likelihood that children would get the care that they needed, reduced inappropriate ER use in some cases, and increased children’s use of preventive services. Some evaluations suggest children enrolled in the programs may still not be getting as many preventive services as recommended by health authorities. Three programs’ evaluations have found evidence that they have reduced their enrollees’ level of unmet need for medical treatment. A 1991 survey of participants in the Western Pennsylvania Caring Program for Children found that, before enrolling their children in the program, 33 percent of parents postponed taking them to a physician when they thought it was necessary, but after enrollment only 2 percent did so. HCFA’s evaluation of the Florida Healthy Kids Program found that more enrolled children had their medical needs met than did uninsured children in those states. A separate study of the Florida program found that only 1 percent of Healthy Kids respondents, compared with 17 percent of non-Healthy Kids respondents, failed to seek medical care for their children because the cost of a doctor’s visit deterred them. Low-income families sometimes use hospital ERs when treatment by a primary care provider would be more appropriate and less costly. The programs’ effect on ER use was mixed: ER use by program children declined in one program, but not in two others. In fact, the two programs that used copayments to discourage inappropriate ER use had different results. Evaluation of the Healthy Kids Program found that participating children were significantly less likely to use the ER than a comparison group of nonparticipants. In addition, a hospital used by program children in Volusia County studied its ER usage and found that uninsured pediatric ER visits declined by about 15 percent during the 2-year period after the program began, without an increase in ER visits by children enrolled in the HMO used by the program. 
Florida uses a copayment to discourage inappropriate ER use. However, ER use by program families in the Western Pennsylvania Caring Program and New York’s Child Health Plus Program did not decrease. According to a 1991 University of Pittsburgh survey, participants in the Western Pennsylvania Caring Program used the ER slightly more often following enrollment in the program. Preliminary results of the University of Rochester’s survey of participants in a limited geographic area in New York’s Child Health Plus Program showed no significant changes in ER use following enrollment in the program, even though Child Health Plus authorizes a copayment for inappropriate ER use. Statewide data are not yet available to fully evaluate the program’s impact on ER use. A number of programs have been successful at encouraging use of primary and preventive care services. However, evidence from three programs suggests that preventive services may still be underused by program participants. Four programs found that enrolled children were more likely than uninsured children to get preventive and primary care. For example, a 1991 survey of participants in the Western Pennsylvania Caring Program found that the likelihood of a child’s having had at least one well-child visit during the year and being up-to-date with immunizations increased after the child enrolled in the program. Evaluations of the Florida program found that enrolled children were more likely to have had a doctor’s visit or a preventive checkup in the previous 3 months than a comparison group of uninsured children. An Alabama Caring Program evaluation found that 81 percent of the enrolled children had developed an ongoing relationship with a pediatrician or family doctor, whereas before enrolling in the program only 17 percent of these children had ever visited a private doctor. 
Despite these increases in children’s use of primary and preventive care, some children may still not be using preventive services at recommended levels. Several programs evaluated their enrolled children’s care use and found that many children were not using their insurance to get an initial checkup or to get immunized. For example, the Institute for Child Health Policy in Florida analyzed Healthy Kids children’s use of health care services. They found that 32 percent of program children studied had never had a doctor’s examination, and that the poorest enrolled children and African-American and Hispanic enrollees were more likely to have never used program services—results similar to those found from evaluating another health program serving a similar population. The Institute for Child Health Policy concluded that various sociopolitical and cultural factors may discourage African-American and Hispanic families from getting preventive services for their children. New York and Minnesota also found that children were not using preventive services sufficiently. An early quality assurance study of New York’s Child Health Plus Program found that enrolled children averaged 2.5 immunizations by their first birthday, even though American Academy of Pediatrics guidelines call for a minimum of 8 immunizations. The University of Minnesota found in a study of Minnesota’s Children’s Health Program (the precursor to MinnesotaCare) that more than 30 percent of enrolled children did not receive well-child care in 1990. According to several program officials or analysts, many families thought they were supposed to use their children’s coverage only when their children were sick. Program officials or insurers in Florida, Minnesota, and New York attempted to increase the use of preventive care through sending newsletters and other written materials to families. 
In addition to those named above, the following individuals made important contributions to this report: Cassandra Gudaitis and Marie Cushing led the team in Los Angeles and, with Jay Goldberg, drafted major sections of this report and helped conduct case studies in Alabama, Florida, Maine, New York, and Pennsylvania; Tim Fairbanks, Shawnalynn Smith, and Howard Cott helped conduct case studies in Minnesota and Maine; Richard Jensen and Michael Gutowski advised the team, with assistance from Deborah Perry of the National Governors’ Association; Karen Sloan helped write and revise the draft report; Susan Lawes assisted in developing the case study design and protocols; and Paula Bonin analyzed the March 1994 Current Population Survey for information on uninsured children in states.
Medicaid and Children’s Insurance (GAO/HEHS-96-50R, Oct. 20, 1995).
Health Insurance for Children: Many Remain Uninsured Despite Medicaid Expansion (GAO/HEHS-95-175, July 19, 1995).
Medicaid: Spending Pressures Drive States Toward Program Reinvention (GAO/HEHS-95-122, Apr. 4, 1995).
Medicaid: Restructuring Approaches Leave Many Questions (GAO/HEHS-95-103, Apr. 4, 1995).
Medicaid: Experience With State Waivers to Promote Cost Control and Access to Care (GAO/HEHS-95-115, Mar. 23, 1995).
Uninsured and Children on Medicaid (GAO/HEHS-95-83R, Feb. 14, 1995).
Block Grants: Characteristics, Experience, and Lessons Learned (GAO/HEHS-95-74, Feb. 9, 1995).
Health Care Reform: Potential Difficulties in Determining Eligibility for Low-Income People (GAO/HEHS-94-176, July 11, 1994).
Medicaid Prenatal Care: States Improve Access and Enhance Services, but Face New Challenges (GAO/HEHS-94-152BR, May 10, 1994).
Managed Health Care: Effect on Employers’ Costs Difficult to Measure (GAO/HRD-94-3, Oct. 19, 1993).
Employer-Based Health Insurance: High Costs, Wide Variation Threaten System (GAO/HRD-92-125, Sept. 22, 1992).
Access to Health Insurance: State Efforts to Assist Small Businesses (GAO/HRD-92-90, May 14, 1992).
Mother-Only Families: Low Earnings Will Keep Many Children in Poverty (GAO/HRD-91-62, Apr. 2, 1991).
Health Insurance Coverage: A Profile of the Uninsured in Selected States (GAO/HRD-91-31FS, Feb. 8, 1991).
Health Insurance: An Overview of the Working Uninsured (GAO/HRD-89-45, Feb. 24, 1989).
GAO found that: (1) by 1995, 14 states and at least 24 private-sector entities had programs to increase health care access for uninsured children; (2) the number of children enrolled in the state programs reviewed ranged from 5,000 to over 100,000 children and state budgets ranged from $1.7 million to $55 million; (3) private-sector programs enrolled up to 6,000 children and had budgets of $100,000 to $4.3 million; (4) state program funding sources included state general revenues, donations, and small insurance premiums and copayments; (5) budget limitations have reduced the number of Medicaid-eligible children served and have forced these programs to cap enrollment and place eligible children on waiting lists; (6) the programs' per-child costs ranged from $20 to $70.60 per month; (7) state programs have attempted to reduce costs by limiting eligibility and covered services, relying on Medicaid to provide inpatient care, and using patient cost-sharing, managed care, and competitive bidding among insurers; (8) state efforts to attract providers included using insurers' existing payment systems and physician networks and paying near-market reimbursement rates, while their efforts to attract families included guaranteeing patient access to providers, having simple enrollment procedures, and avoiding the appearance of a welfare program; and (9) surveys showed that families were generally satisfied with state insurance programs, since the programs increased children's access to appropriate health care services.
The Coast Guard is an Armed Service of the United States and the only military organization within the Department of Homeland Security (DHS). It is the principal federal agency responsible for maritime safety, security, and environmental stewardship through multi-mission resources, authorities, and capabilities. To accomplish its responsibilities, the Coast Guard is organized into two major commands that are responsible for overall mission execution—one in the Pacific area and the other in the Atlantic area. These commands are divided into 9 districts, which in turn are organized into 35 sectors that unify command and control of field units and resources, such as multimission stations and patrol boats. In its fiscal year 2009 posture statement, the Coast Guard reported having nearly 49,100 full-time positions—about 42,000 military and 7,100 civilians. In addition, the agency reported that it has about 8,100 reservists who support the national military strategy or provide additional operational support and surge capacity during times of emergency, such as natural disasters. Finally, the Coast Guard reported that it utilizes the services of about 29,000 volunteer auxiliary personnel who conduct a wide array of activities, ranging from search and rescue to boating safety education. The Coast Guard has responsibilities that fall under two broad missions—homeland security and non-homeland security. The Coast Guard responsibilities are further divided into 11 programs, as shown in table 1. For each of these 11 mission-programs, the Coast Guard has developed performance measures to communicate agency performance and provide information for the budgeting process to Congress, other policymakers, and taxpayers. The Coast Guard’s performance measures are published in various documents, including the Coast Guard’s Posture Statement, which includes the fiscal year 2009 Budget-in-Brief. 
The Coast Guard's 2009 Budget-in-Brief presents performance information for assessing the agency's effectiveness, as well as a summary of the agency's most recent budget request. The performance information provides performance measures for each of the Coast Guard's mission-programs, as well as descriptions of the measures and explanations of performance results. To carry out these missions, the Coast Guard has a program underway—called the Deepwater program—to acquire a number of assets such as vessels, aircraft, and command, control, communications, computers, intelligence, surveillance, and reconnaissance systems. Appendix I provides additional details on specific vessels and aircraft. The Coast Guard began the Deepwater program in the mid-1990s, and it is the largest acquisition program in the agency's history. Rather than using a traditional acquisition approach of replacing individual classes of legacy vessels and aircraft through a series of individual acquisitions, the Coast Guard chose a system-of-systems strategy that would replace the legacy assets with a single, integrated package. To carry out this acquisition, the Coast Guard decided to use a systems integrator—a private sector contractor responsible for designing, constructing, deploying, supporting, and integrating the various assets to meet projected Deepwater operational requirements at the lowest possible costs, either directly or through subcontractors. In June 2002, the Coast Guard awarded the Deepwater systems integrator contract to Integrated Coast Guard Systems (ICGS)—a business entity led and jointly owned by Lockheed Martin and Northrop Grumman Ship Systems. For 10 years, we have reviewed the Deepwater program and have informed Congress, the Departments of Transportation and Homeland Security, and the Coast Guard of the risks and uncertainties inherent in such a large acquisition. 
The Coast Guard's fiscal year 2009 budget is about 6.9 percent higher than its 2008 enacted levels. Major increases in this year's budget are attributable to operating expenses, including funding for additional marine inspectors and new command and control capabilities, and to acquisition, construction, and improvements (AC&I) for continued enhancement and replacement of aging vessels, aircraft, and infrastructure. The Coast Guard expects to meet 6 of 11 performance targets for fiscal year 2007, the same level of performance as fiscal year 2006. The Coast Guard's budget request in fiscal year 2009 is $9.35 billion, or 6.9 percent more than the enacted fiscal year 2008 budget (see fig. 1). About $6.2 billion, or approximately 66 percent, is for operating expenses. This operating expense funding supports the 11 statutorily identified mission-programs and increases in salaries, infrastructure, and maintenance costs. It also includes increased funding for additional marine inspectors, for new and existing command and control and intelligence capabilities, and for rulemaking projects. The greatest change from the previous year is in the AC&I request, which at $1.2 billion reflects about a 35 percent increase from fiscal year 2008. This increase includes funding for such things as Deepwater program enhancements to the Coast Guard's operational fleet of vessels and aircraft, continued development of new assets, and emergency maintenance. The remaining part of the overall budget request consists primarily of retiree pay and health care fund contributions. If the Coast Guard's total budget request is granted, overall funding will have increased by over 37 percent (or 17 percent after inflation) since fiscal year 2003. Looking back further, overall funding will have increased by approximately 143 percent (or 87 percent after inflation) since fiscal year 1997. 
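The nominal versus inflation-adjusted growth percentages cited above follow from simple arithmetic. The sketch below is illustrative only: the fiscal year 2003 base of roughly $6.82 billion and the cumulative inflation factor of about 1.171 are assumptions back-derived from the percentages in this statement, not figures published in it.

```python
def budget_growth(base, current, deflator):
    """Return (nominal, real) percent growth; deflator is cumulative inflation
    between the base year and the current year."""
    nominal = (current / base - 1) * 100
    real = (current / deflator / base - 1) * 100
    return nominal, real

# Assumed, back-derived figures (in $ billions): FY2003 base ~6.82,
# FY2009 request 9.35, ~17.1 percent cumulative inflation over the period.
nominal, real = budget_growth(6.82, 9.35, 1.171)
print(f"nominal: {nominal:.0f}%, after inflation: {real:.0f}%")
# nominal: 37%, after inflation: 17%
```

The same arithmetic, with a longer horizon and a larger deflator, yields the 143 percent nominal and 87 percent real growth cited since fiscal year 1997.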
Overall, the Coast Guard’s budget request for homeland security missions represents approximately 40 percent of the overall budget, with the non- homeland security funding representing approximately 60 percent. However, the Coast Guard does not request funding by mission; it does so by appropriation account. Nonetheless, the Coast Guard provides a comparison of homeland security versus non-homeland security funding as part of the President’s fiscal year budget request. According to the Coast Guard, an activity-based cost model is used to estimate homeland security versus non-homeland security funding for its missions. This is done by averaging past expenditures to forecast future spending, and these amounts are revised from the estimates reported previously. Although the Coast Guard reports summary financial data by homeland security and non-homeland security missions to the Office of Management and Budget, as a multi-mission agency, the Coast Guard can be conducting multiple mission activities simultaneously. For example, a multi-mission asset conducting a security escort is also monitoring safety within the harbor and could be diverted to conduct a search and rescue case. As a result, it is difficult to accurately detail the level of resources dedicated to each mission. Figure 2 shows the estimated funding levels for fiscal year 2009 by each mission program. However, actual expenditures are expected to vary from these estimates, according to the Coast Guard. The Coast Guard expects to meet 6 of 11 performance targets in fiscal year 2007, the same overall level of performance as 2006, and overall performance trends for most mission-programs remain steady. 
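The 6-of-11 tally can be expressed as a simple count over per-program results. The dictionary below encodes the fiscal year 2007 outcomes as described in this statement; Illegal Drug Interdiction is marked met on the expectation that its target will be met when results become available in August 2008.

```python
# Fiscal year 2007 target results as described in this statement
# (True = met, or expected to be met when data become available).
targets_met = {
    "Ports, Waterways, and Coastal Security": True,
    "Undocumented Migrant Interdiction": True,
    "Marine Environmental Protection": True,
    "Other Law Enforcement": True,
    "Ice Operations": True,
    "Illegal Drug Interdiction": True,  # expected met; results due August 2008
    "Search and Rescue": False,
    "Living Marine Resources": False,
    "Aids to Navigation": False,
    "Marine Safety": False,
    "Defense Readiness": False,
}

met = sum(targets_met.values())
print(f"{met} of {len(targets_met)} performance targets met")
# 6 of 11 performance targets met
```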
In fiscal year 2007, as in fiscal year 2006, the Coast Guard met 5 targets—Ports, Waterways, and Coastal Security; Undocumented Migrant Interdiction; Marine Environmental Protection; Other Law Enforcement; and Ice Operations—and agency officials reported that the Coast Guard expects to meet the target for one additional program, Illegal Drug Interdiction, when results become available in August 2008. This potentially brings the number of met targets to 6 out of 11. In addition, the Coast Guard narrowly missed performance targets for three of its non-homeland security mission-programs (Search and Rescue, Living Marine Resources, and Aids to Navigation) and more widely missed performance targets for two other mission-programs (Marine Safety and Defense Readiness). Performance in 6 of 11 Coast Guard mission-programs improved in the last year, although improvements in the Marine Safety and Search and Rescue mission-programs were insufficient to meet 2007 performance targets. Conversely, while performance decreased for the Ports, Waterways, and Coastal Security program, the performance target was still met. Meanwhile, three mission-programs that did not meet 2007 performance targets (Defense Readiness, Living Marine Resources, and Aids to Navigation) demonstrated lower performance in 2007 than in 2006. (See app. II for more information on Coast Guard performance results.) In 2006, we completed an examination of the Coast Guard's non-homeland security performance measures to assess their quality. We reported that while the Coast Guard's non-homeland security measures are generally sound and the data used to collect them are generally reliable, the Coast Guard had challenges associated with using performance measures to link resources to results. 
Such challenges included comprehensiveness (that is, using a single measure per mission-program may not convey complete information about overall performance) and external factors outside of the agency's control (such as weather conditions, which can, for example, affect the amount of ice that needs to be cleared or the number of mariners who must be rescued). According to Coast Guard officials, new performance measures that better capture performance for its mission-programs and link resources to results are currently under development. For example, officials described efforts to develop a new measure, called Lives Unaccounted For, that captures an additional segment under the Search and Rescue mission-program. Also, two new measures are under development to further capture the Coast Guard's risk management efforts and link resources to results under the Ports, Waterways, and Coastal Security mission-program. As we have reported, the Coast Guard appears to be moving in the right direction with these efforts. However, because these efforts are long-term in nature, it remains too soon to determine how effective the Coast Guard's larger efforts will be at clearly linking resources to performance results; certain initiatives are not expected to be implemented until 2010. After the September 11, 2001, terrorist attacks, the Coast Guard's priorities and focus had to shift suddenly and dramatically toward protecting the nation's vast and sprawling network of ports and waterways. Coast Guard cutters, aircraft, boats, and personnel normally used for non-homeland security missions were shifted to homeland security missions, which previously consumed only a small portion of the agency's operating resources. Although we have previously reported that the Coast Guard was restoring activity levels for many of its non-homeland security mission-programs, the Coast Guard continues to face challenges in balancing its resources among each of its mission-programs. 
Further complicating this balancing issue is the possibility that unexpected events—a man-made disaster (such as a terrorist attack) or a natural disaster (such as Hurricane Katrina)—could again require shifting resources between homeland security and non-homeland security missions. It is also important to note that assets designed to fulfill homeland security missions can also be used for non-homeland security missions. For example, new interagency operational centers (discussed in more detail below) can be used to coordinate Coast Guard and other federal and non-federal participants across a wide spectrum of activities, including non-homeland security missions. The Coast Guard's heightened responsibilities to protect America's ports, waterways, and waterside facilities from terrorist attacks owe much of their origin to the Maritime Transportation Security Act (MTSA) of 2002. This legislation, enacted in November 2002, established, among other things, a port security framework that was designed to protect the nation's ports and waterways from terrorist attacks by requiring a wide range of security improvements. The SAFE Port Act, enacted in October 2006, made a number of adjustments to programs within the MTSA-established framework, creating some additional programs or lines of effort and altering others. The additional requirements established by the SAFE Port Act have added to the resource challenges already faced by the Coast Guard, as described below: Inspecting domestic maritime facilities: Pursuant to Coast Guard guidance, the Coast Guard has been conducting annual inspections of domestic maritime facilities to ensure that they are in compliance with their security plans. The Coast Guard conducted 2,126 of these inspections in 2006. However, Coast Guard policy directed that these inspections be announced in advance. 
The SAFE Port Act added requirements that inspections be conducted at least twice per year and that one of these inspections be conducted unannounced. More recently, the Coast Guard has issued guidance requiring that unannounced inspections be more rigorous than before. In February 2008, we reported that fulfilling the requirement for additional and potentially more rigorous inspections may require additional resources in terms of Coast Guard inspectors. Thus, we recommended that the Coast Guard reassess the adequacy of its resources for conducting facility inspections. The Coast Guard concurred with our recommendation. Inspecting foreign ports: In response to an MTSA requirement, the Coast Guard established the International Port Security Program to assess and, if appropriate, make recommendations to improve security in foreign ports. Under this program, teams of Coast Guard officials conduct country visits to evaluate the implementation of security measures in the host nations' ports and to collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. The SAFE Port Act established a minimum number of assessments, and congressional direction has called for the Coast Guard to increase the pace of its visits to foreign ports. However, to increase its pace, the Coast Guard may have to hire and train new staff, in part because a number of experienced personnel associated with this inspection program are rotating to other positions as part of the Coast Guard's standard personnel rotation policy. Coast Guard officials also said that they have limited ability to help countries build on or enhance their own capacity to implement security requirements because—other than sharing best practices or providing presentations on security practices—the program does not currently have the resources or authority to directly assist countries with more in-depth training or technical assistance. 
Fulfilling port security operational requirements: The Coast Guard conducts a number of operations at U.S. ports to deter and prevent terrorist attacks. Operation Neptune Shield, first issued in 2003, is the Coast Guard's operations order that sets specific security activities (such as harbor patrols and vessel escorts) for each port. As individual port security concerns change, the level of security activities also changes, which affects the resources required to complete the activities. As we reported in October 2007, many ports are having difficulty meeting their port security requirements, with resource constraints being a major factor. Thus, we made a number of recommendations to the Coast Guard concerning resources, partnerships, and exercises. The Coast Guard concurred with our recommendations. Meeting security requirements for additional LNG terminals: The Coast Guard is also faced with providing security for vessels arriving at four domestic onshore liquefied natural gas (LNG) import facilities. However, the number of LNG tankers bringing shipments to these facilities will increase considerably because of expansions that are planned or underway. For example, industry analysts expect approximately 12 more LNG facilities to be built over the next decade. As a result of these changes, Coast Guard field units will likely be required to significantly expand their security workloads to conduct new LNG security missions. To address this issue, in December 2007 we recommended that the Coast Guard develop a national resource allocation plan that addresses the need to meet new LNG security requirements. The Coast Guard generally concurred with our recommendation. Boarding and inspecting foreign vessels: Security compliance examinations and boardings, which include identifying vessels that pose either a high risk for non-compliance with international and domestic regulations, or a high relative security risk to the port, are a key component in the Coast Guard's layered security strategy. 
According to Coast Guard officials and supporting data, the agency has completed nearly all examinations and boardings of targeted vessels. However, an increasing number of vessel arrivals in U.S. ports may impact the pace of operations for conducting security compliance examinations and boardings in the future. For example, in the 3-year period from 2004 through 2006, distinct vessel arrivals rose by nearly 13 percent and, according to the Coast Guard, this increase is likely to continue. Moreover, officials anticipate that the increase in arrivals will also likely include larger vessels, such as tankers, that require more time and resources to examine. Similarly, the potential increase in the number of arrivals and the size of vessels is likely to impact security boardings, which take place 12 miles offshore, and are consequently even more time- and resource-intensive. While targeted vessels remain the priority for receiving examinations and boardings, it is unclear to what extent increased resource demands may impact the ability of the Coast Guard field units to complete these activities on all targeted vessels. Establishing interagency operational centers: The SAFE Port Act called for the establishment of interagency operational centers (command centers that bring together the intelligence and operational efforts of various federal and nonfederal participants), directing the Secretary of Homeland Security to establish such centers at all high- priority ports no later than 3 years after the Act’s enactment. The Act required that the centers include a wide range of agencies and stakeholders, as the Secretary deems appropriate, and carry out specified maritime security functions. Four existing sector command centers the Coast Guard operates in partnership with the Navy are a significant step toward meeting these requirements, according to a senior Coast Guard official. 
The Coast Guard is also piloting various aspects of future interagency operational centers at existing centers and is also working with multiple interagency partners to further develop this project. The Coast Guard estimates that the total acquisition cost of upgrading sector command centers into interagency operational centers at the nation’s 24 high priority ports will be approximately $260 million. This includes investments in information systems, sensor networks, and facilities upgrades and expansions. Congress funded a total of $60 million for the construction of interagency operational centers for fiscal year 2008. The Coast Guard has not requested any additional funding for the construction of these centers as part of its fiscal year 2009 budget request. However, the Coast Guard is requesting $1 million to support its Command 21 acquisition project (which includes the continued development of its information management and sharing technology in command centers). So, while the Coast Guard’s estimates indicate that it will need additional financial resources to establish the interagency operational centers required by law, its current budget and longer term plans do not include all of the necessary funding. Updating area maritime security plans: MTSA, as amended, required that the Coast Guard develop, in conjunction with local public and private port stakeholders, Area Maritime Security Plans. The plans describe how port stakeholders are to deter a terrorist attack or other transportation security incident, or secure the port in the event such an attack occurs. These plans were initially developed and approved by the Coast Guard by June 2004. MTSA also requires that the plans be updated at least every five years. The SAFE Port Act added a requirement to the plans specifying that they include recovery issues by identifying salvage equipment able to restore operational trade capacity. 
This requirement was established to ensure that the waterways are cleared and the flow of commerce through United States ports is reestablished as efficiently and quickly as possible after a security incident. The Coast Guard, working with local public and private port stakeholders, is required to revise the plans and have them completed and approved by June 2009. This planning process may require an investment of Coast Guard resources, in the form of time and human capital, at the local port level for plan revision and salvage recovery development, as well as at the national level for the review and approval of all the plans by Coast Guard headquarters. In December 2007, we recommended that the Coast Guard develop national-level guidance that ports can use to plan for addressing economic consequences, particularly in the case of port closures. The Coast Guard generally concurred with this recommendation. While the Coast Guard continues to be at the center of the nation's response to maritime-related homeland security concerns, it is still responsible for rescuing those in distress, protecting the nation's fisheries, keeping vital marine highways operating efficiently, and responding effectively to marine accidents and natural disasters. Some of the Coast Guard's non-homeland security mission-programs face the same challenges as its homeland security mission-programs with regard to increased mission requirements, as detailed below: Revising port plans into all-hazard plans: In February 2007, we reported that most port authorities conduct planning for natural disasters separately from planning for homeland security threats. However, port and industry experts, as well as recent federal actions, are now encouraging an all-hazards approach to disaster planning and recovery—that is, disaster preparedness planning that considers all of the threats faced by the port, both natural (such as hurricanes) and man-made (such as a terrorist attack). 
For homeland security planning, federal law provides for the establishment of Area Maritime Security Committees with wide stakeholder representation, and some ports are using these committees, or another similar forum with wide representation, in their disaster planning efforts. Federal law also provides for the establishment of separate committees (called Area Committees) for maritime spills of oil and hazardous materials. We recommended that the Secretary of Homeland Security encourage port stakeholders to use existing forums such as these that include a range of stakeholders to discuss all-hazards planning efforts. Revising area plans using an all-hazards approach may require additional Coast Guard resources at the local port level and at the national level. Revising oil spill regulations to protect the Oil Spill Liability Trust Fund: As the recent accident in San Francisco Bay illustrates, the potential for an oil spill exists daily across coastal and inland waters of the United States. Spills can be expensive with considerable costs to the federal government and the private sector. The Oil Pollution Act of 1990 (OPA) authorized the Oil Spill Liability Trust Fund, which is administered by the Coast Guard, to pay for costs related to removing oil spilled and damages incurred by the spill when the vessel owner or operator responsible for the spill—that is, the responsible party—is unable to pay. In September 2007, we reported that the fund has been able to cover costs from major spills—i.e., spills for which the total costs and claims paid was at least $1 million—that responsible parties have not paid, but additional risks to the fund remain, particularly from issues with limits of liability. Limits of liability are the amount, under certain circumstances, above which responsible parties are no longer financially liable for spill removal costs and damage claims. 
The current liability limits for certain vessel types, notably tank barges, may be disproportionately low relative to costs associated with such spills, even though limits of liability were raised for the first time in 2006. In addition, although OPA calls for periodic regulatory increases in liability limits to account for significant increases in inflation, such increases have never been made. To improve and sustain the balance of the fund, we recommended that the Coast Guard determine what changes in the liability limits were needed. The Coast Guard concurred with our recommendation. Aside from issues related to limits of liability, the fund faces other potential drains on its resources, including ongoing claims from existing spills; spills that may occur without an identifiable source and, therefore, no responsible party; and a catastrophic spill that could strain the fund's resources. Safeguarding the new national marine monument: In December 2000, Executive Order 13178 authorized the creation of the Northwestern Hawaiian Islands Coral Reef Ecosystem Reserve, called Papahanaumokuakea. The Reserve is about 140,000 square miles in area—slightly smaller than Montana, our fourth largest state. In 2006, the President declared this region a national monument to be monitored by the U.S. Fish and Wildlife Service and the National Oceanic and Atmospheric Administration, with support from the State of Hawaii and the Coast Guard. The Coast Guard's stewardship mission includes preserving the marine environment through monitoring of fishing activities, law enforcement, marine species protection, debris recovery, and oil spill cleanup and prevention. These activities are supported by collaboration with other organizations but nevertheless require regular aerial surveillance patrols and monitoring of vessel traffic. 
To ensure that commercial fishing is limited to selected vessels until 2011, several Coast Guard vessels patrol the region and conduct search and rescue missions, protect threatened species, or respond to potential hazards such as debris or damaged vessels. According to the Coast Guard, monument surveillance has added an additional enforcement responsibility onto an existing mission workload without the benefit of increased funding, personnel, or vessels and aircraft. Increasing polar activity: The combination of expanding maritime trade, tourism, exploratory activities and the shrinking Arctic ice cap may increase the demand for Coast Guard resources across a variety of non-homeland security missions. Moreover, multiple polar nations have recognized the value of natural resources in the Arctic region and have therefore sought to define and claim their own Arctic seabed and supply-chain access. However, the increase in Arctic activity has not seen a corresponding increase in Coast Guard capabilities. For example, two of the three Coast Guard polar ice-breakers are more than 30 years old. The continued presence of U.S.-flagged heavy icebreakers capable of keeping supply routes open and safe may be needed to maintain U.S. interests, energy security, and supply chain security. These new demands, combined with the traditional Polar mission to assist partner agencies such as the National Science Foundation in research while protecting the environment and commercial vessels in U.S. waterways, reflect a need for an updated assessment of current and projected capabilities. 
In the explanatory statement accompanying the DHS fiscal year 2008 appropriations, the Committees on Appropriations of the House of Representatives and Senate directed the Coast Guard to submit a report that assesses the Coast Guard's Arctic mission capability and analyzes the effect a changing environment may have on current and projected polar operations, including any additional resources needed in the form of personnel, equipment, and vessels. Over the years, our testimonies on the Coast Guard's budget and performance have included details on the Deepwater program related to affordability, management, and operations. Given the size of Deepwater funding requirements, the Coast Guard will have a long-term challenge in funding the program within its overall and AC&I budgets. In terms of management, the Coast Guard has taken a number of steps to improve program management and implement our previous recommendations. Finally, problems with selected Deepwater assets—the 110-foot patrol boats that were upgraded and converted to 123-foot boats and subsequently removed from service due to structural problems—have forced the Coast Guard to take various measures to mitigate the loss of these boats. These mitigating measures have resulted in increased costs to maintain the older 110-foot patrol boats and reallocation of operations across the various missions. These additional costs and mission shifts are likely to continue until the Coast Guard acquires new patrol boats. The Deepwater program represents a significant portion of the Coast Guard's budget, especially for acquisition, construction, and improvements (AC&I). The Deepwater program, at $990 million, accounts for approximately 11 percent of the Coast Guard's overall $9.3 billion budget request for fiscal year 2009. 
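The Deepwater budget-share figures cited in this statement can be checked with a quick calculation from its dollar amounts: the $990 million Deepwater request, the roughly $9.35 billion overall request, and the $1.21 billion AC&I request.

```python
def share(part, whole):
    """Percentage share of one budget component within a larger total."""
    return part / whole * 100

# Fiscal year 2009 request figures from this statement, in $ billions.
deepwater_request = 0.99
overall_request = 9.35
aci_request = 1.21

print(f"{share(deepwater_request, overall_request):.0f}% of the overall request")
# 11% of the overall request
print(f"{share(deepwater_request, aci_request):.0f}% of the AC&I request")
# 82% of the AC&I request
```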
As noted at the beginning of this statement, the overall federal government faces a long-term fiscal imbalance, which will put increased pressure on discretionary spending at individual agencies. In addition, Deepwater dominates the Coast Guard's capital spending, as it represents nearly 82 percent of the agency's total AC&I request of $1.21 billion. This leaves relatively little funding for non-homeland security assets, which—as we reported last year—compete with the Deepwater program for AC&I resources. For example, many inland aids-to-navigation vessels are reaching the end of their designed service lives and, without major rehabilitation or replacement, their ability to carry out their designated missions will likely decline in the future. While the Coast Guard has considered options for systematically rehabilitating or replacing these vessels, it has requested relatively little funding for them in fiscal year 2009. Specifically, the Coast Guard has requested $5 million in AC&I funds for survey and design activities to allow it to begin examining options for a new vessel to replace the aging inland river aids-to-navigation cutters. As we reported last year, Deepwater continues to represent a significant source of unobligated balances—money appropriated but not yet obligated for projects included in previous years' budgets. The unobligated balances for Deepwater totaled $566 million as of the end of fiscal year 2007, which is about 56 percent of the Coast Guard's fiscal year 2009 request for Deepwater. These unobligated balances have accumulated for a variety of reasons, such as technical design problems and related delays, that have left the Coast Guard unable to obligate prior-year acquisition appropriations. For two Deepwater assets for which the Coast Guard has postponed acquisition—the Offshore Patrol Cutter and the Vertical Unmanned Aerial Vehicle—the Coast Guard did not request funds for fiscal year 2008. 
In the fiscal year 2008 appropriation, Congress rescinded $132 million in unobligated balances for these two assets. For fiscal year 2009, the Coast Guard has requested relatively small amounts (approximately $3 million each) for these two assets. Given the magnitude of the program within the Coast Guard's overall and AC&I budgets, affordability of the Deepwater program has been an ongoing concern over the years. Our 1998 report on Deepwater indicated that the Coast Guard's initial planning estimate for Deepwater was $9.8 billion (in then-year dollars) over a 20-year period. At that time, we said that the agency could face major financial obstacles in proceeding with a Deepwater program at that funding level because it would consume virtually all of the Coast Guard's projected capital spending. Our 2001 testimony noted that affordability was the biggest risk for the Deepwater program because the Coast Guard's contracting approach depended on a sustained level of funding each fiscal year over the life of the program. In 2005, the Coast Guard revised the Deepwater implementation plan to consider post-9/11 security requirements. The revised plan increased overall cost estimates from $17 billion to $24 billion, with annual appropriations ranging from $650 million to $1.5 billion per year through fiscal year 2026. Continuing into future budgets, Deepwater affordability will remain a major challenge for the Coast Guard, given the other demands upon the agency for both capital and operations spending. In the wake of serious performance and management problems, the Coast Guard is making a number of changes to improve the management of the Deepwater program. The Coast Guard is moving away from the ICGS contract and the "system-of-systems" model, with the contractor as systems integrator, to a more traditional acquisition strategy, in which the Coast Guard will manage the acquisition of each asset separately. 
It has recognized that it needs to increase government management and oversight and has begun to transfer system integration and program management responsibilities back to the Coast Guard. The Coast Guard began taking formal steps to reclaim authority over decision-making and to more closely monitor program outcomes. It has also begun to competitively purchase selected assets, expand the role of third parties in performing independent analysis, and reorganize and consolidate its acquisition function to strengthen its ability to manage projects.

The Coast Guard also continues to make progress in implementing our earlier recommendations to better manage the Deepwater program. In March 2004, we made 11 recommendations to the Coast Guard to address three broad areas of concern: improving program management, strengthening contractor accountability, and promoting cost control through greater competition among subcontractors. Of the five recommendations that remained open as of our June 2007 report, we have closed two, pertaining to the Coast Guard’s use of models and metrics to measure the contractor’s progress toward improving operational effectiveness and establishing criteria for when to adjust the total ownership baseline. The Coast Guard has taken actions on the three recommendations that remain open, such as designating Coast Guard officials as the lead on integrated product teams, developing a draft maintenance and logistics plan for the Deepwater assets, and decreasing its reliance on ICGS, including potentially eliminating the award term provision from the ICGS contract.

Deferring acquisitions of new vessels and aircraft can affect the cost of operations, in that the cost-savings and reliability advantages of new or modernized assets may not be realized, and the cost of maintaining older assets can increase. For example, delays in the acquisition of new patrol boats have forced the Coast Guard to incur additional costs to maintain the older patrol boats.
As part of its Deepwater program, the Coast Guard planned to have ICGS convert all 49 existing 110-foot patrol boats into 123-foot patrol boats with additional capabilities. This conversion project was halted after the first eight 110-foot patrol boats were converted and began to suffer structural and operational problems. In November 2006, all eight 123-foot patrol boats were removed from service, and the Coast Guard had to take steps to better sustain its remaining 110-foot patrol boats. In fiscal year 2005, as the 123-foot patrol boat conversion was experiencing problems, the Coast Guard initiated the Mission Effectiveness Project to replace portions of the hull structure and mechanical equipment on selected 110-foot patrol boats to improve their overall mission effectiveness until a new replacement patrol boat is ultimately delivered. The Coast Guard has been appropriated a total of $109.7 million for this effort through fiscal year 2008, and its fiscal year 2009-2013 Five Year Capital Investment Plan indicates it will need an additional $56.3 million through fiscal year 2012. In addition, the Coast Guard plans to implement a “high tempo, high maintenance” initiative for eight of its 110-foot patrol boats. This initiative is aimed at increasing the number of annual operational hours for these eight patrol boats, at a cost of $11.5 million in fiscal year 2008.

The removal of the 123-foot patrol boats from service has also increased operational costs in terms of lost or reallocated missions. The loss of the eight 123-foot patrol boats created a shortage of vessels in District 7, where they were all homeported (i.e., based). As a result, the Coast Guard developed various strategies to mitigate the loss of these boats in District 7, which affected the Coast Guard’s ability to interdict illegal migrants. One of the Coast Guard’s strategies was to shift deployments of some vessels to District 7 from other districts within the Coast Guard’s Atlantic Area.
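As a quick check of the Mission Effectiveness Project figures above, the total anticipated funding works out as follows (amounts in millions of dollars):

```python
# Mission Effectiveness Project funding cited above, in millions of dollars.
appropriated_through_fy2008 = 109.7
additional_through_fy2012 = 56.3

total_funding = appropriated_through_fy2008 + additional_through_fy2012
print(round(total_funding, 1))  # 166.0, i.e., about $166 million in all
```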
In fiscal year 2007, the Coast Guard redeployed several vessels, which contributed approximately 6,600 operational hours in District 7, from Districts 1, 5, and 8 and the Atlantic Area Command. As discussed in the previous section, the Coast Guard faced a trade-off between homeland security missions and non-homeland security missions. In general, this mitigating strategy has led to increased homeland security operations in District 7 (e.g., for migrant interdiction) at the expense of some non-homeland security missions (e.g., living marine resources and aids to navigation) in the districts providing the assets. For example, District 5 officials estimated that the loss of one medium-endurance cutter deployment from its district to District 7 reduced its non-homeland security operations, potentially preventing District 5 from performing approximately 24 vessel boardings and issuing 17 violation notices in its living marine resources mission.

These additional costs will likely continue until the Coast Guard can acquire the replacement patrol boat, the Fast Response Cutter (FRC). The FRC was conceived as a patrol boat with high readiness, speed, adaptability, and endurance. ICGS proposed a fleet of 58 FRCs constructed of composite materials (later termed FRC-As). Although estimates of the initial acquisition cost for these composite materials were high, they were chosen for their perceived advantages over other materials (e.g., steel), such as lower maintenance and life-cycle costs, longer service life, and lower weight. However, in February 2006 the Coast Guard suspended FRC-A design work in order to assess and mitigate technical risks. As an alternative to the FRC-A, the Coast Guard planned to purchase 12 modified commercially available patrol boats (termed FRC-Bs). In June 2007, the Coast Guard issued a request for proposals for the design, construction, and delivery of a modified commercially available patrol boat for the FRC-B.
In late 2006, the Coast Guard estimated that the total acquisition cost for 12 FRC-Bs would be $593 million. The Coast Guard expects to award the FRC-B contract in the third quarter of fiscal year 2008, with the lead patrol boat to be delivered in 2010. Coast Guard officials stated that their goal is still to acquire 12 FRC-Bs by 2012. The Coast Guard intends to award a fixed-price contract for design and construction of the FRC-B, with the potential to acquire a total of 34 cutters.

Madam Chair and Members of the Subcommittee, this completes my prepared statement. I will be happy to respond to any questions that you or other Members of the Subcommittee may have.

For information about this statement, please contact Stephen L. Caldwell, Director, Homeland Security and Justice Issues, at (202) 512-9610, or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. This testimony was prepared under the direction of Dawn Hoff, Assistant Director. Other individuals making key contributions to this testimony include Jonathan Bachman, Christopher Conrad, Adam Couvillion, Anthony DeFrank, Wayne Ekblad, Susan Fleming, Jessica Gerrard-Gough, Geoffrey Hamilton, Maura Hardy, Christopher Hatscher, John Hutton, Lara Kaskie, Monica Kelly, J. Kristopher Keener, Daniel Klabunde, Richard Krashevski, Ryan Lambert, Scott Purdy, Ralph Roffo, Michele Mackin, James McTigue, Linda Miller, Kate Siggerud, April Thompson, Tatiana Winger, and Susan Zimmerman.

Appendix I provides information on key vessels and aircraft that are part of the Deepwater program. In 2005, the Coast Guard revised its Deepwater acquisition program baseline to reflect updated cost, schedule, and performance measures. The revised baseline accounted for, among other things, new requirements imposed by the events of September 11.
The initially envisioned designs for some assets, such as the Offshore Patrol Cutter and Vertical Unmanned Aerial Vehicle, are being rethought. Other assets, such as the National Security Cutter and Maritime Patrol Aircraft, are in production. Table 2 shows the 2005 baseline and current status of selected Deepwater assets.

Appendix II provides a detailed list of Coast Guard performance results for the Coast Guard’s 11 programs from fiscal years 2003 through 2007.

Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T. Washington, D.C.: Mar. 5, 2008.

Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program’s Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: Feb. 14, 2008.

Long-Term Fiscal Outlook: Action Is Needed to Avoid the Possibility of Serious Economic Disruption in the Future. GAO-08-411T. Washington, D.C.: Jan. 29, 2008.

Maritime Transportation: Major Oil Spills Occur Infrequently, but Risks to the Federal Oil Spill Fund Remain. GAO-08-357T. Washington, D.C.: Dec. 18, 2007.

A Call for Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: Dec. 17, 2007.

Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: Dec. 10, 2007.

Homeland Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-08-133T. Washington, D.C.: Oct. 31, 2007.

Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: Oct. 30, 2007.

Maritime Transportation: Major Oil Spills Occur Infrequently, but Risks to the Federal Oil Spill Fund Remain. GAO-07-1085. Washington, D.C.: Sep. 7, 2007.

Information on Port Security in the Caribbean Basin.
GAO-07-804R. Washington, D.C.: June 29, 2007.

Coast Guard: Challenges Affecting Deepwater Asset Deployment and Management and Efforts to Address Them. GAO-07-874. Washington, D.C.: June 18, 2007.

Coast Guard: Observations on the Fiscal Year 2008 Budget, Performance, Reorganization, and Related Challenges. GAO-07-489T. Washington, D.C.: Apr. 18, 2007.

Transportation Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-07-681T. Washington, D.C.: Apr. 12, 2007.

Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: Mar. 28, 2007.

Maritime Security: Public Consequences of a Terrorist Attack on a Tanker Carrying Liquefied Natural Gas Need Clarification. GAO-07-316. Washington, D.C.: Feb. 22, 2007.

Coast Guard: Condition of Some Aids to Navigation and Domestic Icebreaking Vessels Has Declined: Effect on Mission Performance Appears Mixed. GAO-06-979. Washington, D.C.: Sep. 22, 2006.

Coast Guard: Non-Homeland Security Performance Measures Are Generally Sound, but Opportunities for Improvement Exist. GAO-06-816. Washington, D.C.: Aug. 16, 2006.

Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006.

Coast Guard: Observations on Agency Performance, Operations, and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006.

Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: Dec. 15, 2005.

Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: Apr. 15, 2005.

Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: Mar. 17, 2005.
Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: Apr. 7, 2004.

Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: Mar. 9, 2004.

Coast Guard: Challenges during the Transition to the Department of Homeland Security. GAO-03-594T. Washington, D.C.: Apr. 1, 2003.

Coast Guard: Budget and Management Challenges for 2003 and Beyond. GAO-02-538T. Washington, D.C.: Mar. 19, 2002.

Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.

Coast Guard Acquisition Management: Deepwater Project’s Justification and Affordability Need to be Addressed More Thoroughly. GAO/RCED-99-6. Washington, D.C.: Oct. 26, 1998.

Coast Guard: Challenges for Addressing Budget Constraints. GAO/RCED-97-110. Washington, D.C.: May 1997.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The U.S. Coast Guard, a multi-mission maritime military service within the Department of Homeland Security, has requested more than $9 billion for fiscal year 2009 to address its responsibilities for homeland security missions (such as undocumented migrant interdiction) and non-homeland security missions (such as environmental protection). Integral to conducting its missions is the Deepwater program, a 25-year, $24 billion effort to upgrade or replace vessels and aircraft. This testimony discusses the budget request, trends, and performance statistics; challenges in balancing operations across multiple missions; and Deepwater affordability, management, and impact on operations.
GAO's comments are based on products issued from 1997 to 2008. This testimony also discusses ongoing work related to patrol boat operations. To conduct its work, GAO analyzed documentation and interviewed relevant officials.

The Coast Guard's fiscal year 2009 budget request is approximately 7 percent higher than its fiscal year 2008 enacted budget, generally because of proposed increases in both operating expenses and acquisition, construction, and improvements funding. The Coast Guard expects to meet its performance goals for 6 of its 11 mission areas for fiscal year 2007, similar to the performance it reported for fiscal year 2006. The Coast Guard also continues to develop additional measures to better understand the links between resources expended and results achieved.

The Coast Guard continues to face challenges balancing its various missions with its finite resources and has had difficulties funding and executing both its homeland security and non-homeland security missions. GAO's work has shown that the Coast Guard's homeland security requirements continue to increase and that it has been unable to keep up with these rising security demands. For example, the Coast Guard is not meeting its requirements for providing vessel escorts and conducting security patrols. The Coast Guard is also facing additional requirements to conduct more inspections of maritime facilities and provide security at a growing number of facilities that import hazardous cargos.

The Deepwater acquisition program continues to be a source of challenges and progress for the Coast Guard. In terms of affordability, the magnitude of Deepwater funding—representing about 11 percent of the agency's proposed fiscal year 2009 budget—presents a long-term challenge.
In terms of management, the Coast Guard has made changes to improve program management by moving away from reliance on a system integrator, increasing government monitoring of program outcomes, and competitively purchasing selected assets. In terms of operations, delays in the procurement of new patrol boats have increased resource requirements to maintain older legacy patrol boats and keep them operating.
Historically, state governments have sued and been sued by others in federal court for intellectual property infringement just like any other owner or user of intellectual property. The landscape changed dramatically in June 1999, however, when the Supreme Court ruled in Florida Prepaid Postsecondary Education Expense Board v. College Savings Bank that states could claim immunity under the Eleventh Amendment to the U.S. Constitution when sued in federal court for infringement. The term “intellectual property” is commonly used to refer to four types of intangible property—patents, trademarks, copyrights, and trade secrets. Patents are granted and trademarks are registered by the USPTO within the Department of Commerce, while copyrights are registered by the Copyright Office within the Library of Congress. Only the federal government issues patents and registers copyrights, while trademarks may also be registered by states that have their own registration laws. Trade secrets—which are not addressed in this report—are governed by state law. Anyone who uses the intellectual property of another without proper authorization is said to have “infringed” the property. Traditionally, an intellectual property owner’s remedy for such unauthorized use would be a lawsuit for injunctive and monetary relief. Federal law provides that lawsuits for patent and copyright infringements must be brought in federal court. Trademark suits for federally registered trademarks may be brought in either federal or state court. In the 1980s, the Congress grew concerned that some states were claiming that the Eleventh Amendment to the U.S. Constitution provided them immunity when sued for intellectual property infringement in federal court. 
Moreover, the Supreme Court ruled in 1985 that, to abrogate such immunity, Congress must “mak[e] its intention unmistakably clear in the language of the statute.” In response to these concerns, the Congress in the early 1990s passed “clarification” laws for patents, trademarks, and copyrights to provide that states (1) could commit infringement and (2) could be sued for infringement in federal court. The reasoning behind these laws was that the states should be subject to the same rules as other users of intellectual property if they desired to be protected by those rules.

In 1994, College Savings Bank, a New Jersey corporation, brought a federal suit against the Florida Prepaid Postsecondary Education Expense Board, a state agency, for infringing College Savings’ patent for certain certificates of deposit/annuity contracts. When Florida Prepaid asserted that it was immune to the suit under the Eleventh Amendment, College Savings Bank argued that such a defense was no longer valid because the state’s immunity had been abrogated by the Patent and Plant Variety Protection Remedy Clarification Act. The federal district court and court of appeals agreed with College Savings Bank and held the act to be valid. However, the U.S. Supreme Court disagreed with the lower courts and struck down the act in June 1999 in its Florida Prepaid decision. Following a line of cases begun in 1996, the Supreme Court reiterated that the Congress did not have the authority to abrogate a state’s Eleventh Amendment immunity under the powers given the legislative branch under Article I of the U.S. Constitution. The Court said that the Congress did have authority under the due process clause of the Fourteenth Amendment to abrogate state immunity, but in this instance it did not show that the states (1) had engaged in a pattern of infringement or (2) did not have suitable remedies of their own.
Finding that the legislative history contained no such evidence, the Court ruled that the Congress’ attempt to abrogate Eleventh Amendment immunity in patent infringement cases did not meet the requirements of the Fourteenth Amendment and that, consequently, the patent clarification act was invalid. The Supreme Court’s decision in Florida Prepaid dealt with patent infringement. However, based on a companion case involving unfair competition decided by the Supreme Court on the same day as Florida Prepaid and its action in a copyright infringement case remanded and later decided by the Fifth Circuit Court of Appeals in February 2000, it is generally believed that the Florida Prepaid decision applies to all forms of federally protected intellectual property. Some members of the intellectual property community have raised concerns over the ramifications of Florida Prepaid. Specifically, they find the current situation to be unfair, because states—which themselves are owners of intellectual property—benefit from the protection of the federal intellectual property laws but do not have to be bound by them. Furthermore, these members say there is no effective remedy for state infringement of patents and copyrights if the states cannot be sued in federal court. These concerns were the topic of a discussion group convened by the USPTO on March 31, 2000, that included the USPTO, the Copyright Office, attorneys and associations representing various interests within the intellectual property community, legal scholars, and state officials. They also were the subject of a hearing on June 27, 2000, by the Subcommittee on Courts and Intellectual Property, House Committee on the Judiciary, that included the USPTO, the Copyright Office, and two legal scholars. An analysis of state ownership of intellectual property was beyond the scope of this report. 
However, appendix II provides a summary of patents, trademarks, and copyrights owned by state institutions of higher education, based on the information available from the USPTO and the Copyright Office. Based on the best data available, accusations against the states for intellectual property infringement appear to be few. The precise number cannot be determined because not all accusations result in lawsuits; those that do will not always be in a published decision; and those that do result in a published decision are not always identifiable as involving accusations of intellectual property infringement or state defendants. Our analysis of published case law and surveys of the states identified 58 lawsuits since January 1985 alleging infringement or unauthorized use of intellectual property by state entities. Forty-seven of these lawsuits against states were brought in federal court, accounting for less than 0.05 percent of all federal intellectual property lawsuits filed during the period reviewed, while 11 had been brought in state court. Twenty-seven of the 58 infringement lawsuits—23 federal and 4 state—had been decided in favor of the state defendants or dismissed. The states appear to resolve more accusations of infringement out of court than through lawsuits. However, these instances also appear to be few in number. Of the 99 state institutions of higher education that returned our surveys, for example, 35 said they had not dealt with any accusations at all since January 1985 and 42 said they had dealt with 5 or fewer. Identifying all past accusations of intellectual property infringement against the states over any period is difficult, if not impossible, because there are no summary databases providing such information. The published case law is an incomplete record, because (1) both the federal and state courts report only those cases in which decisions were rendered and (2) state courts usually report only appellate decisions. 
Thus, lawsuits that were dropped or settled by any court prior to a decision as well as those decided by state trial courts might not appear in the published case law. Furthermore, accusations that are made through such mechanisms as cease-and-desist letters that were resolved administratively without a lawsuit being filed would not appear in the published case law. It is also difficult to identify lawsuits for which the underlying accusations appear to be claims of infringement, but the lawsuits themselves were brought under some other cause of action. For example, a lawsuit that involves a contract dispute might also include an accusation of unauthorized use of intellectual property. Similarly, a lawsuit in state court over what appears to be an accusation of patent or copyright infringement might have been brought under some state-recognized cause of action. Even where infringement lawsuits can be identified, it is not always possible to determine whether one of the parties was a state entity that could claim immunity. For example, some organizations that have the name of the state in their own title (e.g., California Institute of Technology) are not state entities while other organizations not carrying the state name (e.g., Auburn University) are nevertheless entities of the state. Moreover, not all state entities qualify for Eleventh Amendment immunity. For example, some Pennsylvania universities generally considered to be public institutions are only quasi-state entities for litigation purposes and do not have immunity in federal courts. Similarly, the community colleges in some states could have Eleventh Amendment immunity while those in other states might not. In still other cases, a particular entity’s ability to claim immunity may be unknown. 
Because of the difficulties in identifying accusations of infringement against the states through the case law, we supplemented our search with surveys to state attorneys general and state institutions of higher education. Attorneys general were selected because they are the primary legal authorities in the executive branches of their respective states. State institutions of higher education were selected because they are among the most significant state entities in terms of ownership and use of patents, trademarks, and copyrights. Our surveys asked for information on both lawsuits and matters resolved out of court since January 1985. Thirty-six of the 50 attorneys general (about 72 percent) and 99 of the 140 institutions of higher education (about 71 percent) responded to our survey.

The survey responses offered no assurance that we had identified all the accusations of infringement or unauthorized use of intellectual property made against the states, as the respondents themselves did not always have such information. The state attorneys general are not necessarily informed of accusations of infringement against other state entities (see app. III, tables 9, 25, and 28). Similarly, legal representatives from some state institutions of higher education we contacted told us that, while they generally could identify the few lawsuits to which they had been parties, they did not always have formal mechanisms for identifying actions dealt with administratively. In order to respond to our requests, some attorneys general and state institutions of higher education told us that they had had to research detailed case files or rely on the collective memory of staff. Even these approaches were problematic for researching accusations beyond recent years, as the files were not organized for such a search and the current staff may not have been in place since January 1985.
We identified a total of 58 lawsuits involving accusations of the unauthorized use of intellectual property that were active at some time since January 1985, and where state entities were the defendants (see table 1 and app. IV, table 47). These included (1) lawsuits where the stated cause of action was infringement of a patent, trademark, or copyright, (2) requests for declaratory judgments, and (3) lawsuits brought under some cause of action other than infringement but where the state nevertheless appears to have been accused of the unauthorized use of intellectual property. Forty-seven of the 58 lawsuits that we identified were brought in federal court. In analyzing these cases, we noted the following: Twenty states were involved in one or more lawsuits each. One state was a defendant in 10 suits, 2 were defendants in 5 each, 3 were defendants in 3 each, 4 were defendants in 2 each, and 10 were defendants in 1 each. Thirty-two of the lawsuits involved state institutions of higher education; the remaining 15 involved other entities of the states. Thirty-five of the lawsuits involved infringement actions, while the remaining 12 involved requests for declaratory judgments only. The defendant states were the prevailing party in all 23 lawsuits resolved by the courts. Ten lawsuits were decided, and 13 lawsuits were dismissed. Of the 13 lawsuits dismissed, 10 were dismissed because the state defendant was found to have Eleventh Amendment immunity. Of these, 6 were dismissed prior to and 4 were dismissed as part of or after the June 1999 Florida Prepaid and College Savings Bank decisions. The Eleventh Amendment was also raised in some other cases that were settled or still active. For example, the Court of Appeals for the Fifth Circuit found the state to have immunity in Chavez v. Arte Publico Press. The parties settled the case prior to a final decision by the district court to which the case had been remanded. 
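The tallies of federal lawsuits above can be verified with a short sketch (each pair below restates how many states were defendants in how many suits each):

```python
# (number of states, suits per state) pairs from the breakdown above.
distribution = [(1, 10), (2, 5), (3, 3), (4, 2), (10, 1)]

states_involved = sum(states for states, _ in distribution)
total_suits = sum(states * suits for states, suits in distribution)
print(states_involved)  # 20 states involved
print(total_suits)      # 47 federal lawsuits
```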
Of the 11 lawsuits heard in state court, we noted the following: Two states were defendants in three lawsuits each, and five states were defendants in one lawsuit each. Five of the lawsuits involved state institutions of higher education; the remaining six involved other entities of the states. Four of the lawsuits brought in federal court were also brought in state court. In three cases, the state actions were introduced after the federal court decided or dismissed the federal lawsuits against the states. In the fourth case, the federal action was introduced after the state court dismissed the state lawsuit against the state. The state was the prevailing party in the four cases resolved by state courts—two by rendering a decision and two by dismissing the action. Of the two lawsuits dismissed, one was because the court said it lacked jurisdiction on what was essentially a copyright infringement claim, and one was because the court determined the state was not a party to the unauthorized use of intellectual property.

We identified an additional 42 lawsuits—36 federal and 6 state—active in federal or state court since January 1985 where the state was a plaintiff (see app. IV, table 48). While a complete analysis of such cases was beyond the scope of our review, we include them to provide additional information on the extent to which states are involved in litigating intellectual property infringement suits.

The lawsuits against states also appear to be few in number when compared to the number of infringement lawsuits against all defendants. Statistics accumulated by the Administrative Office of the U.S. Courts show 104,898 district court cases were filed from fiscal year 1985 through fiscal year 2000 that involved protected property rights for patents, trademarks, and copyrights (see app. V, table 49).
Thus, the 47 federal cases we identified accounted for 0.045 percent of all the federal lawsuits filed over this period that involved possible intellectual property infringements. We did not identify state court statistics that could be used for comparison. During our visits to three states, state officials acknowledged that they were more likely to handle an accusation of intellectual property infringement administratively than to be the defendant in a lawsuit. They said the reason was that they do not intentionally infringe or misuse the property of others and, when confronted with an infringement accusation, they investigate the matter thoroughly. If they find no infringement, they say they advise the complaining party and provide their rationale. If they do find a potentially infringing use, they say they attempt to make amends by ceasing such use, obtaining a license, or reaching some type of monetary settlement. The state officials noted that it was very difficult for them to identify matters they had resolved administratively, as these matters can arise and be dealt with in different ways. One way they are accused of infringement is through a cease-and-desist letter, where the complainant advises the state entity of its ownership of a particular property, the nature of the state’s unauthorized use, the actions required of the state entity, and the consequences if such actions are not taken. Not all notifications to the state are this formal, however, nor are they necessarily written. Similarly, the state’s response may vary depending on the circumstances. In some cases, the state provides a rationale for the use of the property, does not receive a response from the complainant, and eventually considers the matter dropped. In other cases, the state may take some remedial action, although not necessarily the action requested. 
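As a check, the 0.045 percent share cited above follows directly from the counts:

```python
# 47 identified federal suits against states, out of 104,898 district court
# cases involving patent, trademark, and copyright rights (FY1985-FY2000).
share = 47 / 104898
print(round(share * 100, 3))  # 0.045 (percent)
```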
We asked the state attorneys general and state institutions of higher education that we surveyed to estimate the number, within specific ranges, of infringement accusations made against the states since January 1985 that had been dealt with administratively without a lawsuit being filed. Six of the 36 attorneys general responding to our request said they had identified no such matters handled by their states, while another 12 said that they did not know if their states had dealt with any accusations at all. Of the 18 attorneys general that did identify such matters, 11 identified between 1 and 5 matters each, 4 identified between 6 and 10, 1 identified between 11 and 15, and 2 identified between 16 and 30 (see app. III, table 11). Thirty-five of the 99 state institutions of higher education that responded to our request said that they identified no accusations of intellectual property infringement dealt with out of court, while 10 said that they did not know if they had dealt with any. Of the 54 that did identify such matters, 42 said they had dealt with between 1 and 5 each, 4 said that they had dealt with between 6 and 10, 7 said that they had dealt with between 11 and 15, and 1 said that it had dealt with between 16 and 30 (see app. III, table 29). In describing how it handles potential infringements, the SIIA stated the following: “We refer to these events as ‘matters’ because in the overwhelming majority of cases, no litigation actually results. Instead, after the SIIA learns of a possible infringement, it contacts the infringing entity to request an audit of its existing software, and attempts to bring that entity into compliance with the law. Normally, the entity will then pay a penalty and a license fee for the number of unauthorized copies it is using…” Representatives from the SIIA told us that, while they agree that state institutions of higher education are significant users of intellectual property, there are many other users in the states as well, particularly in regard to software.
The association noted that, of the 77 infringement matters it identified, about 50 percent involved state institutions of higher education while the rest involved state hospitals, bureaus, public service commissions, and other instrumentalities. According to the state officials, legal scholars, and other members of the intellectual property community we contacted, few alternatives or remedies appear to remain after Florida Prepaid for intellectual property owners who believe that a state has infringed their property. A state cannot be sued in federal court for damages except in the unlikely event the state waives its Eleventh Amendment immunity. If the state cannot be sued for damages, the only other alternative in federal court would be to obtain an injunction against the infringing state official. This is seen as an incomplete remedy because, while it might stop the person enjoined from continuing the infringement, the state would not be liable for monetary damages. It is too early to tell whether the state courts provide adequate alternatives or remedies for state infringement after Florida Prepaid, as there have been so few lawsuits attempted in state court to date. However, many of the representatives of the intellectual property community whom we contacted did not see the state courts as a viable alternative. They said that a state court probably would not hear a patent or copyright infringement lawsuit because federal law requires such suits to be brought in federal court. Thus, for a patent or copyright lawsuit against any party to succeed in state court, the intellectual property owner would have to convince the court that damages were recoverable under some state-recognized cause of action—such as a taking of private property under a reverse eminent domain theory—which has yet to be tested in an intellectual property context and subjected to appellate court review.
The representatives of the intellectual property community noted that, even if such causes of action were accepted in state court, they might not be of any value against a state infringer because the state may have immunity in its own courts under state law. As with attempting to enumerate past accusations of infringement against the states, identifying the alternatives and remedies available to an intellectual property owner who believes a state has committed infringement after Florida Prepaid is difficult, if not impossible, because (1) there are no databases showing this information, (2) the alternatives and remedies may vary by state and type of intellectual property, and (3) any alternatives or remedies that might be available are largely untested. To identify potential alternatives and remedies, we elicited the views of state officials, legal scholars, and other members of the intellectual property community. We also included questions on this subject in the surveys we sent to state attorneys general and state institutions of higher education as well as in separate questionnaires to the 37 state bar associations that we identified as having intellectual property sections. Many of the officials we contacted reiterated the general view that Florida Prepaid severely limits a plaintiff’s ability to bring a lawsuit against a state for intellectual property infringement in federal court. Lawsuits seeking damages in federal court were seen as impossible unless the defendant states waived their immunity—an action they were not seen as likely to take. The remaining alternative in federal court would be to obtain an injunction against the infringing state official, an action that might stop the continuing infringement but would not result in the state’s reimbursing the intellectual property owner for past harm. We did not identify any infringement lawsuits in which state defendants had voluntarily waived their immunity in federal court.
In the surveys we sent to state attorneys general, state institutions of higher education, and bar associations, we asked the respondents whether state entities had the right to waive immunity in federal court. The majority of respondents said that either the state entities did not have the authority to waive or these respondents did not know whether waiver was possible. Specifically, they noted the following: Four of the 36 attorneys general responding said that their states had the right to waive immunity, while 22 said there was no such right and 10 said they did not know if the state could waive immunity. Of the 22 respondents that said the state did not have the right to waive, 6 cited their state constitutions as the prohibiting authority while 5 cited state statutes, 7 cited case law, 3 cited some other authority, and 1 did not provide an authority (see app. III, tables 13 and 15). Twelve of the 99 state institutions of higher education responding said that they had the right to waive immunity, whereas 58 said there was no such right, 20 did not know, and 9 did not answer the question. Twenty of the 58 respondents that said they could not waive immunity cited their state constitutions as the prohibiting authority while 2 cited state statutes, 9 cited case law, 14 cited some other authority, and 3 did not respond (see app. III, tables 31 and 33). Five of the 21 bar associations responding said that their states had the right to waive immunity, while 6 said there was no such right and 10 said they did not know. Three of the 6 respondents that said their states could not waive immunity cited their state constitutions as the prohibiting authority, 1 cited state statutes, and 2 cited case law (see app. III, tables 36 and 38). Members of the intellectual property community have noted that states have no incentive to waive Eleventh Amendment immunity in federal court. In the three states we visited, state officials said they would not waive immunity.
They noted that, as discussed above, they do not infringe knowingly and make every effort to resolve any infringement that does occur. They said that, if subjected to a lawsuit, they thus would disagree with the accusation and would not give up any possible defense—including Eleventh Amendment immunity—that would allow them to avoid expensive and unnecessary litigation. Similarly, we discussed this issue with private attorneys who noted that an attorney representing the state would have to raise the immunity defense and that not doing so might present a question of malpractice. We did identify two lawsuits decided since the Florida Prepaid decision in which the federal district courts found a “constructive waiver” on the part of the state defendants. Many of the state officials and other members of the intellectual property community we contacted believed that, even after the Florida Prepaid decision, it was possible to get an injunction in federal court to prevent an ongoing infringement by a state entity. The federal injunction theory is based on the premise that, although the state itself cannot be sued for infringement in federal court, an intellectual property owner can get an injunction against the infringing state official. The federal court injunction remedy may have its own limitations. Generally, the plaintiff would not be entitled to any monetary damages for past harm from the state itself. Another problem, according to one attorney we contacted, is that an injunction in the past normally would be granted in the course of a federal infringement suit for damages. Because there would be no separate federal action for damages if the state had immunity, the plaintiff might still have to go through an expensive and protracted lawsuit to obtain the injunction without any expectation that damages would be paid. The respondents to our surveys had mixed or no opinions on the value of the federal injunction as an alternative or remedy.
When asked if they agreed that an intellectual property owner could get an injunction against a state employee for infringement in federal court, 5 of the 36 attorneys general responding to our surveys said they “strongly agree,” while 7 said they “somewhat agree,” 4 were neutral on the subject, 1 said they “somewhat disagree,” 4 said they “strongly disagree,” and 15 had no opinion (see app. III, table 16). We also queried the bar associations on this issue. Among the 21 responding, 3 said they “strongly agree,” 5 said they “somewhat agree,” 1 was neutral on the subject, 5 said they “somewhat disagree,” 3 said they “strongly disagree,” and 4 had no opinion (see app. III, table 39). When asked for their opinions on whether alternatives or remedies were available in federal court other than an infringement suit where a state had waived its immunity or an injunction against a state official, most survey respondents either said there were no other options or had no opinion. Among the 36 attorneys general that responded, 1 said that other alternatives or remedies were available in federal court, while 11 said there were none and 24 said they had no opinion (see app. III, table 17). Seven of the 21 bar associations responding said there may be some other alternative or remedy in federal court, while 7 said there were not and 7 said they had no opinion (see app. III, table 40). If the federal courts are unavailable, the other potential forum for pursuing a lawsuit against a state for damages would be the state courts. While this is an option for trademarks, many of those we contacted saw little chance of success with infringement-type actions in state court for patents and copyrights because of federal judicial preemption and an absence of state-recognized causes of action. Furthermore, even if infringement suits can be brought in state court, it may not be possible to bring them against states that have governmental immunity shielding them from suit in their own courts.
We asked both the attorneys general and the intellectual property sections of state bar associations about the possibility of bringing infringement suits in state court. Ten of the 36 attorneys general that responded said that infringement lawsuits could be brought in their state courts, while 5 said they could not and 21 had no opinion (see app. III, table 18). Seven of the 21 bar associations that responded said such suits could be brought in their state courts, while 7 said they could not, 6 had no opinion, and 1 did not respond to the question (see app. III, table 41). The first hurdle to bringing an intellectual property infringement action against a state in state court is federal judicial preemption in patent and copyright cases. Section 1338 of Title 28 of the U.S. Code gives the federal district courts “original jurisdiction of any civil action arising under any Act of Congress relating to patents, plant variety protection, copyrights and trademarks.” Section 1338 further provides that “Such jurisdiction shall be exclusive of the courts of the states in patent, plant variety protection and copyright cases.” The exclusive jurisdiction of the federal courts may be an insurmountable bar to a plaintiff who would seek a remedy for patent and copyright infringement in state court, regardless of whether the defendant was a state or a private party. Representatives from the intellectual property community that we contacted repeatedly brought up this problem as a reason why these cases would not be heard in state court. Seven of the 36 attorneys general and 16 of the 21 bar associations that responded to our surveys saw federal judicial preemption as such an impediment (see app. III, tables 20 and 43). Federal judicial preemption is a problem only for patents and copyrights, as state courts are able to hear trademark cases. However, the federal courts traditionally have served as the preferred forum.
An attorney who specializes in trademark cases noted that trademark actions generally have been brought in federal court in the past because (1) most trademarks are federally registered; (2) suits on federally registered trademarks can address interstate infringements; (3) infringement suits are easier to bring in federal court because the burden of proof shifts to the other party if the trademark owner can prove that the mark is registered with the USPTO; and (4) federal courts are seen as more convenient because the federal judges are experienced in these types of actions and the law is uniform nationwide. Eight attorneys general said that trademark infringement suits were possible in their state courts and 7 bar associations believed the state could be sued for trademark infringement in state court (see app. III, tables 18, 19, 41, and 42). Because patent and copyright infringement suits must be brought in federal court, an intellectual property owner wishing to bring a suit in state court for the unauthorized use of intellectual property—regardless of whether the defendant is a state—would have to bring the case under some cause of action other than infringement. This second hurdle to bringing an intellectual property infringement action against a state creates two problems for the property owner. First, he or she must pursue a cause of action that the court will recognize as appropriate and that is capable of providing the relief the property owner is seeking. Second, the claim must not be such that the court will find the suit is, in effect, an infringement action and dismiss it for lack of jurisdiction. Many of the state officials and representatives of the intellectual property community we contacted provided a number of possible causes of action that intellectual property owners might pursue in state court. One option that was posited, for example, was a “taking” under a reverse eminent domain, or “inverse condemnation” theory. 
Under this cause of action, the intellectual property owner would claim that the state had “taken” the property—much as it takes real property for road right-of-way or construction projects—and that the property owner is entitled to just compensation as provided by the Fifth Amendment to the U.S. Constitution. One of the potential problems with this cause of action is that it generally has been applied in the context of real estate or other tangible property rather than to intangible property such as patents and copyrights. Another suggested cause of action was breach of contract. Under this theory, the intellectual property owner would argue that the state was not abiding by the terms of an agreement between the state and the property owner. A potential problem with this cause of action is that it requires the court to find that a contract existed between the parties. Also, any damages awarded may be limited to those provided by the contract. A third cause of action noted as possible was some type of tort action against the state for injury or damages caused by the state’s unauthorized use of the intellectual property. One of the problems seen with pursuing a tort cause of action is the property owner would in essence be bringing the same type of case that would be brought in an infringement action. Thus, even though the legal theory might be one that was appropriate and could result in compensation for damages, a state court might dismiss it for lack of jurisdiction because of federal judicial preemption. In the surveys we sent to state attorneys general and bar associations, we asked for opinions on alternative legal theories that might be pursued in state court. Three of the 36 attorneys general that returned our surveys said there was no theory under which a property owner could obtain damages and 20 had no opinion. Of the 13 that advanced one or more theories, the most common were a taking, such as reverse eminent domain, tort, and contract. 
Other theories included an action before a state claims commission or board, unfair competition, conversion, and trespass to chattels (see app. III, table 21). Seventeen of the 21 bar associations that returned our surveys advanced at least one theory for a state cause of action for state infringement of intellectual property, while 2 said no theory was applicable and 2 had no opinion. As with the attorneys general, the most common causes of action suggested were a taking, such as reverse eminent domain, tort, or contract. Other suggestions included criminal law, trade secret misappropriation, and unfair competition (see app. III, table 44). We also asked the state attorneys general and bar associations whether they believed damages could be recovered against their states if a property owner could obtain a judgment against the state in state court for unauthorized use of intellectual property. Of the 36 attorneys general that returned our surveys, 5 said damages definitely would be allowed, 6 said they probably would be allowed, 1 said recovery was as likely as not, 3 said damages probably would not be allowed, 1 said that they definitely would not be allowed, 17 had no opinion, and 3 did not respond to the question (see app. III, table 22). Of the 21 bar associations that returned our surveys, 1 believed damages definitely would be allowed, 8 said they probably would be allowed, 1 said recovery was as likely as not, 2 said damages probably would not be allowed, and 9 had no opinion (see app. III, table 45). Many of the state officials and representatives of the intellectual property community we contacted noted that the use of state-recognized causes of action in patent and copyright cases was unproven and speculative.
They said that (1) there is little or no experience with pursuing these causes of action in intellectual property cases, (2) the appropriateness and applicability of such causes of action might vary state by state, and (3) the likelihood of success of such causes of action cannot be known until decisions involving their use in intellectual property cases have been reviewed by the appellate courts. Some members of the intellectual property community also noted that, even if these causes of action were successful, they would not necessarily allow recoveries similar to those in federal court. They pointed out, for example, that federal copyright law provides for statutory damages for infringement. In state court, the property owner might have to prove actual damages. Also, states would differ in how infringement cases would be brought in state court, requiring the intellectual property owners and attorneys to be familiar with multiple jurisdictions. Few lawsuits accusing the states of the unauthorized use of intellectual property appear to have been brought in state court. To determine the legal theories that have been used in the past in such cases, however, we analyzed each of the 11 intellectual property cases we identified above as having been brought in state court since January 1985. Table 2 shows the causes of action pursued and the results achieved in each of these cases. Overall, these cases appear to do little to determine the availability of state causes of action for unauthorized use of intellectual property by states. We identified 11 cases in total, and these involved only 7 different states. Of the 11 cases, 4 were decided by the courts, while 4 were settled by the parties. Another three cases remain active, but in only one of these has a state appellate court ruled that the case can proceed under the state-recognized cause of action—a taking without just compensation—pursued by the plaintiff.
A third hurdle to bringing an infringement action in state court against a state is the state’s governmental immunity in its own courts. This type of immunity differs from Eleventh Amendment immunity in that, within state law, the state is sovereign and usually cannot be sued unless it has given its permission to be sued. State law varies from state to state on the issue of governmental immunity depending on each state’s constitution, specific statutes, or judicial interpretation. Eight of the 36 attorneys general who responded to the surveys said that state governmental immunity would be an impediment to state court infringement actions. Three others saw state law as an impediment, and one said the case law was not developed in this area. Two attorneys general saw no impediments. Not all of the attorneys general responded to the question on impediments (see app. III, table 20). The state bar representatives also saw state governmental immunity as a problem in suing a state for infringement in its own courts. Thirteen of the 21 bar associations that responded to our surveys said state governmental immunity would be an impediment to suing their states for infringement in state court. In addition, two bar associations saw state law and one saw federal case law as impediments to bringing such suits. Only one bar association said there were no impediments, while two said they did not know. Like the attorneys general, not all bar associations responded to the question on impediments (see app. III, table 43). The ability to sue a state in its own courts varies among the states. A Washington official, for example, said the state allows suits for contracts, takings, and torts against the state in its own courts. In Texas, on the other hand, officials said that, in most cases, a plaintiff would have to obtain approval from the state legislature in order to sue the state and be paid damages. 
In still other cases, the states have given approval to being sued by establishing special courts that will hear actions against the state. New York, for example, has established a Court of Claims that can hear claims against the state. New York law limits such actions, however, to those cases where the state was performing a ministerial, as opposed to a protected discretionary, function. The intellectual property community is divided on what states should and could do, if anything, to protect the rights of intellectual property owners against state infringement after Florida Prepaid. Some state officials say that nothing more needs to be done because there is no demonstrated problem, as evidenced by the small number of infringement accusations made against them in the past and their willingness to investigate and take corrective action when they are made aware of a potentially infringing use. They also note that, if intellectual property owners are not satisfied with the states’ response to accusations of infringement, they can still obtain a federal injunction or pursue a lawsuit for damages in state court under some state-recognized cause of action. They say that, if the state remedies are considered inadequate, the blame lies not with the states but with the federal government, which preempts state courts from hearing patent and copyright infringement cases. They see no reason for new federal legislation—except possibly for the removal of federal judicial preemption—saying that state immunity is an inherent right of the states that provides an important defense against groundless lawsuits. Others in the intellectual property community we contacted say that, while it is true there has not been a substantial number of cases of infringement by the states, this is because the states previously were of the opinion they could be sued for damages in federal court—a situation that no longer exists. 
They point to what they see as the essential unfairness of a state’s being able to sue others but not being subject to suits themselves. An injunction in federal court is not an answer, they say, because it would not result in an award of damages and the litigation necessary to obtain the injunction could itself be expensive and protracted. They do not see the state courts as a viable alternative because of federal preemption and the lack of proven state causes of action. Some members of the intellectual property community believe additional federal legislation is needed. The proposals range from again attempting to take away a state’s right to Eleventh Amendment immunity in intellectual property suits—seen as unlikely in view of the Florida Prepaid decision—to requiring a state to waive immunity in return for the right to own intellectual property, protect those rights in federal court, or receive certain federal funds or benefits. Some of the state officials we contacted said there was no reason for intellectual property owners to be overly concerned about the Florida Prepaid decision. They said that states had not engaged in a pattern of infringement in the past—as evidenced by the small number of lawsuits that had been brought against the states and the even smaller number that had been successful—and that states were not likely to commit more infringements now just because they knew they could not be sued for damages in federal court. Some state officials we contacted noted that the states have strong policy motivations not to commit intellectual property infringement, as they are governmental authorities committed to protecting and preserving the rights of their citizens. In this regard, some officials from state institutions of higher education pointed to internal and state policies that prohibit employees and students from making unauthorized use of privately held property.
They said that, as both major users and owners of intellectual property, the institutions are familiar with the laws governing the use of intellectual property and spend considerable effort ensuring that employees and students are aware of the allowable uses, obtain necessary approval and licenses, etc. Moreover, because the institutions are in the position of having to defend their own properties against infringement, the officials said they are closely attuned to the need to avoid the additional time and resources necessary to litigate or otherwise resolve potential cases of infringement. One example of how states say they have reacted to the Florida Prepaid decision was provided by an attorney from a state attorney general’s office. This attorney said that his office had received inquiries concerning whether the state still needed to obtain licenses to use the intellectual property of others. He said that his office responded that nothing had changed, that the state intended to abide by the intellectual property laws, and that state entities would need to continue doing what was necessary to ensure that they do not commit infringement. This attorney, who had successfully argued an Eleventh Amendment defense in a federal suit against a state institution of higher education, said that he believed the states actually have an even higher interest in not infringing after Florida Prepaid. He noted that the Supreme Court had based its decision largely on the states’ not having committed a substantial number of infringements in the past and that, if they now began to commit such infringements, the Congress would have a basis for pursuing new legislation to abrogate Eleventh Amendment immunity. The state officials also noted that the scope of the Eleventh Amendment is relatively narrow. 
As discussed above, for example, certain state institutions of higher education may not qualify for Eleventh Amendment immunity because of the way they are funded or organized within the state. Also, many of the attorneys general, state institutions of higher education, and bar associations that responded to our surveys pointed out that immunity under the Eleventh Amendment was not available to such state-related entities and instrumentalities as counties and municipalities, associations and foundations affiliated with state universities, certain state employees, and others within their states (see app. III, tables 12, 30, and 35). Similarly, the state officials noted that the state’s business often was carried out through contractors and licensees and that these entities could be sued in federal court if they committed infringement. Some state officials also said Florida Prepaid did not present a problem because proper safeguards are in place to protect intellectual property owners even in those cases where the state may have infringed. For example, officials from the state institutions of higher education pointed to their procedures, as discussed above, for investigating any accusation made against the institutions. They said these procedures were intended to ensure that the institutions abide by the law, fulfill their contractual obligations, and take corrective actions—such as ceasing the infringing use, obtaining a license or other permission, or reaching some type of monetary settlement—whenever potentially infringing uses are identified. If the property owner was not satisfied with the state’s response, he or she could still (1) seek an injunction against an infringing state official in federal court or (2) attempt a lawsuit in state court. 
Some state officials also said that any inability to bring an infringement action in state court is the fault of the federal government, not the states, and should not be used as a reason for abrogating the states’ rights to Eleventh Amendment immunity from lawsuits in federal court. They said that, if the federal government wants to consider new legislation concerning Eleventh Amendment immunity, it may wish to consider revoking the federal judicial preemption law and allowing the state courts and legislatures to develop remedies of their own. Some members of the intellectual property community agree with the states that there may be no heightened risks of state infringement after Florida Prepaid. Their primary argument is that the number of past cases of state infringement has been so few. However, they also point to policy reasons. An article published in June 2000 by Peter S. Menell, Professor of Law at the University of California at Berkeley and Director of the Berkeley Center for Law and Technology, discussed some of the policy and practical reasons that state infringements may not increase. Professor Menell noted that the states were subject to social, bureaucratic, and economic constraints that would discourage them from infringing. Furthermore, Professor Menell said that property owners might be able to take certain actions on their own—such as establishing formal contractual relationships with state entities or choosing to limit access through trade secrecy or encryption. When asked why they need Eleventh Amendment immunity from intellectual property lawsuits in federal court if they do not infringe, some state officials said that immunity can act as a hedge against frivolous or meritless lawsuits.
Moreover, they said that, if the states had already investigated the complaints and taken the necessary action, there was no need to be drawn into expensive and time-consuming lawsuits with persons who did not understand the intellectual property laws or refused to believe the states had not infringed. Other members of the intellectual property community believe that the Florida Prepaid decision does create problems, pointing to what they say is the unfairness of the current situation and the significant risks that intellectual property owners face. They consider the situation to be unfair because states can own federally protected intellectual property and sue infringers in federal court but cannot be sued for infringement themselves. They believe the risks are significant because the state can infringe the intellectual property of others with impunity. As one witness testified at a July 2000 hearing before the House Subcommittee on Courts and Intellectual Property: “We view the present, post-Florida Prepaid situation as very inequitable. States and state institutions are active participants in the federal intellectual property system, with extensive patent and trademark holdings. Yet, while they enjoy all the rights of an intellectual property plaintiff, they are shielded from significant financial liability as intellectual property defendants.” At the same hearing, the Register of Copyrights noted that the states are among the most significant holders and users of copyrights. She referred to the current state of affairs as “unjust and unacceptable.” She also said that “It is only logical that in the current legal environment, without an alteration to the status quo, infringements by States are likely to increase.” Many of the intellectual property community representatives we contacted agreed with these views. 
While they acknowledged that there had been few infringement lawsuits against states in the past, they also believed that the small number of such lawsuits in the record before the Supreme Court did not accurately portray the actual number or significance of accusations that had been made against the states. In this regard, they noted that (1) the record before the Supreme Court was not a complete analysis of the lawsuits that had been filed against the states; (2) the record also did not consider matters dealt with out of court, which are believed to be more numerous than those resolved through lawsuits; (3) even if accusations of infringement are few in number, they can be quite significant to the intellectual property owners involved; and (4) infringement lawsuits may be few, but they are complicated and can be quite expensive to both plaintiffs and defendants. The intellectual property community representatives said that, in the past, the states considered themselves to be subject to infringement suits in federal court and had an incentive not to infringe the intellectual properties of others. They questioned whether the states would be as cautious now, knowing that they cannot be sued for damages. The representatives said that of particular concern were matters such as those the states might have resolved administratively in the past. If the state so chooses, it can refuse to do anything, with the only threat being that the property owner might pursue expensive and protracted litigation in federal court to obtain an injunction or in state court in the hope that the court would award damages under some as-yet-unproven state law theory. The intellectual property community also is concerned with the effect of the Florida Prepaid decision on international relations in the area of intellectual property. 
In his July 2000 testimony before the House Subcommittee on Courts and Intellectual Property, the Director of the USPTO noted that it would be difficult for the United States to promote the enforcement of intellectual property rights worldwide if states could not be sued in federal court for infringement. The Director said that “When we criticize another country for having financial penalties against patent, trademark, and copyright infringers that are too low, that country may point out that we have no financial penalties at all when the infringer is a state university, hospital, prison, or government office.” Some representatives of the intellectual property community believe that federal legislation is required to resolve the problems they say have been created by the Florida Prepaid decision. Generally, they would prefer legislation similar to the law abrogating Eleventh Amendment immunity in patent cases that was struck down by the decision. They anticipated, however, that any such legislation would have problems surviving Supreme Court review unless the Congress can create a record showing a pattern of infringement accusations against the states and an absence of state remedies. Members of the intellectual property community offered other legislative alternatives. One noted, for example, that state immunity could be abrogated through an amendment to the U.S. Constitution. However, he also believed that this was unlikely to happen because, even if the members of Congress could agree on such an amendment, the states would have no incentive to ratify it. Other members of the intellectual property community believed that federal legislation offering or requiring some type of waiver of immunity by the states might resolve the issue. Since states would not have an incentive to waive immunity on their own, federal law would have to provide the incentive. 
Some of the options presented were as follows:

- The waiver could be tied to the federal grant of intellectual property rights. Under this scenario, the state would have to agree to waive its right to claim Eleventh Amendment immunity if sued for infringement in order for the state to be granted or otherwise own federal patents, trademarks, or copyrights.
- The waiver could be tied to the right to sue in federal court. Under this scenario, the state would not have the right to sue a party for infringement of its own intellectual property in federal court unless the state had previously waived its Eleventh Amendment right not to be sued in federal court by others.
- The waiver could be tied to the receipt of federal funds. Under this scenario, a state would waive its right to claim Eleventh Amendment immunity if sued in federal court as a condition for receiving certain federal funds. One such conditional waiver, for example, might be under the Patent and Trademark Laws Amendments of 1980, as amended (commonly known as the Bayh-Dole Act), where certain federal contractors and grantees are allowed to retain ownership of and profit from inventions created through federally funded research projects. Another suggestion was made that would tie waivers in copyright suits to federal library grants.

In the July 2000 hearing before the House Subcommittee on Courts and Intellectual Property, the Director of the USPTO and the Register of Copyrights discussed potential legislation to require state waiver of immunity under the Eleventh Amendment in exchange for some federal grant of right or funding. Two other options discussed in the hearing were (1) giving the government the right to sue the infringer on behalf of the property owner and (2) providing statutory authority to sue an infringing state official. 
The legislation allowing the government to sue on behalf of the property owner would prevent the state from claiming immunity under the Eleventh Amendment, since the federal government is not a “person” within the meaning of the Amendment. Legislation setting out the right to obtain a federal injunction against an infringing state official was seen as adding credibility to the injunction’s being a viable alternative in federal court for a property owner seeking a remedy against a state. If the Congress decides that legislation is needed to allow states to be sued for intellectual property infringement, the Congress may also want to make clear that states are treated as being capable of committing infringement of federally protected intellectual property. The Florida Prepaid decision has left this unclear. As discussed above, the Congress amended the patent, copyright, and trademark laws in the early 1990s after some states began seeking Eleventh Amendment immunity from infringement lawsuits and the Supreme Court ruled in 1985 that an unequivocal expression of congressional intent was required to abrogate state immunity. In the clarification acts that followed, the Congress added (1) language that made it clear that states are among those that are capable of committing patent, trademark, and copyright infringement and (2) provisions that stated an explicit intent to eliminate states’ immunity from suit in federal court for such infringement. In Florida Prepaid, the Court held that the Patent and Plant Variety Protection Remedy Clarification Act could not be sustained. Because the act did not contain a saving clause, all clarifying provisions—including those expressing the Congress’ intent that states are subject to being infringers of federally protected intellectual property—may have been lost. 
Although the state officials and representatives of the intellectual property community did not raise this issue, allowing infringement lawsuits against states would seem to be of little value if the states are not capable of committing infringement. It is too early to determine what impact the Florida Prepaid decision will have on the federal intellectual property system. Relatively few accusations of infringement against states appear to have been made in the past, and there is no way to ascertain whether the states will be less diligent now that they know they cannot be sued for damages in federal court. At the same time, however, the overall incidence of infringement has little meaning to an intellectual property owner concerned that his or her individual property is at risk. Moreover, few proven alternatives or remedies appear to be available to a property owner when a state does commit infringement—particularly if patent and copyright infringement suits cannot be brought in state court—and any compensation for damages may fall short of what the property owner might have achieved previously. The intellectual property community, which includes states, is divided on what, if anything, needs to be done to resolve the issues raised by the Florida Prepaid decision. Generally, the states see no reason to do anything, since there has been no pattern of infringement in the past. Others in the intellectual property community disagree and would like the Congress to pass legislation similar to that in effect prior to the Florida Prepaid decision. Some have proposed requiring the states to waive their Eleventh Amendment immunity in exchange for rights received under the federal intellectual property system or to receive certain federal funds. If the Congress does consider legislation, it may want to clarify that states are subject to federal intellectual property law and, as such, are still capable of committing infringement. 
We provided the Copyright Office and the USPTO with a draft of this report for their review and comment. Both the Copyright Office and the USPTO agree that it is too early to determine the impact of the Florida Prepaid decision. The Copyright Office concurred with our findings that there were few examples of states being accused of intellectual property infringement, noting that until recently states “had good reason to believe they were subject to the full range of remedies if they infringed a copyright.” The Copyright Office also noted, however, that the states may no longer feel so constrained and that the “behavior of State employees with regard to the use of intellectual property is only just beginning to evolve.” In addition, the Copyright Office said that, while the states and their employees generally are law-abiding, it nevertheless was concerned that the legal remedies available after Florida Prepaid were insufficient to ensure that the states would respect the copyright laws. Thus, the Copyright Office believed that Congress should “consider other legislative responses, such as providing incentives to States to waive their immunity voluntarily by conditioning the receipt of a gratuity from the Federal Government on such waiver.” The USPTO commented that our report is accurate in stating that the intellectual property community is concerned over the decision in Florida Prepaid and what it sees as an inequitable situation. 
The USPTO said the inequity “skews our system of intellectual property protection, because the penalties in place to discourage infringement do not apply to state entities.” However, the USPTO said that our finding that “infringement accusations against states have been few” does not mean “a pattern of infringement does not exist.” The USPTO noted that (1) 58 lawsuits “seems like a substantial number” given that “state entities constitute only a tiny fraction of the total number of parties using intellectual property” and (2) many more accusations against states are handled through administrative processes and never reach court. The USPTO also expressed a concern that we based many of our conclusions on “anecdotal evidence” provided by state attorneys general and institutions of higher education that “may have an incentive to under-report accusations made against state entities.” In addition, the USPTO said that there was no division within the intellectual property community about what should and could be done to protect the rights of intellectual property owners except for a disagreement between the states, which it refers to as “a small subsection,” and the rest of the community. The USPTO said the report placed “disproportionate emphasis” on the views of state attorneys general and state institutions of higher education but gave “short shrift to responses from the intellectual property community.” The USPTO noted that it would be “more accurate to characterize the intellectual property community as strongly desiring a legislative solution to the perceived problem…but differing as to what statutory approach to take.” The USPTO also said that a legislative solution “seems especially appropriate given the absence of any viable alternative remedy against state infringement.” Regarding the USPTO’s comments about a pattern of infringement, we believe our characterization of the number of accusations identified as “few” is accurate. 
To put the 58 lawsuits in context, we show in the report that there were nearly 105,000 district court cases filed from fiscal year 1985 through fiscal year 2000 that involved protected property rights for patents, trademarks, and copyrights. We reach no conclusions as to whether these 58 lawsuits would or would not constitute a pattern of infringement. As to the USPTO’s point that many other accusations are handled administratively, we make this same point in our report and provide statistics from the states indicating that these are few in number also. The USPTO was concerned that we based many of our conclusions on “anecdotal evidence” provided by the states themselves. While it is true we obtained information from state attorneys general and state institutions of higher education through surveys, we note in the report that we used these as a “supplement” in identifying accusations of infringement. We also conducted an extensive analysis of the case law. Moreover, we sent surveys to each state bar association that had intellectual property sections and conducted site work in three states with extensive involvement in the intellectual property system. We also sought assistance from national associations representing intellectual property attorneys and attorneys general, as well as other attorneys and associations representing intellectual property owners. In addition, we sought and obtained input from both the USPTO and the Copyright Office. We do not offer any views on whether the positions taken by others are accurate. Regarding the USPTO’s comment that there is no division within the intellectual property community about what needs to be done to protect against state infringement, we disagree. The intellectual property community includes state officials, and we do not give a disproportionate emphasis to the views of state officials. 
Rather, we present a balanced discussion in our report by showing that (1) some state officials believe that nothing needs to be done, (2) others in the intellectual property community see potential problems, and (3) some in the intellectual property community believe federal legislation is needed. Again, we obtained views from all segments of the intellectual property community, of which the states are an integral part. Finally, we know of no statistics that would support the USPTO’s contention that the states comprise a “tiny fraction” of those who use intellectual property or “a small subsection of the community.” The USPTO and the Copyright Office do have some statistics on the states' ownership of intellectual property, and we include that information in appendix II of our report. The USPTO and Copyright Office comments are included in their entirety in appendix VI and appendix VII, respectively. We conducted our work from August 2000 through August 2001 in accordance with generally accepted government auditing standards. Appendix I contains the details of our scope and methodology. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the Chairman, Senate Committee on the Judiciary; the Chairman and Ranking Minority Member, Subcommittee on Courts, the Internet, and Intellectual Property, House Committee on the Judiciary; the Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the United States Patent and Trademark Office; and the Register of Copyrights. The report is also available on GAO’s home page at http://www.gao.gov. If you have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VIII. 
As requested, we conducted a review of state Eleventh Amendment immunity in intellectual property infringement actions, focusing on issues raised by the U.S. Supreme Court’s June 1999 decision in Florida Prepaid Postsecondary Education Expense Board v. College Savings Bank, 527 U.S. 627 (1999). Our objectives were to (1) determine the extent to which states have been accused of intellectual property infringement, (2) identify the alternatives or remedies available to protect intellectual property owners against state infringement after the Florida Prepaid ruling, and (3) obtain the views of the intellectual property community on what states should and could do, if anything, to protect the rights of intellectual property owners against infringement. To identify past infringement accusations against the states, we searched for lawsuits as well as matters dealt with out of court that had been active since January 1, 1985. The year 1985 was chosen as a starting point because this was the year the Supreme Court ruled that, to abrogate Eleventh Amendment immunity, the Congress must make its intentions unmistakably clear in the language of the statute. In identifying lawsuits, we selected those for which there appeared to be some underlying accusation of infringement or unauthorized use of intellectual property, including declaratory judgment actions. In the case of multiple actions (an infringement lawsuit, a declaratory judgment, a motion to dismiss, etc.) on the same underlying dispute, we considered all such actions as part of the same case except instances where (1) the state was both a plaintiff and defendant in separate actions filed in one or more jurisdictions and (2) separate cases were filed in both federal and state court. 
While we focused primarily on lawsuits where the state was a defendant, we also obtained data on those lawsuits where the state was a plaintiff as a means to determine the extent to which states had taken advantage of the laws protecting intellectual property owners against infringement. In identifying matters dealt with out of court, we included any accusation where the underlying issue was the potentially unauthorized use of intellectual property. While we included formal accusations, such as those made through cease-and-desist letters, we also included less formal accusations, such as those made orally. Because we had to obtain all of the information on matters dealt with out of court from the states themselves, we did not ask for identification of individual accusations but rather for the range of all such accusations since January 1985. We used three methods to obtain information on lawsuits and matters dealt with administratively. First, we analyzed the case law from each of the 50 state court systems and the federal court system, using commercially available legal databases. To do this, we searched for all cases in which an issue of infringement appeared to have been raised and one of the parties involved a state entity. We found that this method could not identify all accusations because published case law does not include (1) lawsuits that were dropped because they were abandoned or settled, (2) lawsuits that were still active, (3) lawsuits that had been decided by state trial courts, and (4) matters that had been dealt with administratively, without a lawsuit being filed. Moreover, in some cases it was difficult to determine whether a party to a lawsuit was actually a state entity eligible for Eleventh Amendment immunity or whether there was an accusation of infringement in the underlying case. We supplemented our work on identifying accusations of infringement by sending surveys to state attorneys general and state institutions of higher education. 
We chose attorneys general because, as the chief legal representatives of the states, they would be in the best position to provide information on state law and matters that affect state entities. We chose state institutions of higher education because they tended to be the state entities most likely to own and use intellectual property. We sent surveys to the 50 state attorneys general and received responses from 36, or 72 percent, of them. The 14 attorneys general who did not respond to our surveys were from Alabama, California, Colorado, Idaho, Illinois, Missouri, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Utah, Virginia, and West Virginia. In the case of California, we did obtain information during a site visit that addressed some of the issues covered by the survey, even though the attorney general did not return the survey. In identifying state institutions of higher education for participation in our survey, we concentrated on those that actually owned intellectual property. In this regard, the U.S. Patent and Trademark Office (USPTO) provided us with a listing of U.S. colleges, universities, and associations of colleges and universities that had utility patents in force as of December 31, 1999. “In force” patents are those for which the patent term has not expired and required maintenance fees have been paid. For purposes of our report, the term “state institutions of higher education” includes state colleges and universities and associations affiliated with such state colleges and universities. We reviewed the USPTO listing of 370 institutions and associations and, based on information available to us, eliminated all duplicates, private institutions, consortia, and publicly supported institutions where representatives told us that because of the way they were funded or their relationship to the state they did not qualify for Eleventh Amendment immunity. From the resulting universe of 150 institutions and associations, we mailed surveys to 140. 
We did not mail surveys to 10 institutions and associations because we could not determine whether or not they were publicly supported, and we did not have sufficient information to contact them. We received 99 completed surveys. These 99 completed surveys represented a total of 113 of the 140 institutions and associations since some of these entities pooled their responses. Because those survey responses covering more than one institution and/or association provided summary information for all institutions and/or associations being reported on, the results in this report are based on the 99 survey responses we received. We also received some information in the survey responses for institutions that were not in our universe. Measured by the institutions and associations represented, our response rate was 81 percent of those that received our survey in the mail (113 of 140) and 75 percent of the universe (113 of 150). We also gathered information on accusations of intellectual property infringement against the states during site visits to three states—California, Florida, and Texas. We judgmentally selected these states because they are among the largest owners and users of intellectual property, have significant case activity and legal precedents regarding intellectual property infringement or Eleventh Amendment immunity, and were known to have varying state laws on waiver of state governmental immunity and access to state courts. We interviewed assistant attorneys general and intellectual property attorneys, including members of the intellectual property sections of the state bar associations in Texas and California. The Florida Bar does not have a separate intellectual property section. In addition, we interviewed general counsels at the University of Texas, Texas A&M University, the University of Houston, the University of Florida, Florida State University, the University of South Florida, and the University of California. 
For each of the lawsuits that were identified through surveys and site visits, we attempted to obtain the necessary citations so that we could review the cases independently. To obtain a better perspective on the relationship of state intellectual property infringement lawsuits to all infringement lawsuits, we obtained statistical information from the Administrative Office of the U.S. Courts and reviewed guidelines and interviewed cognizant officials from the federal court system to determine how such cases are reported. To determine what alternatives and remedies respondents believed were available after the Florida Prepaid decision, we included questions to this effect on the surveys sent to the attorneys general and, to a lesser extent, to state institutions of higher education. We also sent separate surveys to the intellectual property law sections of the 37 state bar associations that had such sections. We chose intellectual property sections of state bar associations for surveys because we believed the attorneys who were members of these sections would be most knowledgeable in intellectual property law in their states and would be in a position to discuss the immunity issue as it affects potential plaintiffs in infringement suits against states. Of the 37 bar associations that received our surveys, 21 completed them in whole or in part and returned them to us. To obtain further information on alternatives and remedies, as well as on what the intellectual property community believes should and could be done to protect intellectual property owners, we relied on site visits, our review of published documentation, and discussions with other individuals and groups in the intellectual property community. For example, we met with legal scholars from state universities who had studied the Eleventh Amendment immunity and intellectual property issue and, in some cases, had testified before the Congress and published law review articles. 
We also discussed immunity issues with USPTO and Copyright Office officials, associations that focus on intellectual property issues (including the American Intellectual Property Law Association, the American Bar Association, and the International Trademark Association), intellectual property attorneys, and others (such as the National Association of Attorneys General and the Software & Information Industry Association). In addition, we reviewed testimony and related documentation on the issue of Eleventh Amendment immunity and intellectual property from a July 27, 2000, hearing before the Subcommittee on Courts and Intellectual Property, House Committee on the Judiciary; a special panel assembled by the USPTO in March 2000; and a workshop held by the National Academies of Science in April 2001. We also reviewed other documentation such as the briefs filed and decisions rendered in Florida Prepaid and related cases. We reviewed S.1835, a bill introduced on October 29, 1999, by Senator Patrick J. Leahy—then the Ranking Minority Member and now the Chairman of the Senate Committee on the Judiciary—proposing legislation that would have required states acquiring a patent, trademark, or copyright to waive their rights to immunity in federal court in an intellectual property infringement suit during the terms of these properties. This bill was not acted upon and expired at the end of the 106th Congress. We did not attempt to determine the effect this proposal could have had on the Eleventh Amendment immunity and intellectual property issue, as this was beyond the scope of our review. We did not independently verify the information contained in the survey responses, although we did check the citations provided to ensure that the cases or other legal references met the criteria we had established. We also analyzed and edited the surveys for internal consistency. 
We drew no conclusions about why some of our surveys were not returned, although we did make followup efforts to ensure the surveys were returned and the provided information was complete and to clarify certain information. To provide perspective on the states’ participation in the intellectual property system, we developed partial statistics on state ownership of federally issued or registered patents, trademarks, and copyrights. We were unable to develop a complete statistical database because (1) USPTO and the Copyright Office do not maintain their databases in such a way that these data can be readily extracted and (2) as discussed elsewhere in this report, it is not always possible to determine a state entity’s affiliation with a state for Eleventh Amendment purposes. The data that we did accumulate were developed as follows:

- The data on patents were developed by first having the USPTO provide a listing of U.S. colleges, universities, and associations of colleges and universities that had utility patents in force as of December 31, 1999. We selected from this list those entities that were state-supported, based on our analysis of the institutions’ web pages, other Internet sites on higher education, and the responses to our surveys. We included in our data only those patents issued.
- The data on trademarks resulted from our search of the USPTO’s trademark database to identify trademarks owned by those institutions identified as state-supported institutions of higher education. This process was similar to the process used to identify patents. We included statistics on trademarks registered as well as those pending because, unlike patents, such data are provided in USPTO’s publicly available databases. The statistics provided were as of February 2001.
- The Copyright Office provided the statistics on copyrights, using data taken from a detailed analysis of its own databases for use in congressional hearings. 
These statistics were provided by state, rather than by individual institution; thus, we could not compare the included institutions for each state with those identified in our patent and trademark analysis. We also did not independently verify the provided data. According to Copyright Office officials, the statistics do not include “serials” (newspapers, magazines, etc.). Only those copyrights registered with the Copyright Office between January 1, 1978, and December 31, 1999, are included in the statistics. We conducted our work from August 2000 through August 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Carolyn Boyce, Bert Japikse, Gary Malavenda, Jonathan S. McMurray, Deborah Ortega, and Paul Rhodes made key contributions to this report.

Intellectual property--which includes federally granted patents, trademarks, and copyrights--is often owned or used by state governmental entities, such as public institutions of higher education. Until recently, state entities that made unauthorized use of, or "infringed," the intellectual property of others were subject to lawsuits in federal court. In 1999, however, the U.S. Supreme Court held that states were not subject to such suits, striking down a federal law that would have taken away a state's right to claim immunity under the Eleventh Amendment of the U.S. Constitution when sued in federal court for patent infringement. Some intellectual property owners are concerned that they no longer have adequate remedies if a state commits infringement. Although the precise number is difficult to determine, few accusations of intellectual property infringement appear to have been made against the states through either lawsuits or matters handled out of court. GAO identified 58 lawsuits that had been active since January 1985 in either a state or federal court in which a state was a defendant in an action involving the unauthorized use of intellectual property. 
Intellectual property owners appear to have few proven alternatives or remedies available against state infringement if they cannot sue the states for damages in federal court. States are not likely to waive their immunity voluntarily, and, in some cases, their own laws may prohibit them from doing so. The intellectual property community is divided on what, if anything, states should and could do to protect the rights of intellectual property owners against state infringement.
Medicare covers about 40 million elderly (over 65 years old) and disabled beneficiaries. Individuals who are eligible for Medicare automatically receive Hospital Insurance, known as part A, which helps pay for inpatient hospital, skilled nursing facility, hospice, and certain home health services. A beneficiary generally pays no premium for this coverage unless the beneficiary or spouse has worked fewer than 40 quarters in his or her lifetime, but the beneficiary is liable for required deductibles, coinsurance, and copayment amounts. Medicare-eligible beneficiaries may elect to purchase Supplementary Medical Insurance, known as part B, which helps pay for certain physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for part B coverage, which was $58.70 per month in 2003. Beneficiaries are also responsible for part B deductibles, coinsurance, and copayments. Table 1 summarizes the benefits covered and cost-sharing requirements for Medicare part A and part B. Many low-income Medicare beneficiaries who cannot afford to pay Medicare’s cost-sharing requirements receive assistance from Medicaid. For Medicare beneficiaries qualifying for full Medicaid benefits, state Medicaid programs pay for Medicare’s part A (if applicable) and part B cost-sharing requirements up to the Medicaid payment rate as well as for services that are not generally covered by Medicare, such as prescription drugs. To qualify for full Medicaid benefits, beneficiaries must meet their state’s eligibility criteria, which include income and asset requirements that vary by state. In most states, beneficiaries that qualify for Supplemental Security Income (SSI) automatically qualify for full Medicaid benefits. 
Other beneficiaries may qualify through one of several optional eligibility categories targeted to low-income beneficiaries, individuals with high medical costs, or those receiving care at home or in the community who otherwise would have been institutionalized. To assist low-income Medicare beneficiaries with their premium and cost-sharing obligations, Congress established several Medicare savings programs—the QMB, SLMB, QI, and QDWI programs. Under these programs, state Medicaid programs pay enrolled beneficiaries’ Medicare premiums. As a result, for QMB, SLMB, and QI beneficiaries, Medicare part B premiums would not be deducted from their monthly SSA checks. The QMB program also pays Medicare deductibles and other cost-sharing requirements, thereby saving beneficiaries from having to make such payments. Beneficiaries eligible for Medicare savings programs can apply for and be determined to be eligible through their state Medicaid programs. Thirty-three states have agreements with SSA whereby SSA makes eligibility determinations for a state if beneficiaries are deemed eligible by SSA to receive SSI benefits. In the other 18 states, even if an individual is eligible to receive SSI benefits, he or she must file an application with the state or local Medicaid agency to be eligible. Beneficiaries qualifying for Medicare savings programs receive different levels of assistance depending on their income. See table 2 for eligibility criteria and benefits for each program. In 1998, Congress passed legislation specifically providing funding for SSA to evaluate ways to promote Medicare savings programs. In response, SSA conducted demonstration projects to explore the effects of using various approaches to increase participation in Medicare savings programs. In one of these demonstrations conducted in 1999 and 2000, SSA tested six models designed to increase awareness and reduce barriers to enrollment.
The models were implemented at 20 sites in 10 states, as well as the entire state of Massachusetts. The models differed in the extent to which SSA was involved in outreach efforts beyond mailing the letters. For example, in the “application model,” SSA staff screened beneficiaries if they appeared to be eligible, completed applications, collected supporting documents, and forwarded the completed application form and supporting evidence to the state Medicaid agency for an eligibility determination. In the “peer assistance model,” Medicare beneficiaries contacted an AARP toll-free number and were screened for program eligibility by an AARP volunteer. Across all six models, SSA sent more than 700,000 letters informing low-income Medicare beneficiaries that they may be eligible for benefits under the Medicare savings programs. The enrollment rate for each model varied—ranging from an additional 7 enrollees per 1,000 letters to 26 enrollees per 1,000 letters—with the application model recording the highest enrollment rate and peer assistance recording the lowest. In 2000, Congress amended the Social Security Act, through BIPA, requiring the Commissioner of Social Security to notify eligible Medicare beneficiaries about assistance available from state Medicaid programs to help pay Medicare premiums and cost sharing. BIPA also required SSA to furnish each state Medicaid program with the names and addresses of individuals residing in the state that SSA determines may be eligible for the Medicare savings programs. SSA is required to update such information at least annually. In addition to SSA’s outreach efforts, CMS and individual states have engaged in efforts to increase enrollment in Medicare savings programs. Since fiscal year 2002, CMS has included increasing awareness of the Medicare savings programs as one of its Government Performance and Results Act (GPRA) goals. 
Specifically, CMS’s goal in fiscal year 2002 was to develop a baseline to measure awareness of Medicare savings programs and to set future targets for increasing awareness. CMS estimated that 11 percent of beneficiaries were aware of Medicare savings programs in 2002 and the goal was to increase this to 13 percent for fiscal year 2003. As part of its efforts to increase awareness, CMS has coordinated with states, SSA, and other organizations regarding various outreach efforts; provided information about Medicare savings programs in various CMS publications; and developed a variety of educational materials for targeted populations, including minorities. CMS efforts in increasing enrollment in earlier years included setting state-specific enrollment targets and measuring progress toward these enrollment targets; developing and disseminating training and outreach materials to the states, and sponsoring national and regional training workshops for a variety of stakeholders, including other federal and state agencies, health care providers, and community organizations; designing a model application for Medicare savings programs that states can consider adopting; and providing grant funding to state Medicaid agencies, state health insurance assistance programs, and national advocacy groups to test and promote innovative approaches to outreach. In 2001, CMS also contracted for a survey of states to identify activities undertaken to increase program enrollment and streamline administration of these programs. Some of the most common state efforts included allowing application by mail (49 states), eliminating in-person interviews (46 states), developing a shorter application form (43 states), and conducting outreach presentations at health fairs (34 states). 
Other state efforts identified by the survey included increasing awareness of the programs through outreach efforts such as direct mailings and other printed material, and public service announcements on radio, television, and in newspapers; providing training for employees and education for beneficiaries; developing partnerships with other entities, such as State Health Insurance Assistance programs and local agencies on aging, to enhance outreach efforts and promote issues and solutions involving the Medicare savings programs; eliminating potential barriers to enrollment such as streamlining the enrollment and renewal process and easing financial eligibility rules; supplementing program benefits with other benefits, such as prescription drug discount programs; and providing information targeting underserved populations, including minorities. In response to BIPA, SSA is conducting an annual outreach effort to help increase enrollment in Medicare savings programs. This outreach consists of a nationwide mailing campaign and data sharing with the states. SSA selected low-income Medicare beneficiaries to be sent an outreach letter if their incomes were below the income eligibility ceilings for the Medicare savings programs. From May through November 2002, SSA sent a total of 16.4 million outreach letters to persons potentially eligible for QMB, SLMB, and QI. Additionally, in late 2002, SSA sent about 53,000 letters to those potentially eligible for benefits under the QDWI program. Starting in 2003, SSA has targeted annual outreach letters to individuals newly eligible for Medicare as well as a subset of those who were sent outreach letters in 2002 but are still not enrolled. From June through October 2003, SSA sent outreach letters to 4.3 million of these beneficiaries. 
SSA intends to continue its outreach mailing annually to potentially eligible beneficiaries, including recipients who did not enroll after receiving earlier letters, as well as those whose income has declined, making them eligible for the program. In addition to sending outreach letters, in 2002 and 2003 SSA provided states with a data file that listed residents who were potentially eligible for benefits under the Medicare savings programs. SSA plans to continue sharing these data once a year with states. The data provided by SSA could be used by the states to coordinate their outreach with SSA’s or supplement SSA’s outreach efforts. For the 2002 mailing, SSA sent letters three times each week from May through November. Each time letters were mailed, SSA sent them to approximately 207,000 Medicare beneficiaries randomly selected from the 16.4 million beneficiaries who were identified as potentially eligible for QMB, SLMB, and QI. Letters were targeted to beneficiaries whose incomes from Social Security and certain other federal sources were less than 135 percent of the federal poverty level (FPL). Specifically, those selected to be sent the outreach letters were intended to meet the following three criteria: individuals and couples entitled to Medicare, or within 2 months of Medicare entitlement eligibility; individuals who were not currently receiving Medicare savings program benefits under a state Medicaid program or not already entitled to full Medicaid based on SSI participation; and individuals and couples whose combined Social Security income and Department of Veterans Affairs and federal civil service pensions fell below the program’s income eligibility ceiling. The letters provided information in English or Spanish about the Medicare savings programs, including state-specific asset guidelines and a state contact number. (See app. II for a sample 2002 outreach letter.) 
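The three selection criteria above amount to a simple screening filter over SSA's beneficiary records. The sketch below illustrates that logic; the field names, data structure, and the flat income ceiling are illustrative assumptions, not SSA's actual data model or processing code:

```python
# Hypothetical sketch of SSA's 2002 letter-selection criteria.
# Field names and the simplified income ceiling are illustrative
# assumptions; SSA's actual systems and thresholds may differ.
from dataclasses import dataclass


@dataclass
class Beneficiary:
    months_until_medicare: int   # 0 if already entitled to Medicare
    enrolled_in_msp: bool        # already in a Medicare savings program
    full_medicaid_via_ssi: bool  # already entitled to full Medicaid via SSI
    countable_income: float      # Social Security + VA + civil service pensions
    income_ceiling: float        # program ceiling (about 135% of FPL)


def select_for_letter(b: Beneficiary) -> bool:
    """Return True only if all three mailing criteria are met."""
    entitled_or_soon = b.months_until_medicare <= 2
    not_already_covered = not (b.enrolled_in_msp or b.full_medicaid_via_ssi)
    income_qualifies = b.countable_income < b.income_ceiling
    return entitled_or_soon and not_already_covered and income_qualifies
```

For example, a beneficiary already receiving Medicare savings program benefits would be screened out regardless of income, mirroring the second criterion in the text.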
At the end of November 2002, SSA sent a separate mailing to about 53,000 disabled working adults who were potentially eligible for benefits under the QDWI program. Medicare beneficiaries who had sources of income other than Social Security—such as income from employment and public and private pensions—and whose incomes were above the programs’ eligibility thresholds were selected nonetheless to be sent the SSA outreach letter because SSA’s data systems do not collect information on these income sources. In addition, SSA’s records do not contain information about beneficiaries’ private assets, making it impossible for SSA to identify whether letter recipients had assets within their states’ Medicare savings programs’ eligibility limits—typically $4,000 for an individual and $6,000 for couples. In 2002, the Medicare Rights Center, a national health advocacy group for older adults and people with disabilities, sought a federal court order requiring SSA to resend 1.4 million letters to potentially eligible beneficiaries in Connecticut and New York to correct erroneous information on the asset limit for the QI program. The New York and Connecticut letters had incorrectly informed potential beneficiaries that only individuals with assets of less than $4,000 were eligible for the QI program, even though Connecticut and New York abolished the asset requirement for QI eligibility in 2001 and 2002, respectively. SSA agreed to resend the letters and the parties settled the case before trial. In addition to sending letters to potentially eligible low-income Medicare beneficiaries, in 2002 SSA provided all but six states with an electronic data file containing the names of all beneficiaries to whom it had sent letters in that state. The data file contained information that could assist states with outreach efforts, such as the name, address, Social Security number, date of birth, spouse’s name, and the basis for Medicare entitlement of each letter recipient. 
SSA is required to provide updated data to the states each year. For the June through October 2003 mailing, SSA sent a second round of letters to about 4.3 million potentially eligible low-income Medicare beneficiaries nationwide who, its records indicated, might have met the QMB, SLMB, and QI income eligibility criteria and were not currently enrolled in Medicare savings programs. This mailing included beneficiaries who were newly eligible since the 2002 mailing, current Medicare beneficiaries who newly met the income criteria, and about one-fifth of the beneficiaries notified in 2002 who still met the mailing criteria but were not enrolled in a Medicare savings program. At the time we conducted our work, enrollment data for beneficiaries who were sent the letter in 2003 were not available. In contrast to the 2002 letter that provided state-specific eligibility criteria and a state-specific telephone number, the 2003 letter did not contain customized state information, but provided more general national information. The letter suggested that beneficiaries who may be eligible check the government list in their local telephone books for their local Medicaid contact or call the general 1-800-Medicare number that refers callers to state help lines, such as state or local medical assistance offices, social services, or welfare offices. SSA gave several reasons for not including state-specific information in the 2003 letter. One official indicated that there was additional cost to SSA to develop state-specific letters and therefore the agency did not tailor the letters for each state. CMS officials reported that a few states did not want to provide state-level contact numbers because eligibility and other Medicare savings program administrative matters were actually conducted at the county level.
Furthermore, in some cases, the telephone numbers states initially provided were changed shortly before the 2002 mailings began, creating additional need for SSA to coordinate with states in finalizing the letters. However, some state officials we interviewed expressed concern about the lack of state-specific information for the 2003 mailing. Because most states had established mechanisms for responding to these inquiries for the larger 2002 mailing, they believed that omitting state-specific criteria or contact information could make the letter less effective, since it could be more difficult for beneficiaries to obtain direct assistance or applications for eligibility determinations. We estimate that SSA’s mailing from May through November 2002 to 16.4 million potentially eligible beneficiaries contributed to more than 74,000 additional beneficiaries enrolling in Medicare savings programs. Further, in the year following SSA’s mailing, nationwide enrollment in Medicare savings programs increased 2.4 to 2.9 percentage points over that in the 3 previous years. Certain demographic groups also had larger additional increases in enrollment following the 2002 SSA mailing. For example, beneficiaries less than 65 years old, persons with disabilities, racial and ethnic minorities, and residents in southern states experienced larger additional increases in enrollment. On the basis of our analysis of SSA’s Master Beneficiary Record (MBR), we estimate that, of the 16.4 million SSA letter recipients in 2002, about 74,000 more beneficiaries (0.5 percent of letter recipients) enrolled in Medicare savings programs than would likely have enrolled without the mailing.
To estimate this increased enrollment, we examined two cohorts of letter recipients—a cohort of 1.3 million beneficiaries who were sent the letters during the first six mailings in May 2002 and a baseline cohort of 1.3 million beneficiaries who were sent the letters during the last six mailings through November 2002. Because SSA sent the mailing to beneficiaries in a random order nationwide from May through November 2002, the only difference between the cohorts is the time at which the letters were sent to them. As a result, other factors that could influence enrollment patterns, such as demographic differences or other outreach efforts by CMS and the states, should affect the May and November cohorts similarly. We used the November 2002 cohort as a baseline to examine how the May 2002 cohort’s enrollment in Medicare savings programs was affected following SSA’s mailing. As shown in figure 1, by August 2002—3 months after the initial letters were sent in May 2002—the Medicare savings program enrollment for the May cohort began to increase faster than that of the November cohort, which was yet to have the SSA letter sent to them. While the cohorts were sent the SSA letters in May or November 2002, SSA officials reported that it typically takes about 3 months before enrollment is reported in the MBR. As of December 2002, more than 5,800 additional beneficiaries in the cohort of 1.3 million beneficiaries who were sent the letter in May had enrolled in Medicare savings programs compared with the November cohort, whose enrollment was not yet affected by the mailing. (See table 3.) This additional enrollment in the May cohort represents 0.5 percent of the letter recipients. Projecting the experience of the May cohort to the universe of the 16.4 million letter recipients results in an estimate of over 74,000 additional beneficiaries enrolling in Medicare savings programs as a result of the 2002 SSA mailing. 
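The projection above is straightforward arithmetic, and can be reproduced from the rounded figures reported in the text (a minimal sketch; using the rounded inputs gives a figure slightly below the report's estimate of over 74,000, which was computed from the exact data):

```python
# Reproduce the report's projection from the May-cohort experience
# to the full mailing universe, using the rounded figures in the text.
cohort_size = 1_300_000       # beneficiaries in the May 2002 cohort
additional_enrollees = 5_800  # extra enrollments vs. the November baseline
universe = 16_400_000         # total 2002 letter recipients

rate = additional_enrollees / cohort_size  # ~0.45%, reported as 0.5 percent
projected = rate * universe                # roughly 73,000 with rounded inputs

print(f"additional enrollment rate: {rate:.2%}")
print(f"projected additional enrollees: {projected:,.0f}")
```

The small gap between this back-of-the-envelope figure and the reported 74,000 reflects rounding in the published cohort counts, not a different method.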
Nationwide, CMS data showed that Medicare savings programs experienced an overall net increase in enrollment of 5.9 percent (341,069 individuals) from May 2002—the start of SSA’s mailing—to May 2003. This 5.9 percent increase was nearly double the 3.0 to 3.5 percent increases in the 3 years before SSA’s nationwide mailings. (See table 4.) These data suggest that SSA’s mailing helped to increase enrollment at a greater annual rate than in earlier years. Across the United States, letter recipients residing in the southern states had a 0.6 percent additional increase in enrollment following SSA’s mailing. This was more than residents in the Northeast, Midwest, and West, where the additional increase in enrollment was 0.4 percent. Thirty-five states had an additional increase in enrollment following the SSA mailing compared to the increase that would likely have occurred without the letter. Of the thirty-five states, the largest additional increase in enrollment following the SSA mailing occurred in Alabama (2.9 percent), followed by Delaware (2.0 percent) and Mississippi (1.3 percent). While data from 13 other states showed an increase in enrollment following the SSA mailing, these increases were not statistically significant. Another three states showed a decrease in enrollment following the SSA mailing, but these changes also were not statistically significant. Appendix III provides the additional percentage change in enrollment following the 2002 SSA mailing for each state. Certain demographic groups also had higher additional increases in enrollment rates than the additional increase among all letter recipients. In comparison to the 0.5 percent additional increase in enrollment among all letter recipients, beneficiaries less than 65 years old and beneficiaries of any age who qualified for Medicare as a result of a disability each had a 0.8 percent additional increase in enrollment following SSA’s outreach.
Also, minority beneficiaries, who, based on SSA’s data categories, include blacks or individuals of African origin, Asians and Pacific Islanders, and North American Indians or Eskimos, had a 0.7 percent additional increase in enrollment. Appendix IV provides data for all demographic groups that we examined. The percentage of additional letter recipients newly enrolling in Medicare savings programs following SSA’s mailings varied significantly among the six states we reviewed. Among these six states, enrollment increases ranged from 0.3 to 2.9 percent. Further, several states we reviewed reported that calls to their telephone hot lines and applications mailed or received increased sharply during the period of the SSA outreach. In addition, some states supplemented SSA efforts with outreach efforts of their own, while other states were aware of or assisted outreach efforts by private or community groups. Among the states we reviewed, SSA’s outreach had varying effects on the percentage of letter recipients enrolling. Alabama, with 2.9 percent additional letter recipients enrolled compared to the percentage that likely would have enrolled without the SSA letter, had the largest additional increase in enrollment following the SSA mailing. This contrasts with the national average of 0.5 percent. For the states we reviewed, SSA’s outreach had the least impact on Medicare savings program enrollment in California, Washington, and New York, each with a 0.3 percent increase in additional enrollment. (See table 5.) The varying effects on enrollment by state can be attributed to several factors, including the share of eligible beneficiaries already enrolled in Medicare savings programs prior to the outreach, a state’s ability to handle increased phone calls and applications, and a state’s income and asset limits. For example, a smaller share of low-income elderly beneficiaries in Alabama was enrolled in QMB as of the year prior to the SSA mailing than the national average.
Specifically, the number of QMB enrollees in Alabama in 2001 was about half the number of Alabama seniors reported by the Census Bureau to have incomes below the limit for the QMB program. In contrast, about three-quarters of the seniors nationwide who reported income below the QMB limit were enrolled. As a result, a larger number of letter recipients in Alabama may have been able to meet the QMB and other Medicare savings program eligibility criteria, whereas other states may have already enrolled a larger share of these beneficiaries. Further, each of the states we reviewed established or used an existing state-specific telephone number that was listed in the SSA letter to receive calls. After the SSA mailing started, however, California’s phone number was discontinued and calls were redirected to CMS’s nationwide 1-800-Medicare number. California’s lower enrollment could also result from its eligibility requirements for SSI. For example, in a prior demonstration, SSA’s mailing in 1999 and 2000 resulted in lower enrollment in California than in other demonstration sites, in part because the state offered a generous state supplement to SSI. Therefore, there were potentially not as many people eligible for the Medicare savings programs. In addition, other state differences, such as different state asset eligibility requirements and application requirements as well as state efforts to support the SSA outreach, may have contributed to different effects among states. States we reviewed often reported that calls to their hot lines and applications for Medicare savings programs increased significantly during the period of the 2002 SSA mailing. Four states provided data on the monthly trends in the number of calls either related to Medicare and Medicaid in general or the Medicare savings program specifically that showed increases concurrent with the 2002 SSA mailing.
Three states were also able to provide data on changes in the number of applications sent to interested beneficiaries or received from beneficiaries. (See table 6.) While officials in several states indicated that not all of the increases noted could be attributed directly to the SSA mailing, the data provided by the states suggest that beneficiaries’ interest in Medicare savings programs increased during the mailing period. For example, Alabama experienced a 19 percent increase in monthly calls to its state hot line related to any Medicare and Medicaid issue after the SSA mailings began; this was followed by a 25 percent decrease after the mailings ended. Alabama also experienced a 158 percent surge in applications received per month during the SSA mailing and then a decrease of 57 percent afterwards. State officials reported that Washington tracked calls and applications specific to the SSA mailing, and these data showed 85 percent decreases in both monthly call volume and applications mailed out to beneficiaries after the mailings ended; Washington also reported a 72 percent monthly decrease in applications received after the 2002 mailings ended. Concurrent with SSA’s mailing, each of the states we reviewed reported that the state or other stakeholders conducted additional outreach. For example, the Louisiana Department of Health and Hospitals and the Pennsylvania Health Law Project, a coalition advocating for low-income individuals and the disabled, each received 3-year grants from the Robert Wood Johnson Foundation in 2002 to conduct outreach to low-income Medicare beneficiaries in these states. A state official also reported that in 2002 the New York Department of Health developed and distributed 100,000 copies of a brochure called “How To Protect Your Health and Money,” which included information about the Medicare savings programs, and conducted a “Senior Day” at 16 sites in New York City and several other districts as well as presentations at local fairs. 
Other states reported coordinating with community or state organizations as well as private health plans participating in Medicare, such as health maintenance organizations participating in the Medicare+Choice program. Some private health plans conducted outreach to increase Medicare savings program enrollment since CMS pays these plans a higher rate for these enrollees. Several state officials also said that their states work with other groups, such as the local departments of aging or senior services and local businesses and community organizations, to assist with outreach efforts to potentially eligible beneficiaries. None of the states we reviewed reported having assessed the effectiveness of their outreach efforts. Of the six states we reviewed, only Louisiana and Pennsylvania officials reported that they used the data file listing names and addresses of potentially eligible beneficiaries provided by SSA in 2002 to assist with state outreach or enrollment efforts. For example, after receiving the SSA data file, seven parishes in Louisiana used it to obtain a list of potentially eligible beneficiaries and sent an application with a letter and return envelope to these beneficiaries. In 2003, about 20,450 applications were mailed to potential beneficiaries. Pennsylvania officials used the file to cross-check against the state’s own data system to assess the number of applications authorized, rejected, or denied as a result of the SSA mailing. We provided a draft of this report to SSA, CMS, and state Medicaid agencies in Alabama, California, Louisiana, New York, Pennsylvania, and Washington. In written comments, SSA generally concurred with our findings and provided technical comments that we incorporated as appropriate. SSA also noted that improvements in state enrollment processes could further increase enrollment. SSA’s comments are reprinted in appendix V. In a written response, CMS stated it did not have any specific comments on the report.
However, CMS provided technical comments that we incorporated as appropriate. While we did not examine the effects of SSA’s 2003 mailing, Louisiana Medicaid officials indicated that, in comparison to the 2002 SSA mailing, there was little increase in call volume following SSA’s 2003 mailing, and that they believe that this was because a state-specific telephone number was not included in the 2003 outreach letter. New York Medicaid officials stated that they found an increase in Medicare savings program enrollment of over 6 percent from December 2002 to December 2003. However, in addition to being a different timeframe from what we examined, we do not believe that all of this increase can be attributed to the SSA mailing. Based on our analysis of SSA’s MBR data, we report a 0.3 percent increase in enrollment in New York specifically attributable to the 2002 SSA outreach mailing. We found the net increase in enrollment from May 2002 to May 2003 (following SSA’s 2002 mailing) to be 5.9 percent nationwide, similar to the net increase in enrollment that New York reported from December 2002 to December 2003. Louisiana and Pennsylvania Medicaid officials also provided technical comments that we incorporated as appropriate. Alabama, California, and Washington Medicaid officials reviewed the draft and stated that the report accurately reflected information relevant to their respective states. We are sending copies of this report to the Commissioner of SSA, the Administrator of CMS, and other interested parties. We will also provide copies to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-7118 or John Dicken at (202) 512-7043 if you have any additional questions. N. Rotimi Adebonojo and Rashmi Agarwal were major contributors to this report. 
To determine what outreach the Social Security Administration (SSA) conducted in response to the statutory requirement, we obtained and reviewed copies of SSA documents, including sample 2002 and 2003 outreach letters and data on the number of letters sent to eligible Medicare beneficiaries in each state, as well as reports prepared by the Centers for Medicare & Medicaid Services (CMS) related to the Medicare savings program. In addition, we interviewed officials from the SSA and CMS. To determine how enrollment changed following SSA’s outreach, we analyzed records from SSA’s Master Beneficiary Record (MBR)—a database that contains the administrative records of Social Security beneficiaries, including payments for Medicare premiums—and CMS’s national enrollment data for the Medicare savings programs. The MBR data contain demographic information as well as information on the monthly deductions made from beneficiaries’ Social Security checks to cover Medicare part B premiums. We obtained MBR data on beneficiaries who were sent the outreach letters in the first six mailings in May and the last six mailings through November 2002, representing 2.6 million of the 16.4 million Social Security beneficiaries who were sent letters from SSA. To determine which letter recipients enrolled in the Medicare savings programs following SSA’s 2002 mailing, we identified letter recipients who met the following criteria: those whose date of eligibility for Medicare savings programs began January 2002 or afterwards; those for whom a third-party payer, specifically a state, made payments on their behalf to cover Medicare part B premiums; and those who no longer had the premium deduction made from their Social Security checks to cover Medicare part B premiums at any point from June 2002 through December 2002. 
In order to estimate the impact of the SSA outreach mailing on additional enrollment in Medicare savings programs, we analyzed monthly enrollment from June 2002 to December 2002 for two cohorts of letter recipients to identify letter recipients who enrolled in Medicare savings programs following the initiation of the SSA mailing in May 2002. Because the mailings were sent to beneficiaries in a random order, the only notable difference between the recipients in the two cohorts would be the timing of when the SSA letters were sent to them. SSA officials noted that it typically takes about 3 months until enrollment is reported on the MBR. Therefore, since the mailings began in May 2002, the first effects of the mailing would not have been apparent until after June 2002. We analyzed the MBR data provided by SSA to determine specifically what month and year a letter recipient enrolled in Medicare savings programs. Using the enrollment by the November cohort as a baseline because these individuals met the same selection criteria as those in the May cohort, we estimated the net effect of the SSA mailing by comparing the difference in cumulative monthly enrollment between the May and November cohorts in December 2002—this difference represented the additional enrollment we attributed to the SSA mailing. We made the comparison in December 2002 because after this date the enrollment of the baseline group began increasing at a rate faster than the May cohort, indicating that this was the point when the largest cumulative difference in enrollment between the two cohorts occurred before the effects of the mailing started becoming evident for the November cohort. Using the same methodology, we calculated the effect of the SSA outreach letter for certain demographic groups and for beneficiaries in each state. We also obtained and analyzed data contained in CMS’s third party master file for the period May 1999 to May 2003 that tracks national Medicare savings programs enrollment. 
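The cohort-difference estimate described above can be sketched as a small calculation: the net effect attributed to the mailing is the difference in cumulative enrollment between the May cohort and the November (baseline) cohort as of December 2002. The monthly counts below are invented for illustration, not the actual MBR data.

```python
# Illustrative sketch of the cohort-difference estimate described above.
# Monthly enrollment counts are hypothetical, not the actual MBR figures.

def cumulative(monthly_counts):
    """Running total of new enrollments by month."""
    total, out = 0, []
    for n in monthly_counts:
        total += n
        out.append(total)
    return out

# New Medicare savings program enrollments per month, June-December 2002.
may_cohort = [900, 1400, 1600, 1300, 1100, 950, 800]   # sent letters in May
november_cohort = [600, 620, 640, 650, 660, 670, 700]  # sent letters in November (baseline)

cum_may = cumulative(may_cohort)
cum_nov = cumulative(november_cohort)

# Net effect attributed to the mailing: the difference in cumulative
# enrollment between the two cohorts as of December 2002.
net_effect = cum_may[-1] - cum_nov[-1]
print(net_effect)
```

Because the mailings went out in random order, the baseline cohort's enrollment approximates what the May cohort's enrollment would have been without the letter, so the December difference isolates the mailing's effect.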
Using these data, we examined how national Medicare savings enrollment trends compared before and after the 2002 SSA mailing. To determine how additional enrollment in the programs changed in selected states following SSA’s outreach and what outreach efforts these states undertook, we interviewed Medicaid officials in six states— Alabama, California, Louisiana, New York, Pennsylvania, and Washington. We selected these states based on several factors, including states with different levels of change in overall Medicare savings programs enrollment from 2002 to 2003, geographic diversity, relatively large populations of Medicare savings programs enrollees, and availability of data on program enrollment. We also reviewed CMS’s third party master file to identify how many beneficiaries in each state were enrolled in Medicare savings programs, and analyzed records from SSA’s MBR to estimate the additional enrollment in each state following the SSA mailing. In addition, we obtained information from each state to the extent available on its involvement with the SSA mailing, the state’s specific eligibility criteria for its Medicare savings program, outreach efforts conducted by the state to low-income Medicare beneficiaries, and state data on call and application volume before, during, and after the SSA outreach. We obtained information from SSA and CMS on their data reliability checks and any known limitations on the data they provided us. SSA and CMS perform quality controls, such as data system edits, on the MBR and the third party beneficiary master file, respectively. We concluded that their data were sufficiently reliable for our analysis. A few MBR variables have certain limitations. For example, some Medicare beneficiaries receive their Social Security payments electronically, and therefore may not keep the record of their mailing address current. 
For our analysis we only used the beneficiary’s state of residence, which is less likely to change; SSA reported that, even if a beneficiary’s address changes, the beneficiary often stays within the same state of residence. Finally, since it is optional for beneficiaries to identify their race, a number of Social Security recipients do not. However, sufficient numbers of individuals reported their race to allow us to analyze these data and also report missing or unknown values. SSA mailed 16.4 million letters in 2002 to potentially eligible Medicare beneficiaries notifying them about state Medicare savings programs. These letters were customized to include state-specific information, including a state contact number. These letters were sent in English or Spanish, depending on the beneficiary’s preference. Figure 2 provides a sample of the outreach letter sent to a beneficiary in Texas between May and November 2002. Figure 3 shows enrollment by state of the estimated 74,000 additional beneficiaries who enrolled in Medicare savings programs following the 2002 SSA mailing. Because these estimates are based on two cohorts of about 1.3 million beneficiaries each that represent a sample of the entire population of 16.4 million beneficiaries, we calculated 95 percent confidence intervals to reflect the potential for statistical error in projecting these estimates from the sample cohorts to the entire population. The small sample size in states with smaller populations results in larger confidence intervals for the estimates for these states. The highest additional increase in enrollment was in Alabama, in which an estimated 2.9 percent (with a 95 percent confidence interval of 2.6 percent to 3.3 percent) more of the beneficiaries who were sent the SSA letter enrolled than would have if the mailing had not occurred.
In three states (Montana, Utah, and Vermont) our analysis showed no additional or slightly negative enrollment following the SSA mailing, and because the confidence intervals for these and 13 other states overlap the numeric value zero, the data do not show a statistically significant change in additional enrollment in the Medicare savings programs following the 2002 SSA mailing for these states. The other 35 states showed a statistically significant increase in additional enrollment in the Medicare savings programs following the 2002 SSA mailing. On the basis of our analysis of SSA’s MBR, we estimate that enrollment in Medicare savings programs was about 74,000 higher for Medicare beneficiaries following the 2002 SSA mailing than it would have been without the mailing. This represents about 0.5 percent of the 16.4 million letters sent nationwide. However, this additional enrollment following the SSA mailing varied among demographic groups. Figure 4 shows the additional enrollment in Medicare savings programs following the 2002 SSA mailing by geographic region and demographic groups, including racial categories, sex, disability status, and age categories. Because these estimates are based on two cohorts of about 1.3 million beneficiaries each that represent a sample of the entire population of 16.4 million beneficiaries, we calculated 95 percent confidence intervals to reflect the potential for statistical error in projecting these estimates from the sample cohorts to the entire population. Additional enrollment following the 2002 SSA mailing was statistically significantly higher among beneficiaries in southern states compared to other geographic regions, minorities compared to white beneficiaries, beneficiaries with disabilities compared to beneficiaries without disabilities, and beneficiaries who were younger than 65 years compared to those who were 65 years or older. 
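The 95 percent confidence intervals reported above can be approximated with a normal-approximation (Wald) interval for a proportion. This is a simplified single-proportion sketch with illustrative counts; GAO's actual estimates compare two cohorts, which widens the interval.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Illustrative inputs: 13,000 additional enrollees observed among a cohort
# sample of 2.6 million letter recipients (roughly the 0.5 percent rate
# reported nationwide); these are not the actual study counts.
p, lo, hi = proportion_ci(13_000, 2_600_000)
print(f"{p:.4f} ({lo:.4f}, {hi:.4f})")
```

Note how the interval width shrinks with the square root of the sample size, which is why estimates for states with smaller populations carry wider confidence intervals.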
To assist low-income beneficiaries with their share of premiums and other out-of-pocket costs associated with Medicare, Congress has created four Medicare savings programs. Historic low enrollment in these programs has been attributed to several factors, including lack of awareness about the programs, and cumbersome eligibility determination and enrollment processes through state Medicaid programs. Concerned about this low enrollment, Congress passed legislation as part of the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) requiring the Social Security Administration (SSA) to notify low-income Medicare beneficiaries of their potential eligibility for Medicare savings programs. The statute also required GAO to study the impact of SSA's outreach effort. GAO examined what outreach SSA undertook to increase enrollment, how enrollment changed following SSA's 2002 outreach, and how enrollment changed in selected states following SSA's outreach and what additional outreach efforts these states undertook. GAO reviewed information obtained from SSA and the Centers for Medicare & Medicaid Services (CMS), analyzed enrollment data provided by SSA and CMS, and interviewed officials in and obtained data from six selected states (Alabama, California, Louisiana, New York, Pennsylvania, and Washington).
In response to a statutory requirement, SSA is carrying out an annual outreach effort to help increase enrollment in Medicare savings programs. This outreach effort consists of mailing letters to potentially eligible low-income beneficiaries nationwide as well as sharing data with states to assist with their supplemental outreach efforts. In 2002, SSA sent 16.4 million letters to low-income Medicare beneficiaries whose incomes from Social Security and certain other federal sources met the income eligibility criteria for Medicare savings programs. The 2002 letters provided eligibility criteria for programs in the beneficiary's home state and urged beneficiaries interested in enrolling to call a state telephone number provided. In addition to sending these letters, SSA provided states with a data file containing information on the beneficiaries to whom it sent letters. In 2003, SSA sent another 4.3 million letters to potentially eligible beneficiaries, and indicated that it intends to repeat the outreach mailing annually to newly eligible beneficiaries and a portion of prior letter recipients. Following SSA's outreach efforts in 2002, GAO estimated that more than 74,000 additional eligible beneficiaries enrolled in Medicare savings programs, 0.5 percent of all 2002 letter recipients, than would have likely enrolled without the letter. CMS enrollment data also showed that growth in Medicare savings programs enrollment for the year following SSA's mailing was nearly double that for each of the 3 prior years. Of the 74,000 additional enrollees, certain states and demographic groups had somewhat larger increases in enrollment than other groups. The highest additional enrollment increase was in Alabama, where 2.9 percent of letter recipients enrolled, followed by Delaware at 2.0 percent. Beneficiaries less than 65 years old, persons with disabilities, racial and ethnic minorities, and residents in southern states also had higher enrollment rates than other groups.
The percentage of letter recipients newly enrolling in Medicare savings programs following SSA's 2002 mailing ranged from 0.3 to 2.9 percent among the six states GAO reviewed. The varying effects on enrollment by state could be attributable to several factors, including the share of eligible beneficiaries enrolled in Medicare savings programs prior to the outreach, each state's ability to handle increased call and application volume, and a state's income and asset limits. Four states GAO reviewed reported increases in the numbers of calls received or applications mailed or received following the SSA mailing and then decreases after the mailing period ended. Each of the states GAO reviewed reported that the state or other stakeholders conducted additional outreach during SSA's 2002 outreach. SSA generally agreed with GAO's findings. CMS stated that it did not have specific comments on the report.
Compounded drugs may include sterile and nonsterile preparations, which, like all drug products, are made up of active and inactive ingredients. The active ingredient or ingredients in a compounded drug may be one or more FDA-approved products or may be bulk drug substances. Bulk drug substances—usually raw powders—are generally not approved by FDA for marketing in the United States. Examples of bulk drug substances that may be used to make compounded drugs include baclofen, a muscle relaxer, and gabapentin, an anticonvulsant, both of which may be compounded for use in topical pain medications. Active ingredients used to make a compounded drug—including bulk drug substances—are generally assigned national drug codes (NDC). FDA maintains a publicly available list of NDCs for FDA-approved products. NDCs for FDA-approved products and bulk drug substances are published in three national drug compendia, by First Databank, Medi-Span, and Truven Health Analytics. In addition, these compendia include drug pricing data by NDC, such as the average wholesale price (AWP) of FDA-approved products and bulk drug substances. A single FDA-approved product or bulk substance may be distributed by multiple manufacturers, in different forms or strengths, and by varying package sizes and, hence, may have multiple NDCs associated with it. The number of bulk drug substances that First Databank has added to its database—which First Databank tracks using NDCs—has increased significantly over the last 5 years, with the number of new NDCs added from 2009 through 2013 representing an increase of approximately 58 percent. (See fig. 1 for the number of NDCs for bulk drug substances that have been added to First Databank’s database from 2009 through 2013.)
Under section 503A of the Federal Food, Drug, and Cosmetic Act (FDCA), a compounded drug is exempt from certain FDCA requirements, including new drug approval and certain labeling and current good manufacturing practice requirements, provided the compounded drug meets certain criteria. These criteria include that the drug is compounded by a pharmacist or physician based on a valid prescription for an identified individual patient or in limited quantities in anticipation of receiving a valid prescription based on historical prescribing patterns (known as anticipatory compounding). The Drug Quality and Security Act of 2013 amended certain FDCA provisions as they apply to the oversight of compounded drugs to clarify the applicability of section 503A nationwide and to create a category of outsourcing facilities involved in sterile drug compounding under section 503B. Outsourcing facilities that register with FDA and provide information to the agency about the products that are compounded at the facility can qualify for exemptions from the FDCA’s new drug approval and certain labeling requirements. Outsourcing facilities, however, must comply with current good manufacturing practice requirements. In addition, the Drug Quality and Security Act requires FDA to develop lists of bulk drug substances that may be used for compounding and lists of drugs that present demonstrable difficulties to compound, among others. To develop these lists, FDA has issued requests for nominations of bulk drug substances that pharmacists and outsourcing facilities may use to make compounded drugs. According to FDA, inclusion of a bulk drug substance on an FDA list does not indicate that FDA has approved the drug; rather, inclusion on the list means that a pharmacist or outsourcing facility may qualify for exemptions from certain requirements of the FDCA if they compound using bulk drug substances included on the lists.
USP is a scientific nonprofit organization that sets standards for the identity, strength, quality, and purity of medicines, food ingredients, and dietary supplements. USP’s current suite of General Chapters for compounding includes, among others, Chapter 797 Pharmaceutical Compounding—Sterile Preparations, which provides procedures and requirements for compounding sterile preparations; and Chapter 795 Pharmaceutical Compounding—Nonsterile Preparations, which provides guidance on applying good compounding practices in the preparation of nonsterile compounded formulations for dispensing or administration to humans or animals. Separately, the National Council for Prescription Drug Programs (NCPDP) developed version D.0 of its standard for pharmacy transactions; entities covered by the standard were required to be fully compliant with version D.0 by January 1, 2012. Medicare Part A, Medicare’s inpatient medical benefit, provides benefits for drugs administered in inpatient settings, such as hospitals. Medicare Part B, Medicare’s outpatient medical benefit, provides limited benefits for drugs administered to patients in outpatient settings, such as physician offices. Medicare uses contractors to process and pay Part A and Part B claims. Medicare Part C—Medicare’s managed care benefit, also known as Medicare Advantage—offers beneficiaries plans that provide inpatient and outpatient drug benefits (Part A and Part B, respectively) through a network of managed care organizations. In addition, some Medicare Advantage organizations offer plans with pharmacy benefits similar to those provided under Medicare Part D. Medicare Part D provides a voluntary pharmacy benefit for Medicare beneficiaries. Beneficiaries may choose Medicare Part D plans from among those offered by private Part D-only sponsors. Part D beneficiaries may obtain drugs through retail and mail-order pharmacies. States establish and administer their own Medicaid programs within broad federal guidelines.
Medicaid programs vary from state to state, but all state Medicaid programs provide inpatient and outpatient medical benefits, which include benefits for drugs administered in inpatient hospital and outpatient physician office settings. In addition, all state Medicaid programs provide a prescription drug benefit under which they pay pharmacies for drugs dispensed to Medicaid beneficiaries. States report these payments to CMS, which provides federal matching funds to states to cover a portion of these costs. States may deliver these benefits using a fee-for-service or managed care delivery system. In a managed care delivery system, states typically contract with managed care organizations to provide some or all Medicaid covered services to beneficiaries. Private health plans in the commercial market provide medical benefits, which include benefits for drugs administered in inpatient hospital and outpatient physician office settings, and pharmacy benefits. Private health plans offered in the commercial market include individual and group market plans. Participants in the individual market purchase health insurance directly from an insurer, through a broker, or through a state health insurance exchange. Group market participants generally obtain health insurance through a group health plan, usually offered by an employer. These plans can include fee-for-service, preferred provider organization, and health maintenance organization options. Medicare, Medicaid, and private health insurer payment practices for compounded drugs dispensed in pharmacy settings allow for the payment of FDA-approved products but vary in whether they allow payment for bulk drug substances in these compounds.
As a result of version D.0 of NCPDP’s standard for pharmacy transactions, officials from the states, insurers, and Part D-only sponsors we spoke with told us that claims for compounded drugs dispensed in pharmacy settings contain sufficient information to identify when a compounded drug is dispensed and the ingredients used to make the drug by NDC. Therefore, these public programs and private health insurers are able to use NDC information from national drug compendia to determine whether the ingredients in the compounded drug are FDA-approved products or bulk drug substances. Officials from CMS, the five state Medicaid programs, four of the five insurers, and the two Part D-only sponsors provided us with information on their payment practices for compounded drugs, including those made with bulk drug substances. Of the five insurers we spoke with, one insurer owns and operates its pharmacies. Officials from this insurer told us that the insurer purchases drugs and drug ingredients, including some bulk drug substances used to make compounded drugs; therefore, the insurer’s payment practices in pharmacy settings differ from the other four insurers across the insurer’s Medicare Part D, Medicaid, and private health plans. Under Medicare Part D, federal payments are not available for non-FDA-approved products—including bulk drug substances—and inactive ingredients used to make a compounded drug. However, insurers that offer Medicare Part D benefits and Part D-only sponsors may choose to pay for bulk substances but may not submit these payments as part of the Part D transaction data CMS uses to determine federal payments to Part D plans.
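The NDC-based ingredient check described above can be sketched as a simple lookup that splits a compound claim's ingredients into payable FDA-approved products and excluded bulk substances. The NDC set, ingredient names, and claim below are entirely hypothetical; a real payer would build the lookup from FDA's published NDC list or a drug compendium.

```python
# Hypothetical sketch: classify compound-claim ingredients by NDC.
# The NDCs, names, and claim are invented for illustration only.

FDA_APPROVED_NDCS = {"00002-3227-30", "00074-3799-02"}  # hypothetical NDC set

claim_ingredients = [
    {"ndc": "00002-3227-30", "name": "lidocaine ointment"},  # hypothetical approved product
    {"ndc": "51927-1234-00", "name": "gabapentin powder"},   # hypothetical bulk substance
]

# Under a pay-FDA-approved-only policy, split the claim's ingredients.
payable = [i for i in claim_ingredients if i["ndc"] in FDA_APPROVED_NDCS]
excluded = [i for i in claim_ingredients if i["ndc"] not in FDA_APPROVED_NDCS]

print([i["name"] for i in payable])   # ingredients eligible for payment
print([i["name"] for i in excluded])  # bulk substances a payer may decline
```

This is the mechanism version D.0 enables: because each ingredient arrives on the claim with its own NDC, the payer can adjudicate ingredient by ingredient rather than accepting or rejecting the compound as a whole.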
Officials from two insurers offering Medicare Advantage plans that include Part D drug benefits and one Part D-only sponsor we spoke with told us that they generally pay pharmacies for each ingredient in the compounded drug that is an FDA-approved product and is otherwise eligible for payment under Part D and thus do not pay for bulk drug substances. Officials from the remaining two insurers and one Part D-only sponsor we spoke with told us that they pay pharmacies for bulk drug substances but do not include these payments as part of the Part D transaction data they submit to CMS. However, in July 2014, the Part D-only sponsor that currently pays pharmacies for bulk drug substances announced its plans to discontinue payments for most of these substances by March 2015. This decision to cease paying for bulk drug substances was a result of the sponsor’s internal analyses showing that pharmacies have been increasing their billed amounts for the ingredients not covered by Part D, including bulk drug substances, used to make compounded drugs; as a result, the sponsor’s costs for these ingredients began to exceed its costs for the ingredients covered by Part D in compounded drug claims in early 2014. Under Medicaid, CMS provides federal matching dollars to states that opt to pay for compounded drugs under the prescription drug benefit, including those that contain bulk drug substances, and has issued a notice to the states informing them of this policy. Officials from four of the five state Medicaid programs and two insurers that offer Medicaid managed care plans we spoke with told us that they generally do not pay for bulk drug substances used to make compounded drugs under the prescription drug benefit. 
The fifth state Medicaid program and the remaining two insurers pay for only those bulk drug substances that are listed on their formulary. Officials from the fifth state Medicaid program told us that pharmacies may request to add a bulk drug substance to the state formulary, and the state will evaluate the request and the need to do so. However, these officials also told us that the state has received no such requests in at least the last 4 to 6 months. For private health plans offered in the commercial market, officials from three insurers we spoke with told us that they generally do not pay for bulk drug substances used to make compounded drugs and pay only for those ingredients in the compound that are FDA-approved products under their prescription drug benefit. Officials from one of these three insurers told us that the insurer requires prior authorization for all compounded drug prescription claims; officials from the other two insurers told us that they require beneficiaries to obtain prior authorization only for compounded drug claims over a certain dollar amount, regardless of whether the drug’s ingredients are FDA-approved products or bulk drug substances. The fourth insurer pays for bulk drug substances as well as FDA-approved products, provided that the bulk drug substance is not listed as the primary ingredient on the claim. Once states, insurers, and sponsors determine which ingredients they will pay for in compounded drugs dispensed in pharmacy settings, they typically calculate the amount of the payment based on common drug pricing benchmarks. These pricing benchmarks apply to both FDA-approved products and bulk drug substances used to make compounded drugs. Officials from the states, insurers, and Part D-only sponsors we spoke with told us that they generally calculate payments to pharmacies based on a negotiated price for each ingredient, such as AWP, wholesale acquisition cost, or maximum allowable cost.
Some states, insurers, and one Part D-only sponsor calculate the price of each ingredient according to the pricing benchmarks and then pay pharmacies the lesser of the total calculated price for all included ingredients, the price submitted by the pharmacy, the usual and customary charge, or other payment calculations. Medicare, Medicaid, and private health insurers generally have similar payment practices for compounded drugs administered in outpatient settings, which are affected by the lack of specific billing codes for these drugs on claims. As a result, most of these public programs and private health insurers pay for compounded drugs, including both the FDA-approved products and bulk drug substances that comprise these drugs, because they may be unable to identify whether compounded drugs were administered and what individual ingredients were used to make the compounded drugs. For drugs administered in outpatient settings, public programs and private health insurers generally rely on specific codes for individual drugs in the Healthcare Common Procedure Coding System (HCPCS)—a standardized coding system used by public programs and private health insurers to help ensure medical claims are processed in a consistent manner—to indicate whether a beneficiary received a prescription drug, including a compounded drug, on an insurance claim. However, for the majority of compounded drugs administered in outpatient settings, no specific HCPCS codes exist; rather, providers typically bill for compounded drugs administered in outpatient settings using HCPCS codes for “not otherwise classified” drugs. Nonspecific HCPCS codes may also be used to bill for noncompounded drugs that lack specific HCPCS codes. Public programs and private health insurers may conduct further reviews of outpatient claims to determine whether the drug billed under a nonspecific HCPCS code is a compounded drug and to identify its ingredients in order to make payment decisions.
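The "lesser of" pharmacy payment calculation described above can be sketched in a few lines. All amounts are in cents and entirely hypothetical; a real payer would price each included ingredient from a negotiated benchmark such as AWP, wholesale acquisition cost, or maximum allowable cost.

```python
# Sketch of the "lesser of" pharmacy payment rule described above.
# Amounts are in cents and hypothetical.

ingredient_prices_cents = [1250, 475, 110]  # benchmark price per payable ingredient

calculated_total = sum(ingredient_prices_cents)  # total of the included ingredients
pharmacy_submitted = 2100                        # price billed by the pharmacy
usual_and_customary = 1925                       # pharmacy's cash (U&C) price

# The payer reimburses the lowest of the three amounts.
payment = min(calculated_total, pharmacy_submitted, usual_and_customary)
print(payment)  # 1835
```

Working in integer cents avoids floating-point rounding in money arithmetic, a common convention in claims systems.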
Given the difficulty in identifying these drugs on insurance claims, the insurers we spoke with generally do not have specific written policies regarding payment allowances or limitations for any FDA- or non-FDA-approved ingredients used to make compounded drugs administered in outpatient settings. In addition, while CMS has a national policy for payment of compounded drugs under Medicare Part B, the agency does not have any policies regarding federal Medicaid payments for compounded drugs administered in outpatient settings and likely provides some federal matching dollars to states to pay for compounded drugs, including those that contain bulk drug substances. States develop their own payment policies for these drugs. CMS, the five state Medicaid programs, and four of the five insurers provided us with information on whether they review outpatient claims, including requesting and reviewing additional documentation, with drugs billed under the nonspecific code. Of the five insurers we spoke with, officials from one insurer told us that because the insurer owns and operates its health care facilities and purchases drugs and drug ingredients—including some non-FDA-approved bulk drug substances used to make compounded drugs—the insurer is able to determine whether drugs administered to beneficiaries in outpatient settings are compounded drugs and what ingredients were used to make them. This insurer’s payment practices for reimbursing its health care facilities differ from the other four insurers across the insurer’s Medicare Advantage, Medicaid, and private health plans. Under Medicare Part B, CMS contractors manually review claims and any additional documentation, such as invoices for compounded drugs purchased by the provider. Most of the contractors do not require providers to submit NDCs for compounded drug ingredients to determine whether these ingredients are FDA-approved products or to obtain pricing information.
Officials from two of the insurers we spoke with that offer Medicare Advantage plans told us that they review all claims with compounded drugs billed under the nonspecific code and request additional information. Officials from one of these insurers told us that the insurer requires providers to submit NDCs for each ingredient to determine which ingredients are FDA-approved products and does not pay for bulk drug substances, unless the insurer determines that they are medically necessary. Officials from the other insurer told us that the insurer requires providers to submit supporting documentation, including invoices that list the name and amount of each ingredient in the compounded drug. A third insurer reviews claims and requests additional documentation only when the amount for a drug billed under the nonspecific HCPCS code on a claim exceeds a certain dollar amount but does not require NDCs to determine which ingredients are FDA-approved products. For these claims, the insurer uses NDCs primarily to calculate payments, likely for all ingredients in the compounded drug. The fourth insurer does not review claims with the nonspecific HCPCS code or collect additional information and pays for all ingredients in the compounded drug. Under Medicaid, officials from two state Medicaid programs told us that these states require providers to submit NDCs for each ingredient in compounded drugs billed under the nonspecific code and review the claims and the NDCs to determine medical necessity. However, neither state uses NDCs to determine which ingredients are FDA-approved products. Both states pay for compounded drugs, including those that contain bulk drug substances, if they determine the drugs are medically necessary. Officials from two other state Medicaid programs told us that the states require providers to submit NDCs for every HCPCS drug code and not just the nonspecific code, and providers may not submit more than one NDC with the nonspecific code.
For one of these states, officials told us that providers may not bill compounded drugs as single line items on claims; rather, providers must bill each ingredient with the nonspecific code and the ingredient’s NDC. This state uses the NDCs to determine which ingredients are FDA-approved products and does not pay for bulk drug substances. Officials from the other state told us that the state assigns a short list of NDCs for FDA-approved products to the nonspecific HCPCS code and updates it annually when CMS updates the HCPCS code database. Officials told us that the state’s claims processing system will automatically reject those claims with the nonspecific code that are accompanied by an NDC that is not on the state’s list. Two insurers offering Medicaid managed care plans process claims for compounded drugs billed under nonspecific HCPCS codes in a similar manner as they do for compounded drugs billed in their Medicare Advantage plans. One insurer that collects NDCs for drugs billed under nonspecific HCPCS codes for claims exceeding a certain dollar amount in its Medicare Advantage plans does not do so in its Medicaid managed care plans; rather, in its Medicaid managed care plans, this insurer collects information from the provider, either on the claim or in additional information submitted by the provider, about why a compounded drug is being administered. For private health plans offered in the commercial market, the four insurers require information about compounded drugs administered in outpatient settings and review claims in a similar manner as they do for compounded drugs billed in either their Medicare Advantage or their Medicaid managed care plans. Medicare Part B, the states, and the insurers vary in how they calculate payments for compounded drugs billed under nonspecific HCPCS codes on outpatient claims depending upon whether these entities review these claims. 
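The automated claims edit the state officials describe can be sketched as a membership check: a line billed under the nonspecific HCPCS code is rejected unless its NDC appears on the state's approved list. The HCPCS code and NDCs below are placeholders chosen for illustration, not the state's actual lists.

```python
# Hypothetical sketch of the state's automated claims edit: a line billed
# under the nonspecific drug code is rejected unless its NDC is on the
# state's approved list. Codes and NDCs are illustrative placeholders.

NONSPECIFIC_CODE = "J3490"  # an unclassified-drug HCPCS code, used here as the example
STATE_APPROVED_NDCS = {"00409-1234-01", "00143-9876-10"}  # hypothetical state list

def adjudicate(line):
    if line["hcpcs"] != NONSPECIFIC_CODE:
        return "process"  # lines with specific codes follow normal pricing
    if line.get("ndc") in STATE_APPROVED_NDCS:
        return "process"  # NDC is on the state's approved list
    return "reject"       # automatic rejection, as the officials described

print(adjudicate({"hcpcs": "J3490", "ndc": "00409-1234-01"}))
print(adjudicate({"hcpcs": "J3490", "ndc": "99999-0000-00"}))
```

Updating `STATE_APPROVED_NDCS` once a year mirrors the state's practice of refreshing its NDC list when CMS updates the HCPCS code database.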
Medicare contractors calculate payments for compounded drugs based on the invoice price submitted by the provider, which may also include taxes and shipping fees. Officials from the state Medicaid program that requires NDCs for each ingredient and pays only for FDA-approved products told us that they calculate payment based on either the Medicare Part B rate or the pharmacy rate of reimbursement for each FDA-approved product. The state that allows for the use of the nonspecific HCPCS code with only certain NDCs calculates payment based on the wholesale acquisition cost. Four insurers that offer Medicare Advantage, Medicaid managed care, and private health plans calculate payments based on either (1) the provider-submitted price for the drug, which may include payment for non-FDA-approved bulk drug substances; (2) common drug pricing benchmarks, such as wholesale acquisition cost, for the NDC of each FDA-approved product; or (3) the state Medicaid program's fee schedule. These insurers' payment calculations may depend on whether the plan is public or private and whether the insurer reviews claims and additional information. The insurer that owns and operates its health care facilities pays the price set by the manufacturer for drugs and drug ingredients.

In inpatient hospital settings, drugs, including compounded drugs, are generally not billed separately from the rest of the services the beneficiary received but are bundled together as part of the overall charge for the hospital stay or inpatient admission. Officials from CMS, all five states, and all but one of the insurers we spoke with told us that they cannot determine whether a beneficiary received a compounded drug.
Because these drugs are bundled, Medicare Part A, Medicaid, and private health insurers generally pay a preset rate for the cost to deliver inpatient services, including any compounded drugs administered as part of the services; the use of a particular drug—including a compounded drug—would not generally change the inpatient payment rate for a given service.

Medicare's Part B national payment policy for compounded drugs is unclear. The policy notes that federal law requires that drugs be reasonable and necessary in order to be covered under Medicare Part B and indicates the agency's view that, to be considered reasonable and necessary, FDA must have approved the drug for marketing. Accordingly, the policy instructs Medicare contractors and insurers that offer Medicare Advantage plans to deny payments for drugs that have not received final marketing approval by FDA. The policy also indicates that payment is available for compounded drugs; however, it does not stipulate whether payment is available for ingredients in compounded drugs that are FDA-approved products only or whether it is also available for those ingredients that are bulk drug substances that have not been approved by FDA. As noted above, most of the Part B contractors do not require providers to submit NDCs for compounded drug ingredients to determine whether these ingredients are FDA-approved products or bulk drug substances and, therefore, may be paying for ingredients that are not FDA-approved. Because Medicare Part B policy for compounded drugs is unclear, it is uncertain whether payment for such ingredients is consistent with that policy. In addition to having unclear policies, CMS does not know how much it has paid for compounded drugs under Part B, the number of compounded drug claims it paid, or whether compounded drugs paid for under Part B were made using bulk drug substances. Having access to such information may help ensure that payment for such drugs is consistent with CMS policy.
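The gap in the Part B policy can be stated schematically: the policy dictates an outcome for non-FDA-approved drugs generally and for compounded drugs generally, but not for non-FDA-approved ingredients inside a compounded drug. The sketch below is illustrative only; the function and its labels are invented for exposition and are not CMS's actual adjudication logic:

```python
# Schematic of the Part B policy gap described above: contractors are told to
# deny drugs without FDA marketing approval, yet payment is available for
# compounded drugs, and the policy is silent on bulk drug substances
# (ingredients that are not FDA-approved) within a compounded drug.
def part_b_policy_outcome(is_compounded: bool, ingredient_fda_approved: bool) -> str:
    if not is_compounded:
        # Non-compounded drug: the policy's general rule applies directly.
        return "pay" if ingredient_fda_approved else "deny"
    if ingredient_fda_approved:
        # Compounded drug made from an FDA-approved product: payment available.
        return "pay"
    # Compounded drug containing a bulk drug substance: the policy is silent.
    return "unspecified"

print(part_b_policy_outcome(False, False))  # prints "deny"
print(part_b_policy_outcome(True, True))    # prints "pay"
print(part_b_policy_outcome(True, False))   # prints "unspecified"
```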
CMS lacks this information because the agency does not collect any information that the contractors responsible for processing Medicare Part B claims obtain during their review of claims with the nonspecific HCPCS code, including amounts paid to providers for compounded drugs based on the invoice price; CMS officials attributed this gap to limitations in claims processing systems. In April 2014, HHS OIG reported on payments for compounded drugs in Medicare Part B and found that neither CMS nor its contractors track compounded drug claims, confirming what CMS officials told us: neither the agency nor the contractors can determine the total number of these claims or CMS's payments for compounded drugs. HHS OIG recommended that CMS establish a method specifically to identify compounded drugs on those Part B claims that contain the nonspecific HCPCS code in order to track compounded drug claims, as these claims undergo manual review because of the code and not because they are for compounded drugs. HHS OIG also found that, while most Medicare contractors require providers to list the individual ingredients that made up a compounded drug billed under the nonspecific HCPCS code in a text field on a claim, they do not require NDCs for these ingredients. NDCs could be used to (a) identify whether an ingredient is an FDA-approved product or a bulk drug substance and (b) help determine ingredient price for the purposes of calculating payment. In August 2014, CMS officials told us that the agency was working to implement HHS OIG's recommendation regarding a compounded drug indicator for Part B claims. Without specific information indicating whether a beneficiary received a compounded drug in an outpatient setting or what ingredients made up the compounded drug, CMS may be paying for such drugs in a manner that is inconsistent with its policy.
Officials from the public programs and private health insurers we spoke with generally agreed that payment practices for compounded drugs may affect the use of these drugs. Officials from CMS, one state Medicaid program, three of the five insurers, the two Part D-only sponsors, and the three PBMs with whom we spoke stated that payment practices for compounded drugs did affect their use, specifically when public programs and private health insurers excluded payments for bulk drug substances in retail pharmacy settings. In most cases, payment exclusions for bulk drug substances resulted in a decreased use of compounded drugs in these insurers’ plans, particularly for compounded drugs dispensed in pharmacy settings. For example, according to CMS, a 2012 analysis of Part D data showed that compounded drugs comprised less than one percent of all Part D claims in that year, which is likely due at least in part to Part D drug rules that exclude payment for bulk drug substances. CMS officials told us that the small number of claims for compounded drugs is a result of the law limiting Medicare Part D payment to FDA-approved drugs. The Part D sponsor that pays for bulk drug substances in compounded drugs saw its costs for these bulk drug substances increase significantly between January 2012 and March 2014 and, as a result, will cease payments for these substances by March 2015. In contrast to this sponsor, officials from two insurers that do not pay for bulk drug substances in their Medicare Advantage plans that include Part D benefits told us that payments for compounded drugs have remained generally steady, with no significant increases or decreases. The experiences of the Part D sponsor and the two insurers suggest that Part D payment practices may affect the use of compounded drugs. 
Further, officials from one of the three insurers that pay for ingredients that are FDA-approved products only and do not pay for bulk drug substances in their private health plans in the commercial market told us that these practices have resulted in a decrease in compounded drug claims and payments. For example, officials from one of the insurers told us that, in 2011, the insurer’s payments for compounded drugs decreased by 205 percent in the quarter after it ceased paying for bulk drug substances. Officials from one insurer that limits payment to only FDA-approved drugs in its private health plans in the commercial market and officials from one PBM expressed concern that manufacturers of bulk drug substances and outsourcing facilities are inflating the AWP of the bulk drug substances used to make compounded drugs. Further, these officials, as well as officials from two other PBMs, told us that outsourcing facilities are actively marketing their products to physicians, who may not know what ingredients these products contain or be sure of the compounded products’ clinical benefits. Officials from one PBM said several of these outsourcing facilities are pushing certain compounded drugs onto the market through partnerships they have established with physicians who own shares in these facilities. Officials from the majority of associations representing health care providers we spoke with cited factors other than payment practices that affect the use of compounded drugs—primarily individual patient need and drug shortages. Officials from CMS and some of the states and insurers we spoke with also told us that these factors affected the use of compounded drugs. 
For example, officials from CMS, 11 associations, 3 insurers, and 2 pharmaceutical standards-setting organizations told us that physicians primarily prescribe compounded drugs due to individual patient needs such as (1) the lack of a commercially available product for a patient's specific treatment needs; (2) a patient's allergy to an inactive ingredient, such as a dye or a filler, in an available FDA-approved drug; (3) a patient's need for a different delivery format for the drug, such as a patient who cannot swallow pills and needs a liquid formulation; or (4) a patient's need for custom dosage requirements, such as a pediatric patient who needs a lower dosage of a commercially available drug. In addition to individual patient need, officials from 7 of the 11 associations and 2 state Medicaid programs we spoke with also cited shortages of certain FDA-approved drugs as a significant factor contributing to the need to prescribe and use compounded drugs. Officials from one association told us that nutrition drugs that need to be administered intravenously are frequently in shortage. Patients who need these nutrition drugs sometimes require a combination of more than 20 drugs, all of which are FDA-approved. However, according to officials from this association, many of these FDA-approved nutrition drugs have been in shortage since 2010 and, therefore, clinicians have to use compounded intravenous nutrition drugs made with bulk drug substances instead.

Compounded drugs account for a small but likely growing percentage of all prescription drugs dispensed in retail pharmacies, but the number of these drugs administered in outpatient settings—as well as how much public programs and private health insurers are paying for them—is unknown.
The lack of information about use and payments results from the fact that, unlike retail claims, outpatient health insurance claims, including Medicare Part B claims, may not contain information specific enough to identify whether a compounded drug was administered or what ingredients were used to make it. Medicare Part B policy for payment for compounded drugs is also unclear and instructs CMS contractors to deny payment for non-FDA-approved drugs but is silent with respect to whether payment is available for ingredients—namely, bulk drug substances—in a compounded drug that are not FDA-approved. In addition, CMS may be unable to appropriately apply Medicare payment policy because CMS’s Medicare contractors do not collect information needed to determine whether each ingredient used to make a compounded drug administered in an outpatient setting is FDA-approved. As a result, CMS may have paid for compounded drugs containing bulk drug substances in outpatient settings inconsistently with its payment policy and incurred additional expenses in the process. In April 2014, HHS OIG recommended that CMS establish a method to identify Part B claims for compounded drugs, which could also help CMS to appropriately apply its payment policy. To help ensure that Medicare Part B is able to appropriately apply its payment policy for compounded drugs, we recommend that the Secretary of Health and Human Services direct the Administrator of the Centers for Medicare & Medicaid Services to clarify the Medicare Part B payment policy for compounded drugs and, as necessary, align payment practices with the policy. 
For example, CMS should consider updating the Medicare Part B payment policy to either explicitly allow or restrict payment for compounded drugs containing bulk drug substances and, as appropriate, develop a mechanism to indicate on Medicare Part B claims both whether a beneficiary received a compounded drug and the drug's individual ingredients in order to properly apply this policy and determine payment.

We provided a draft of this report to HHS for review, and its comments are reprinted in appendix I. In its comments, HHS disagreed with our recommendation that CMS clarify the Medicare Part B payment policy and align payment practices with the policy as necessary. HHS also provided technical comments, which we incorporated as appropriate. In disagreeing with our recommendation to clarify the Medicare Part B payment policy, HHS stated that it did not believe that clarifying the policy to specifically address payments for bulk drug substances was necessary at this time. HHS commented that the Part B payment policy does not currently distinguish between compounded drugs that contain bulk drug substances and compounded drugs that contain FDA-approved products but, rather, recognizes differences between compounded drugs and FDA-approved manufactured drugs. According to HHS, the policy allows for payment for compounded drugs prepared in a manner that does not violate the FDCA and does not permit payment for drugs manufactured or otherwise prepared in a manner that is inconsistent with the FDCA. As we state in the report, the Part B policy indicates the agency's view that, to be eligible for Medicare coverage, FDA must have approved the drug for marketing, while at the same time indicating that payment is available for compounded drugs. We also note that neither bulk drug substances nor compounded drugs, regardless of their ingredients, are generally approved by FDA.
Based on HHS’s comments, CMS does not consider the FDA-approval status of compounded drug ingredients in making Part B payment determinations and focuses on whether the drug was prepared in a manner consistent with the FDCA. We appreciate HHS’s explanation of this distinction; however, we maintain that the Part B policy should be clarified to explicitly contain this exception for compounded drugs. With regard to HHS’s comment that payment is not made for drugs manufactured or otherwise prepared in a manner that is inconsistent with the FDCA, including cases in which FDA has determined that a company is producing compounded drugs in violation of the FDCA—such as a company compounding drugs on a large scale that resembles manufacturing—the extent to which CMS ensures compliance with this policy is unclear. As noted by HHS OIG in its April 2014 report on Medicare Part B payments for compounded drugs, Medicare contractors generally review claims for compounded drugs to determine payment amounts and assign payments on the basis of the description of the drug given by the provider on the claim. The HHS OIG report was silent on whether the contractors also review these claims to determine who produced the compounded drug but noted that CMS contractors, whose specific policies and requirements for claims vary, do not necessarily require providers to submit this information. As we note in the report, and as the HHS OIG report confirmed, CMS does not collect information from the Medicare contractors on payments for compounded drugs, and neither CMS nor its contractors track claims or payment amounts for these drugs. Therefore, CMS does not know what compounded drugs the contractors paid for or whether the contractors were able to determine whether the company that produced the compounded drug had been found by FDA to be in violation of the FDCA for the purposes of denying payment in adherence with the Part B policy. 
In addition, it is unclear whether CMS contractors are collecting sufficient information to identify specific bulk drug substances used to make compounded drugs. This information will be necessary for the contractors to make payment determinations when FDA finalizes its lists of bulk drug substances that may or may not be used for compounding under the FDCA. In addition, HHS commented on the limitations of the Part B claims systems that would prevent CMS from collecting detailed information, such as NDCs of drug ingredients, from claims. However, HHS noted that CMS concurred with the HHS OIG recommendation to develop a modifier or other mechanism to identify claims for compounded drugs, which is consistent with our recommendation. In light of CMS’s inability to obtain detailed information about compounded drug ingredients collected by its contractors, we remain concerned that the agency is unable to ensure that payments are made in accordance with the Part B policy. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Rashmi Agarwal, Assistant Director; Shana R. Deitch; Sandra George; Jyoti Gupta; and Laurie Pachter made key contributions to this report. Compounded Drugs: TRICARE’s Payment Practices Should Be More Consistent with Regulations. GAO-15-64. Washington, D.C.: October 2, 2014. 
Prescription Drugs: Comparison of DOD, Medicaid, and Medicare Part D Retail Reimbursement Prices. GAO-14-578. Washington, D.C.: June 30, 2014.

Medicaid Prescription Drugs: CMS Should Implement Revised Federal Upper Limits and Monitor Their Relationship to Retail Pharmacy Acquisition Costs. GAO-14-68. Washington, D.C.: December 19, 2013.

Drug Compounding: Clear Authority and More Reliable Data Needed to Strengthen FDA Oversight. GAO-13-702. Washington, D.C.: July 31, 2013.

Drug compounding is a process whereby a pharmacist mixes or alters ingredients to create a drug tailored to the medical needs of an individual patient. Compounded drugs make up 1 to 3 percent of the $300 billion domestic prescription drug market. Compounded drugs and some of their ingredients are not approved by FDA. Members of Congress have questioned whether federal health care programs' payment practices create incentives for providers to prescribe these drugs. GAO was asked to examine public programs' and private health insurers' payment practices for compounded drugs. GAO examined (1) Medicare's, Medicaid's, and private health insurers' payment practices for compounded drugs and (2) the extent to which these payment practices for compounded drugs affect their use. GAO reviewed the payment policies of CMS, the five largest state Medicaid programs, five of the largest insurers that offer both Medicare and Medicaid managed care plans as well as private plans, and the two largest Medicare Part D-only sponsors. GAO also interviewed officials from these entities and from provider associations. Medicare, Medicaid, and private health insurers have varying payment practices for compounded drugs, depending upon whether compounded drugs and their ingredients can be identified on health insurance claims, and Medicare's Part B payment policy for these drugs is unclear.
For drugs dispensed in pharmacy settings, claims contain sufficient information for public programs and private insurers to identify compounded drugs and their ingredients. These programs and plans use claims information to determine whether compounded drug ingredients are products approved by the Food and Drug Administration (FDA) or are bulk drug substances—usually raw powders—that are generally not approved by FDA. Two of the five insurers and one of the two Medicare Part D-only sponsors we spoke with generally do not pay for these substances in their Medicare Part D plans. Four of the five state Medicaid programs and three of the five insurers offering private health plans we spoke with generally do not pay for ingredients that are bulk drug substances in their respective plans. For drugs administered in outpatient physician office settings, claims lack information to identify compounded drugs because there are no specific billing codes for most of these drugs. Therefore, Medicare, most state Medicaid programs, and most private health insurers pay for these compounded drugs. Some public programs and private health insurers conduct further claims reviews for compounded drugs billed under nonspecific codes, including obtaining information that can be used to determine FDA-approval status of compounded drug ingredients, and make payment decisions based on this information. Additionally, the Centers for Medicare & Medicaid Services (CMS)—the agency within the Department of Health and Human Services (HHS) responsible for administering the Medicare program—has a national payment policy for compounded drugs under Medicare Part B, but this policy is unclear. The policy generally states that drugs must be FDA-approved to be paid for under Medicare. Payment may be available for compounded drugs, but the policy does not stipulate whether payment is available for ingredients that are bulk drug substances, which are generally not FDA-approved. 
CMS contractors who process Part B claims do not collect information on the FDA-approval status of drug ingredients and, therefore, may be paying for ingredients that are not FDA-approved products. Thus, it is uncertain whether Medicare payments are consistent with Part B policy. Payment practices of public programs and private health insurers may affect the use of compounded drugs when specific payment exclusions exist, such as those for bulk drug substances; however, other factors also affect the use of compounded drugs. For example, insurers that restrict payment for compounded drugs dispensed in pharmacy settings in their private health plans to only ingredients that are FDA-approved products saw significant decreases in both the number of claims and the amount of payments for these drugs after they implemented these restrictions. Individual patient need, such as the need for custom dosages, and drug shortages also affect the use of compounded drugs.

GAO recommends that CMS clarify its Medicare Part B payment policy to either allow or restrict payment for compounded drugs containing bulk drug substances and align payment practices with this policy. HHS disagreed with this recommendation, stating that the Part B payment policy does not depend on drug ingredients. GAO maintains that the policy needs clarification.
DOE program offices manage the department's 17 national laboratories and support the department's diverse missions, as follows:

The Office of Science oversees 10 national laboratories and, for fiscal year 2012, received appropriations of more than $4.3 billion to operate these laboratories. The Office of Science is the nation's single largest funding source for basic research in the physical sciences, supporting research in energy sciences, advanced scientific computing, and other fields.

NNSA oversees 3 national laboratories and, for fiscal year 2012, received appropriations of more than $4.6 billion to operate these laboratories. NNSA helps support understanding of the physics associated with the safety, security, and reliability of nuclear weapons and maintains core competencies in nuclear weapons science, technology, and engineering.

The Office of Nuclear Energy oversees 1 laboratory and received appropriations for fiscal year 2012 totaling more than $1 billion to operate this laboratory. The primary mission of the Office of Nuclear Energy is to advance nuclear power as a resource capable of meeting the nation's energy, environmental, and national security needs by resolving technical, cost, safety, proliferation resistance, and security barriers.

The Office of Fossil Energy oversees 1 laboratory and received appropriations for fiscal year 2012 totaling more than $551 million to operate this laboratory. The Office of Fossil Energy's primary mission is to ensure reliable fossil energy resources for clean, secure, and affordable energy while enhancing environmental protection.

The Office of Energy Efficiency and Renewable Energy oversees 1 laboratory and received appropriations for fiscal year 2012 totaling over $271 million to operate this laboratory. The Office of Energy Efficiency and Renewable Energy's mission is to develop solutions for energy-saving homes, buildings, and manufacturing; sustainable transportation; and renewable electricity generation.
The Office of Environmental Management oversees 1 laboratory and, for fiscal year 2012, received appropriations of about $7.6 million to operate this laboratory. The Office of Environmental Management is responsible for cleaning up hazardous wastes left from decades of nuclear weapons research and production. See figure 1 for the locations of the laboratories managed by the various program offices. DOE also maintains individual site offices that provide federal oversight at 16 of the 17 laboratories. In addition, DOE's field CFOs are responsible for overseeing financial activities at each location. For a complete list of DOE's national laboratories and more information on each, see appendix II.

DOE's WFO program aims to provide benefits to organizations such as U.S. companies or academic institutions that have work performed at the laboratories and to the national laboratories as well. For example, according to a 2011 DOE report to Congress, the WFO program helps to leverage DOE investment in the laboratories by further developing technical expertise for accomplishing critical tasks needed to fulfill DOE's mission priorities. Furthermore, according to the DOE report, the WFO program can also help the national laboratories to retain highly trained scientists and engineers in support of DOE and national priorities by, for example, providing opportunities for them to stay engaged during times when mission critical work has slowed. According to officials from DOE and laboratories and DOE documentation, a potential WFO project begins when an entity desiring to have work performed becomes aware of the capabilities available at a laboratory. Entities can become aware of these capabilities in a number of different ways, including as a result of networking by laboratory researchers and scientists, by responding to funding opportunity announcements of other federal agencies, or through information posted on the laboratories' public websites.
Once a potential project has been identified, representatives of the entity (i.e., the sponsor) and the laboratory begin to negotiate the work to be performed and the estimated costs. When the laboratory and sponsor have agreed to the terms of work, DOE site officials review the proposed WFO agreement. Finally, if the terms are approved by DOE and the sponsor has provided certification of funding or advance payment for the work, a DOE contracting officer certifies that the requirements have been met for the laboratory to conduct the work. DOE's WFO order establishes DOE policy and requirements for accepting, authorizing, and administering such work. This order applies to work for all non-DOE entities except the Department of Homeland Security (DHS); work for DHS is covered by a separate order and is not considered WFO work. In addition, under the WFO order, each site office is responsible for establishing its own procedures and processes for the review and approval of work performed under WFO agreements and for conducting periodic reviews of laboratory policies and procedures for negotiating and administering WFO projects. DOE's pricing order establishes the requirements for setting prices and charges for materials and services sold or provided to outside entities, either directly or through the department's M&O contracts, including under the WFO program.

The total amount of work performed under the WFO program, as measured by costs incurred for WFO projects, has remained relatively constant over the last 5 fiscal years overall, but the amount of WFO work performed and the sponsors of the work varied widely among the laboratories. In fiscal years 2008 through 2012, DOE performed about $2 billion of work annually under the WFO program, as measured by costs incurred (see fig. 2).
Although the amount of work performed under the WFO program has remained relatively constant over the last 5 years, it has declined slightly relative to total work performed at the laboratories during this period. From fiscal year 2008 through fiscal year 2011, total work performed at the laboratories increased from $12.0 billion to $17.1 billion and fell to $16.3 billion in fiscal year 2012. As a result, the proportion of WFO performed as a percentage of total work performed declined from 17 percent in fiscal year 2008 to about 13 percent in fiscal year 2012. In fiscal year 2012, more than 6,500 WFO projects were carried out at DOE’s laboratories. During the period we reviewed, each of DOE’s 17 national laboratories performed some work on WFO projects, with some laboratories involved in significantly more WFO activities than others. In fiscal year 2012, the amount of work performed by the laboratories on WFO projects ranged from about $1.5 million at the National Energy Technology Laboratory to over $803 million at the Sandia National Laboratories. The proportion of WFO activities relative to all work at the laboratories also varied widely. Specifically, work performed on WFO projects as a percentage of total work performed at the laboratories ranged from less than 1 percent at the National Energy Technology Laboratory to nearly 33 percent at the Sandia National Laboratories (see table 1). According to DOE officials, the variation in the amount of WFO performed is due in large part to differences in the core mission capabilities of each laboratory. For example, Sandia National Laboratories has extensive expertise in systems engineering, a capability that is heavily utilized by other federal agencies. According to the officials, other laboratories’ capabilities are less in demand and, therefore, less WFO work is performed at these laboratories. 
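The percentage decline above follows directly from the dollar figures cited: roughly $2 billion of WFO work each year against total laboratory work of $12.0 billion in fiscal year 2008 and $16.3 billion in fiscal year 2012 (WFO work in fiscal year 2012 was about $2.1 billion). A quick check of that arithmetic:

```python
# WFO work performed vs. total laboratory work, in billions of dollars,
# using the figures cited in the report for fiscal years 2008 and 2012.
wfo = {2008: 2.0, 2012: 2.1}      # WFO held roughly constant at ~$2 billion
total = {2008: 12.0, 2012: 16.3}  # total lab work grew, ending at $16.3 billion

for year in (2008, 2012):
    share = wfo[year] / total[year] * 100
    print(f"FY{year}: WFO share ~{share:.0f}%")
# Prints a share of about 17 percent for FY2008 and about 13 percent for
# FY2012, matching the percentages stated in the report.
```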
DOE’s laboratories carried out a variety of WFO projects for many different sponsors, but the majority of the work was for other federal agencies—in particular, the Department of Defense (DOD). Of the $2.1 billion in work performed on WFO projects in fiscal year 2012, over $1.8 billion, or about 88 percent, was for other federal agencies, with DOD sponsoring about $1.5 billion, or 71 percent of the work performed (see fig. 3). The majority of the work performed under the program for DOD for fiscal year 2012 was carried out at six national laboratories—Idaho, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia. The type of work sponsored by DOD included a variety of projects. For example, Pacific Northwest National Laboratory has developed a software technology for the Air Force that performs, among other things, automatic topical analysis and organization of large collections of documents graphically. The laboratory is training the Air Force on applying the software to a diverse set of Air Force data to help analysts extract information to better identify potential uses for emerging technologies. In another example, the Idaho National Laboratory has applied its expertise in the area of laser decontamination of surfaces to develop and deploy a laser cleaning system for the Army. The objective of the project is to develop a system to, among other things, remove chemical agent residues from contaminated surfaces and equipment so that equipment can be reused, reduce personnel exposure risks, and reduce secondary waste streams. Other federal agencies also sponsored a variety of WFO projects in fiscal year 2012, such as climate change and energy efficiency research conducted at the Lawrence Berkeley National Laboratory for the Environmental Protection Agency, and plutonium production research conducted at the Idaho National Laboratory for the National Aeronautics and Space Administration’s (NASA) space exploration program. 
For more information on the amount and type of work carried out at each laboratory, see appendix III. Nonfederal entities sponsored $251.5 million, or 12 percent, of the WFO performed at the laboratories in fiscal year 2012. Sponsors and types of projects included the following:

State and local governments. For example, officials in Sonoma County, California, entered into an agreement with the Lawrence Berkeley National Laboratory to conduct research on local rivers and dams to better understand the natural riverbed processes that occur as a function of dam and pumping operations, including sedimentation and evolution of biomass. These processes can lead to clogging of the riverbed, which in turn can limit the ability to pump water from beneath the riverbed as is needed for subsequent distribution as drinking water in the county. Under the current phase of this WFO project, the laboratory is using its hydrological, geochemical, and biological tools and expertise to quantify the riverbed clogging mechanisms and to represent them in a computer model.

Colleges and universities. For example, researchers at the University of Chicago studied the structures of proteins and, in particular, how proteins in lung cancer patients are affected by certain drugs. These researchers set up a WFO agreement with Argonne National Laboratory, under which the laboratory drew upon its extensive protein sequence database and its research capabilities to assist the university in analyzing, modeling, detecting, and characterizing proteins in lung cells. Under this agreement, Argonne National Laboratory also updated and maintained a database of protein structures that will be accessible to the broad biology community.

Private industry. For example, GE Global Research, a private research laboratory, has entered into a WFO agreement with Lawrence Livermore National Laboratory to help refine wind prediction and control capabilities at wind farms.
Under this agreement, the laboratory is evaluating high-resolution wind farm modeling tools in natural terrain using high-performance computers to run wind simulation and forecasting models.

Foreign entities. For example, following the events at the Fukushima Daiichi Nuclear Power Station resulting from the earthquake and tsunami in March 2011, Tokyo Electric Power Company (TEPCO) entered into a WFO agreement with the Savannah River National Laboratory and the Pacific Northwest National Laboratory. The laboratories were to evaluate and define the scope of work for TEPCO to accomplish a number of tasks, including the prevention of underground water contamination and the treatment and disposal of nuclear waste. The laboratories each drew upon their expertise in these areas to develop a schedule for how the work could proceed and to identify the necessary equipment, materials, facilities, personnel, and the costs to perform the work.

DOE has not ensured that WFO program requirements are consistently met. Specifically, DOE has not ensured compliance with requirements for the approval of WFO projects, cost recovery, program reviews, and annual reporting. According to DOE’s WFO order, WFO projects must meet specific DOE requirements, including being consistent with or complementary to DOE’s missions and those of the laboratories, not hindering those missions, and not placing the laboratories in direct competition with the domestic private sector. A DOE contracting officer or other authorized DOE designee is required to determine whether a proposed WFO project has met all of these requirements before approving or certifying the work. These determinations may not be delegated to the laboratories.
However, DOE officials from site offices at 8 of the 17 laboratories told us that the laboratories provide written justification in the WFO package to support, or determine, that some or all of the requirements are met and that the DOE officials have often accepted the laboratories’ determinations without taking steps to independently verify them. DOE officials cited various reasons for relying on the laboratories to make the determinations. For example, an official from one site office told us that he relies on the laboratory’s determination that WFO projects are consistent with the mission. He explained that he does not believe that the laboratory would accept work that would be inconsistent with its mission. Similarly, an official at another DOE site office explained that she relies on the laboratory’s determination that the work cannot be performed by the private sector because she believes the laboratory staff are better informed about the capabilities available in the private sector. By relying on the laboratories’ determinations without taking steps to independently verify the information, DOE does not have assurances that the WFO projects selected meet DOE’s requirements. The DOE Office of Inspector General has identified the WFO program as a priority area and is reviewing laboratories with major WFO programs to determine whether they meet internal control and compliance requirements established by DOE.

DOE’s pricing order requires that the department charge the full cost of materials and services provided to external organizations, including the amounts charged for work under the WFO program.
DOE has not ensured, however, that all laboratories have formal, written procedures for developing WFO project budgets or charging costs to ongoing projects, two important steps for recovering the full costs of materials and services provided. Specifically, five laboratories have detailed, written procedures for developing WFO project budgets and charging costs to projects that include, among other things, detailed instructions on the types of costs to include in a WFO project budget and specific instructions for calculating the costs and for ensuring that the costs of the work are charged to the sponsor. However, while the remaining laboratories did provide a description of their WFO budget development and cost charging processes, five of these laboratories had limited written procedures or tools in place for these processes. For example, one laboratory had a template that could be used to prepare a WFO budget but did not have detailed, written requirements or procedures for using this tool. The remaining seven laboratories did not provide any formal, written procedures to guide the development of WFO project budgets or the charging of costs. Without detailed, written guidance, DOE may not be able to ensure that its cost recovery requirements are consistently met.

In addition, DOE field CFOs do not always review costs charged to WFO projects in accordance with DOE’s pricing order, which requires that DOE’s field CFOs conduct biennial reviews of the pricing of materials and services and other costs charged to WFO projects at the laboratories. Under the pricing order, the reviews are required to include steps to ensure that (1) prices charged conform to the requirements of OMB Circular A-25 and departmental pricing policy or other legislative authority, as applicable; (2) adequate documentation exists for prices established for materials and services; and (3) exceptions to the full cost recovery requirements were limited to those authorized in the order.
The reviews are intended to provide assurance to the department’s CFO that the full cost of the work, including all applicable direct and indirect costs, is charged to the sponsor of each WFO project. We requested DOE’s most recent biennial review reports of WFO project pricing for each of the 17 laboratories. Reviews for 16 of the 17 laboratories were provided. These reviews were conducted by the seven field CFOs that oversee the laboratories and covered fiscal years 2010 and 2011. We found that 6 of the 16 reviews did not include steps to ensure that prices for WFO projects conformed to DOE requirements. For example, in order to ensure that adequate documentation exists for prices established for materials and services charged to sponsors of WFO projects, reviewers need to examine pricing documentation for a sample of WFO projects. Reports for 6 reviews indicated that, for a sample of WFO projects, pricing documentation for overhead and administrative costs was examined, but the reports did not provide evidence that documentation to support direct costs—such as labor and materials—was examined. Furthermore, one biennial pricing review was conducted by the laboratory instead of by the DOE field CFO, as required, and no details on the review steps were provided in the report. Reports for 9 of the 16 reviews indicated that a sample of WFO projects was reviewed. Reviewing a sample of WFO projects can provide useful information, such as identifying errors in costs charged to those projects. For example, 2 of the reviews that included an examination of a sample of WFO project documentation identified errors in general and administrative costs charged to WFO projects that resulted in either undercharges or in overcharges to the sponsor. In the case of undercharges, DOE paid part of the sponsor’s cost of the WFO project. In the case of overcharges, the sponsor subsidized a portion of DOE’s mission work. 
The DOE Office of Inspector General has also identified errors related to the charging of general and administrative costs to WFO projects in its audits of WFO programs at the laboratories. For example, a 2013 review at Lawrence Berkeley National Laboratory found that costs of administering WFO projects were allocated in part to other DOE projects, resulting in an estimated $400,000 in project costs that the sponsor did not reimburse to the laboratory.

DOE’s WFO order requires each DOE program office to annually review the WFO program at each of its laboratories to ensure compliance with WFO policies and procedures. The order does not specify what the reviews should include. As a result, the program offices varied in what they consider to be their annual review. For example, DOE Office of Science program officials explained that they believe their annual WFO planning efforts with the laboratories fulfill the program review requirements in the WFO order. Specifically, each of the Office of Science laboratories develops a plan that contains a section that summarizes the laboratory’s WFO portfolio and discusses the WFO strategy for the future. DOE Office of Science officials told us that, as part of this process, each federal site office manager provides a review of the laboratory’s ongoing WFO program operations and proposed WFO funding level that includes a statement about the adequacy of the laboratory’s management and oversight of WFO activities. However, these plans were primarily focused on planning for WFO project work to be performed in the future and were not reviews to ensure compliance with WFO policies and procedures. NNSA officials told us that they had performed annual reviews of their WFO program since 2008; however, in response to our request for information, they provided only one briefing, from 2012, that focused on improving WFO agreement processing times.
Program officials also conducted reviews of WFO policies and procedures at 6 of the 16 site offices. However, officials from the Office of Science told us that these reviews were not conducted to satisfy the annual review requirement in the WFO order because they focused on the site offices’ WFO policies and procedures rather than the laboratories’ policies and procedures. Specifically, as of May 2013, Office of Science program officials had conducted a total of six reviews—one at each of 6 site offices—since 2008. These generally consisted of a review of WFO program policy and procedure documents at the site offices, including flow charts, forms, and correspondence. Each of the six reviews identified areas of concern within the WFO programs. For example, one program office review reported that the laboratory was dependent on funds from a single WFO project to support a major DOE program. The review cautioned that if the WFO project was discontinued, it could create a detrimental financial burden on the laboratory in the future, a situation that DOE requirements direct laboratories to avoid. In addition, four of the reviews found deficiencies related to WFO program documentation or procedures. For example, one review reported that a DOE site office WFO program procedural guide was out of date and did not reflect all relevant DOE policy requirements or current DOE site office operating procedures. Although the reviews were not conducted to satisfy the annual review requirement in the WFO order, they appeared to include useful information about the WFO program that was not included in the annual laboratory planning processes, which were focused on future WFO efforts. No program office reviews were conducted at the other 10 site offices.

DOE’s WFO order also requires that the DOE headquarters CFO prepare an annual summary report of WFO activities performed at its laboratories.
DOE headquarters officials told us they have not produced this report in the past several years because the requirement to produce it is an outdated requirement that was put into place to facilitate data collection before the implementation of DOE’s current financial system. The officials said that they plan to eliminate this requirement. They also said that they have made better use of their limited program resources by choosing to fulfill requests for WFO project data from Congress and others on a case-by-case basis. For example, in the Conference Report to accompany the Energy and Water Development and Related Agencies Appropriations Act of 2010, DOE was directed to submit an annual report on the status of its WFO activities. DOE officials provided the requested information on WFO activities; however, this information might have already been available if DOE had prepared the annual summary report as required by the WFO order. In addition, other DOE programs regularly need and collect WFO project data that could be provided by the annual summary report if it had been prepared in accordance with the order. For example, DOE’s technology transfer coordinator has been collecting data on nonfederal WFO project activities from each of the laboratories since 2001 for inclusion in the Department of Commerce’s annual report on technology transfer. Choosing to report data on a case-by-case basis, rather than in an annual report, may make it difficult for those providing oversight and for some users of the data. Because the data are not readily available, DOE must generate them on request, which is time-consuming, and, depending on how the data are generated, they may not be comparable across laboratories or over time.

DOE has not measured the extent to which WFO program objectives are being met, even though DOE site offices are required under the WFO order to measure their laboratories’ WFO program performance.
Some DOE site offices and laboratories have taken steps to evaluate WFO program processes, but these steps are not consistent across the laboratories, generally do not address the program objectives in the WFO order, and do not incorporate key attributes of successful performance measures. DOE’s WFO order requires that DOE site offices establish performance goals and measures to assess field performance of the WFO program at the laboratories they oversee, including the effectiveness and impact of WFO program processes and improvements. Moreover, in 2011, the President directed each agency with a federal laboratory to establish performance goals, measures, and evaluation methods related to technology transfer. Although the presidential memo does not specifically mention the WFO program, an objective of the program is to transfer technology originating at DOE facilities to industry for further development or commercialization. We found that, although some DOE site offices and laboratories have taken steps to evaluate the performance of the WFO program, these steps do not directly address the WFO program objectives. Moreover, DOE site offices’ and laboratories’ efforts to evaluate the WFO programs are focused on reviewing processes, including tracking the number of WFO agreements and improving the timeliness of project selection and approval, rather than on developing goals and measures for assessing performance against WFO program objectives. According to our discussions with officials from DOE headquarters and site offices and laboratory representatives, efforts to evaluate the performance of the WFO program at the laboratories include the following:

Assessing customer satisfaction or other qualitative measures. Officials from 4 of the 17 laboratories told us that they have some mechanisms in place to collect qualitative information about WFO projects, such as customer satisfaction. This information is shared with DOE site officials.
For example, officials at one laboratory send surveys to sponsors to assess customer satisfaction after the completion of WFO projects. Another laboratory reports to DOE on success stories about WFO projects once they are complete.

Tracking the number of WFO agreements. Officials from 6 of the 17 laboratories told us that they track the amount of or have set targets related to the number of WFO agreements in place.

Evaluating WFO agreement processing times. Officials from 6 of the 17 laboratories told us that they have goals to streamline the WFO agreement process, as measured by the time it takes from initiation to approval of a WFO agreement. In addition, in 2009 DOE commissioned a study of the time it takes to process WFO agreements across the laboratories and identified best practices for streamlining that process. Moreover, in 2011 NNSA commissioned a similar study of processing time for WFO and other interagency agreements at the 3 laboratories that it oversees.

We have previously reported on the nine attributes most often associated with successful performance measures, which are summarized in table 2. Our analysis shows, however, that the steps DOE and the laboratories have taken to evaluate performance did not include some of these key attributes. For example, while customer satisfaction surveys or other qualitative measures, such as success stories gathered by the laboratories, may provide some useful information and indicate areas for improvement, customer satisfaction is not included in the WFO order as a program goal. Furthermore, the performance measures do not directly link with the WFO program goals or objectives, such as providing federal agencies and nonfederal entities access to DOE laboratories to accomplish goals that may otherwise be unattainable. Without such linkage, DOE and decision makers may not have the information needed to track the program’s progress in meeting its objectives.
Additionally, some WFO qualitative measures such as customer satisfaction may lack clarity and a measurable target, making it difficult to compare performance across laboratories. These types of qualitative measures also may not meet the key attribute of objectivity due to the potential for bias or other manipulation, depending on how the information is gathered and assessed. Other efforts to measure the performance of the program—specifically, the number of WFO agreements in place and WFO agreement processing time—both provide some helpful information but do not include all key attributes of successful performance measures. For example, tracking the number of agreements is clear and measurable and provides some information about the number of WFO projects at a laboratory. However, without linkage to the program’s objectives, measuring the number of agreements in place does not capture the program’s effectiveness in meeting the objectives laid out in the WFO order, such as maintaining core competencies and enhancing the science and technology base at the laboratories. As we have reported, performance measures provide organizations with the ability to track the progress they are making toward their mission and objectives. We also have found that performance measures can create powerful incentives to influence organizational and individual behavior. Our analysis indicates that site office and laboratory performance measures lack these key attributes. In addition, DOE headquarters officials told us that there are no performance measures to assess the WFO program’s performance against WFO program objectives across all laboratories. The officials said that this is because the WFO program is decentralized, and laboratories are managed individually.

Recent external and internal reviews of the DOE laboratories have recommended that clear performance measures are needed to assess laboratory WFO program performance against the WFO program objectives.
In January 2013, the National Academy of Public Administration (NAPA) reported that while DOE officials at the laboratories have measures to assess DOE-funded work, these measures do not always apply to non-DOE-funded work such as work performed under the WFO program. NAPA also noted that DOE’s decentralized approach to managing the WFO program raised questions about DOE’s ability to oversee the program as a whole. NAPA recommended that DOE include measures for WFO work performed in its evaluations of laboratory performance. Similarly, in 2012, a working group set up by DOE headquarters to oversee efforts at the laboratories to share technology with non-DOE entities reported concerns that measures did not exist to evaluate the impact and success of WFO work activities in achieving program objectives. DOE officials told us that, because WFO agreements are unique to each laboratory, they do not believe it is appropriate to develop one set of measures for all laboratories and that they have no plans to do so. However, without measures that apply to all laboratories, it is difficult to compare performance across laboratories in meeting overall program objectives.

NAPA, U.S. Department of Energy: Positioning DOE’s Labs for the Future: A Review of DOE’s Management and Oversight of the National Laboratories (Washington, D.C.: January 2013). NAPA was established in 1967 and chartered by Congress as an independent, nonpartisan organization to evaluate, analyze, and make recommendations on the nation’s most critical and complex public management, governance, policy, and operational challenges.

DOE laboratories’ highly specialized facilities, cutting-edge technologies, and highly trained scientists, technicians, and other staff represent a significant investment of public funds. DOE’s WFO program has allowed the department to share these capabilities with both other federal agencies and nonfederal entities.
DOE has established WFO program requirements—including for project approval, cost recovery, program reporting, and program review—that together are intended to help DOE operate a successful WFO program and avoid adverse impacts on the laboratories’ missions and facilities and to avoid competition with the private sector. DOE falls short, however, in ensuring that these requirements are consistently met. For example, DOE has frequently relied on the laboratories to determine whether WFO projects selected meet the requirements of the WFO order, and DOE officials have accepted the laboratories’ determinations without taking steps to independently verify these determinations. Ensuring that the WFO projects selected meet DOE requirements is a governmental responsibility and, according to the WFO order, a DOE contracting officer or other authorized DOE designee is required to determine whether a proposed WFO project has met all of these requirements before approving the work. By relying instead on the laboratories to make these determinations, DOE cannot ensure that all WFO projects meet requirements. DOE may also not be able to ensure that the costs of WFO projects are recovered according to its pricing order because it has not required the laboratories to establish written procedures to guide development of project budgets or charging of costs to projects, important steps for determining and recovering the full costs of WFO projects’ materials and services. The department’s CFO also does not have assurance that the full costs of WFO projects are charged to the projects’ sponsors because field CFOs do not always conduct biennial pricing reviews according to requirements. Furthermore, the WFO order does not specify what should be included in the annual WFO program reviews required by the order. Without clear and specific requirements, it may be difficult for DOE to identify WFO program deficiencies, if any. 
In addition, DOE officials told us that the requirement to prepare an annual summary report of WFO activities performed at its laboratories is outdated and has not been followed for several years, and that the department plans to eliminate this requirement. However, members of Congress have directed DOE to provide information on the status of WFO activities, and other DOE programs regularly need and collect WFO project data that would be available if DOE prepared the annual summary report. Choosing to report data on a case-by-case basis, rather than in an annual report, may make it difficult for those providing oversight and for some users of the data: because the data are not readily available, DOE will need to generate them, which is time-consuming, and the data may not be comparable across laboratories or over time. Furthermore, while some site offices have made efforts to evaluate the performance of the WFO program, these efforts do not always incorporate key attributes of successful performance measures, such as being quantifiable or having a numerical goal. Moreover, these efforts generally do not directly address the objectives of the WFO program. Without better measures to evaluate program performance, DOE and decision makers will not have the information needed to track the program’s progress in meeting its objectives.

To improve DOE’s management and oversight of the WFO program, we recommend that the Secretary of Energy take the following six actions:

Ensure compliance with the requirements in the WFO order for project approval.

Require laboratories to establish and follow written procedures for developing WFO project budgets and for charging costs to WFO projects.

Ensure compliance with the requirements for conducting biennial pricing reviews.

Specify in the WFO order what the annual WFO program reviews should include.
Ensure that annual summary reports of WFO activities are prepared so that data on those activities are readily available for those who need this information.

Establish performance measures that incorporate key attributes of successful performance measures and that address the objectives of the WFO program.

We provided DOE with a draft of this report for its review and comment. In written comments, reproduced in appendix IV, DOE stated that it concurred with the recommendations in the report and provided information on planned actions to address each recommendation. We believe, however, that many of the proposed actions, while good first steps, fall short of our recommendations and may not fully address the issues we discussed in our report. For example, in response to the first three recommendations (i.e., ensure compliance with the requirements in the WFO order for project approval; require laboratories to establish and follow written procedures for developing WFO project budgets and for charging costs to WFO projects; and ensure compliance with the requirements for conducting biennial pricing reviews), DOE stated that it will issue a policy flash, or notice, on the requirements of the WFO program. While reminding staff of the requirements of the WFO order would likely be beneficial, to improve its management and oversight of the WFO program, DOE also needs to take steps to ensure that these requirements are consistently met by periodically monitoring the processes for project approval, full cost recovery of projects, and biennial pricing reviews. DOE also stated that it concurred in principle with the fourth recommendation to specify in the WFO order what the required annual program reviews should include. DOE added, however, that the current WFO order appropriately provides discretion to DOE program offices in determining the scope and extent of WFO program reviews.
Again, DOE states that it will issue a policy flash reminding the program offices of the annual review requirement for the WFO program. DOE’s planned action, however, does not directly address our recommendation to update the WFO order with specific requirements for these reviews. Likewise, DOE stated that it concurred in principle with the fifth recommendation that it ensure that annual summary reports of WFO activities are prepared so that data on those activities are readily available for those who need this information. However, rather than preparing annual summary reports, DOE stated that the WFO order requires revision to reflect its current practice of providing current WFO program summary information. As we said in our report, choosing to report data on a case-by-case basis rather than in an annual report may make it difficult for those providing oversight, and for some users of the data, because the data are not readily available and DOE will need to generate them, which is time-consuming, and they may not be comparable across laboratories or over time. Finally, in response to the sixth recommendation to establish performance measures that incorporate key attributes of successful performance measures and that address the objectives of the WFO program, DOE plans to issue a policy flash on the current requirements of the WFO program related to program assessments. As we discussed in our report, however, the WFO order does not include specific requirements for program assessments, and a policy flash that repeats the current requirements for program assessments would therefore not address the recommendation. DOE also provided clarifying comments that we incorporated, as appropriate. In particular, DOE requested that we consider its laboratory contractors’ Cost Accounting Standards (CAS) disclosure statements describing their cost accounting practices and procedures as written procedures for charging costs to ongoing projects.
We have added information to our report about the CAS disclosure statements. While these statements describe how the laboratory plans to allocate costs, we do not agree that these disclosure statements constitute procedural guidance for developing project budgets and charging costs to projects. Moreover, as we point out in the report, we found that several laboratories have developed detailed written procedures for developing project budgets and charging costs to projects, which would be unnecessary if the disclosure statements were sufficient as guidance.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

To determine the amount and type of work conducted at the laboratories under the Department of Energy’s (DOE) Work for Others (WFO) program, we requested and obtained DOE data on WFO costs from DOE’s Office of the Chief Financial Officer (CFO) for fiscal years 2008 through 2012, including WFO program costs by sponsor and by laboratory, as measured in costs incurred. We also requested and obtained total costs incurred for each laboratory for these same fiscal years.
We selected costs incurred as the measure of the amount of WFO projects performed for our purposes because WFO agreements can span multiple years and these costs are similar to the costs presented in reports by DOE and its Office of Inspector General. We used data for fiscal years 2008 through 2012 because our last report on DOE WFO projects reported data through fiscal year 2008. We interviewed DOE officials who oversee and collect these cost data for DOE’s Office of the CFO and determined that these data were obtained from DOE’s Standard Accounting and Reporting System (STARS) financial reporting system. To assess the reliability of DOE’s WFO project cost data and total laboratory costs, we reviewed information about DOE’s STARS financial reporting system, as well as a recent external audit of the STARS system and DOE’s financial data. We also reviewed recent GAO assessments of the STARS system. No material weaknesses were reported. For data verification purposes, we obtained fiscal year 2012 cost data on WFO projects from each laboratory and compared those data with the fiscal year 2012 cost data on WFO projects obtained from the DOE CFO. Although we identified some differences between the data, through discussions with the DOE CFO, we determined that those differences were the result of variances in the cost elements that were included in the information provided by each laboratory. Based on our assessment, we concluded that the data obtained from the DOE CFO were sufficiently reliable to describe the dollar amount and percentage of WFO projects performed at the laboratories. To determine the number of WFO projects that were active during fiscal year 2012, we used information from the fiscal year 2012 WFO project lists provided by each laboratory, as the DOE CFO does not have the capability to identify individual nonfederal projects. 
To determine the type of WFO projects performed at DOE’s laboratories, we gathered data on the costs of WFO activities performed for federal versus nonfederal sponsors. We also reviewed the lists of WFO projects provided by each laboratory, conducted interviews with DOE headquarters officials, and conducted structured interviews with DOE site officials and officials for the 17 laboratories to gather more information about examples of WFO projects performed for one federal and one nonfederal WFO sponsor from each laboratory with activity during fiscal year 2012. We judgmentally selected three laboratories to visit, including the National Nuclear Security Administration’s (NNSA) Sandia National Laboratories in Albuquerque, New Mexico, the DOE laboratory with the largest dollar amount of WFO project work conducted in fiscal year 2012. We also visited NNSA’s Los Alamos National Laboratory in Los Alamos, New Mexico, due to its size and proximity to Sandia National Laboratories, and Pacific Northwest National Laboratory, a larger Office of Science laboratory located in Richland, Washington. We contacted officials at the remaining 14 laboratories by phone. To determine the extent to which DOE ensures that WFO program requirements are met, we reviewed federal regulations, DOE requirements, and DOE and laboratories’ policies and procedures governing the WFO program, including for selection and approval of WFO projects and for cost recovery—establishing project budgets and charging costs to ongoing projects—for WFO projects. We discussed these policies and procedures and how they are carried out in practice in our structured interviews with DOE site office and laboratory officials responsible for the WFO program at each laboratory. We reviewed and analyzed biennial pricing reviews conducted by the DOE field CFOs responsible for oversight of each laboratory covering fiscal years 2010 and 2011, the most recent data available. 
We discussed procedures for review and approval of WFO project proposals in our structured interviews with DOE site office and laboratory officials. We discussed DOE requirements for WFO project pricing, pricing reviews, and review results with officials from DOE’s CFO offices at headquarters and the seven DOE field CFOs that oversee the laboratories. We also reviewed DOE program office and site office review reports on laboratory WFO programs that were conducted from calendar years 2008 through 2012 to identify findings related to the WFO programs. For additional information applicable to all of our objectives, we reviewed external reports on the WFO program, including several reports issued from calendar years 2009 through 2013 by the DOE Office of Inspector General, a December 2010 report issued by the Department of Defense Office of Inspector General, a February 2012 report issued by the National Research Council, and a January 2013 report issued by the National Academy of Public Administration. We contacted officials from these external entities to discuss their reports. To determine the extent to which DOE and the laboratories have measured WFO program performance against WFO program objectives, we reviewed DOE orders, performance requirements in DOE’s contracts with laboratory management and operating contractors, DOE evaluations of laboratories’ performance, and laboratories’ strategic plans. We also reviewed policies and procedures for establishing and for executing WFO projects at each laboratory. We discussed laboratories’ strategies and goals for their WFO programs and WFO program performance measurements with DOE headquarters officials, in our structured interviews with DOE site office officials, and with officials from each of the laboratories.

[Appendix table: primary research areas of the DOE national laboratories. Most laboratory names and locations did not survive extraction; the research areas listed were:]
- Rare earths and other critical materials, applied energy, fossil energy, and nonproliferation programs
- Physical, energy, environmental, and life sciences; energy technologies and national security
- Experimental and theoretical particle physics, astrophysics, and accelerator science
- Sustainable energy and national and homeland security
- Particle and nuclear physics; physical, chemical, computational, biological, and environmental systems
- National defense, nuclear weapons stockpile stewardship, weapons of mass destruction, and nuclear nonproliferation
- Los Alamos, NM: national defense, nuclear weapons stockpile stewardship, weapons of mass destruction, and nuclear nonproliferation
- Environmental stewardship, clean energy
- Renewable energy and energy efficiency research
- Neutron scattering, advanced materials, high-performance computing, and nuclear science and engineering
- Electricity management, sustainability, threat detection and reduction, in situ chemical imaging and analysis, simulation and analytics
- Plasma and fusion energy sciences
- Sandia National Laboratories, Albuquerque, NM: national defense, weapons of mass destruction, transportation, energy, telecommunications, and financial networks, and environmental stewardship
- Environmental stewardship, national and homeland security, clean energy
- Menlo Park, CA: materials, chemical and energy science, structural biology, and particle physics and astrophysics
- Fundamental nature of confined states of quarks, gluons, and nucleons; superconducting radio-frequency technology

Appendix III: Department of Energy National Laboratory Work for Others Costs and Total Costs, as Measured in Costs Incurred, in Fiscal Year 2012. [Table of each laboratory’s total Work for Others (WFO) costs and total costs; only the column heading survives in this extraction.]

In addition to the individual named above, Dan Feehan and Janet Frisch, Assistant Directors; Joseph Cook; Elizabeth Curda; Paul Kinney; Jeff Larson; Cynthia Norris; Josie Ostrander; Kathy Pedalino; Tim Persons; Cheryl Peterson; Carl Ramirez; and Kiki Theodoropoulos made key contributions to this report. 
DOE's 17 national laboratories house cutting-edge scientific facilities and equipment, ranging from high-performance computers to ultra-bright X-ray sources for investigating fundamental properties of materials. DOE allows the capabilities of the laboratories to be made available to perform work for other federal agencies and nonfederal entities through its WFO program, provided that the work does not hinder DOE's mission or compete with the private sector, among other things. GAO was asked to review the WFO program. GAO examined (1) the amount and type of work conducted under the program, (2) the extent to which DOE has ensured that WFO program requirements are met, and (3) the extent to which program performance is measured against WFO program objectives. GAO reviewed DOE and laboratory data and documents, reviewed internal and external review reports, and interviewed officials from DOE and the laboratories. In fiscal years 2008 through 2012, the Department of Energy (DOE) performed about $2 billion of Work for Others (WFO) projects annually, as measured by the costs incurred. Although the overall amount of WFO performed has remained relatively constant over the last 5 years, WFO as a percentage of the total work performed at the laboratories--measured in total laboratory costs incurred--has declined from 17 percent in fiscal year 2008 to about 13 percent in fiscal year 2012. In fiscal year 2012, the WFO program included more than 6,500 projects. About 88 percent of this work was for other federal agencies, with the majority of it performed for the Department of Defense. For example, one project for the Army applies a laboratory's expertise in laser decontamination of surfaces to develop a system that will remove chemical agent residues from equipment. The remaining WFO work was sponsored by nonfederal entities, including state and local governments, universities, private industry, and foreign entities. 
DOE officials have not ensured that WFO program requirements are consistently met. For example, a DOE official is required to determine whether a proposed WFO project has met DOE requirements for accepting work before approving, or certifying, the work, and this responsibility may not be delegated to the laboratories. However, DOE officials from site offices at 8 of the 17 laboratories reported that these determinations were made by the laboratories and that the DOE officials did not take steps to independently verify the determinations prior to approving the work. DOE also cannot ensure that the full costs of materials and services for WFO projects are charged to sponsors because 12 of 17 laboratories have limited or no written procedures for developing WFO project budgets or charging costs to ongoing projects, two important steps for recovering the full costs of materials and services. A 2013 DOE Office of Inspector General report found that the costs of administering WFO projects at one laboratory were allocated to DOE projects, resulting in an estimated $400,000 in WFO project costs that were not reimbursed to the laboratory. DOE requires that its program offices annually review the WFO program at each of its laboratories. However, DOE requirements do not specify what the reviews should include, and DOE program offices varied in what they consider to be an annual review. DOE also requires the department's Chief Financial Officer to report annually on the activities conducted under the WFO program, but DOE officials told GAO that they no longer produce the report because the requirement is outdated, choosing instead to fulfill data requests on a case-by-case basis. As a result, DOE does not have data that are comparable across laboratories or over time. DOE has not measured WFO program performance against the program's objectives and has not established performance measures to do so. 
Some DOE site offices and laboratories have taken steps to evaluate the performance of the WFO program, but these steps are not consistent across the laboratories, do not incorporate key attributes of successful performance measures, and do not address the WFO program objectives. Recent internal and external reviews of the laboratories have recommended that DOE establish clear measures to evaluate laboratory WFO program performance against the WFO program objectives. DOE officials told GAO that they do not believe it is appropriate to develop one set of measures for all laboratories and that they do not plan to do so. GAO recommends, among other things, that DOE take steps to ensure compliance with project approval requirements; require laboratories to establish written procedures for charging costs to projects; specify what the annual program reviews should include; produce annual reports on WFO activities; and establish performance measures for the WFO program. DOE generally agreed with the report and its recommendations.
Our objective was to assess IRS’ performance during the 1995 filing season. To achieve our objective, we interviewed IRS National Office officials and IRS officials in the Atlanta, Austin, Cincinnati, Fresno, Kansas City, Memphis, and Ogden service centers responsible for the various activities we assessed; tested the accessibility of IRS’ toll-free telephone assistance and forms-ordering telephone lines by placing calls from Atlanta, Chicago, Cincinnati, Kansas City, New York, San Francisco, and Washington, D.C. (appendix III contains more information on our test methodology); analyzed filing season-related data from various IRS sources, including data on telephone accessibility, return filings, return processing errors, refund fraud, and the results of steps IRS took in 1995 to address the fraud problem; reviewed IRS publications, notices, and forms to determine what taxpayers were told about potential refund delays; reviewed reports on computer system performance and attended weekly meetings on computer system performance held by IRS’ National Office Command Center; and reviewed relevant IRS internal audit reports. We did our work from January through September 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designee. On November 13, 1995, several IRS officials, including the Assistant Commissioner for Taxpayer Services, Director of Investigations (Tax Refund Fraud), Electronic Filing Executive, Director of Tax Forms and Publications, and Senior Program Analyst (Submission Processing), provided us with oral comments. Another IRS official, the Chief of Taxpayer Services, provided additional comments on November 30. IRS’ comments are summarized and evaluated on page 18 and incorporated in this report where appropriate. IRS uses various indicators to measure its filing season performance. 
Because IRS’ most important job during the filing season is to process tax returns, two important workload indicators are (1) the number of individual income tax returns received in total and (2) the number of returns received through alternative filing methods that IRS developed to help make the returns processing function more efficient. According to IRS data, more individual income tax returns were received in 1995 than in 1994, but the number received through the alternative filing methods decreased. Among other IRS filing season indicators are those related to workload, such as the percent of scheduled tax assistance calls answered; timeliness, such as the number of days needed to process returns or issue refunds; and quality, such as the accuracy of IRS’ answers to taxpayer questions and IRS’ processing of returns and refunds. Those indicators show that IRS met or exceeded most of its performance goals for the 1995 filing season. According to IRS data, for example, (1) IRS’ telephone assistors answered 11 percent more calls than IRS anticipated and provided accurate answers to 91 percent of taxpayers’ tax law questions; (2) 97 percent of taxpayers’ orders for tax forms and publications were filled accurately; (3) on average, refunds on paper returns were processed and issued within 36 days; and (4) service centers met deadlines for processing tax payments submitted with returns. What IRS’ indicators do not reveal are the difficulties IRS experienced in the 1995 filing season. There were several serious problems not obvious from the indicators: (1) IRS’ efforts to combat refund fraud took millions of taxpayers by surprise when closer scrutiny of their returns resulted in refunds being delayed; (2) most taxpayers who called IRS to ask questions could not get through; and (3) IRS’ implementation of a new tax return processing system fell far short of expectations. 
With one exception, as shown in figure 1, the number of individual income tax returns filed has increased every year since fiscal year 1987. While there was an increase in the overall number of returns filed in 1995, the number received through alternative filing methods declined. IRS offers three alternatives—electronic filing, TeleFile, and 1040PC—to the traditional filing of paper returns. As shown in table 1, the use of electronic filing and 1040PC decreased in 1995 compared with 1994, while the use of TeleFile increased. IRS attributes the drop in electronic filing to the various steps it took in 1995 to deal with refund fraud. One of those steps was to eliminate the direct deposit indicator (DDI), which lenders had used to help judge whether a taxpayer’s refund would actually be issued before making a refund anticipation loan. Because eliminating the DDI increased the risk of making such loans, lenders cut their maximum loan amounts and raised their fees. Some potential electronic filers may have decided to file on paper when they found they were unable to get a refund anticipation loan or were unwilling to pay the additional fee. Other steps IRS took to deal with fraud, some of which may have also contributed to the decline in electronic filing, are discussed later. The decline in 1040PCs resulted from the withdrawal of a private tax preparation firm that had been the largest user of 1040PCs. For the 1995 filing season, IRS required that preparers provide taxpayers with some type of descriptive printout or legend that explained each line on the taxpayer’s 1040PC return. The purpose of the legend was to provide better supporting documentation than was previously available to taxpayers, as an aid in tasks such as preparing state returns and completing financial aid forms. According to an official of the tax preparation firm that dropped out of the program, the firm chose to stop participating rather than incur the extra cost associated with providing the legend. 
The growth in the third alternative, TeleFile, was due in part to its availability to more taxpayers. In 1995, TeleFile was available to certain taxpayers in 10 states—3 more states than in 1994. Some of the growth might also be due to improved accessibility. As we discussed in our report on the 1994 filing season, IRS experienced an overload of the TeleFile system in 1994, and taxpayer accessibility might have been higher had the system been able to handle the number of calls. For the 1995 filing season, IRS took several steps that increased accessibility. For example, IRS increased the number of telephone lines from 144 to 336 and stopped testing the use of voice signatures, which shortened the length of calls. In August 1995, IRS’ Internal Audit reported that the number of busy signals received by taxpayers trying to use TeleFile in 1995 decreased dramatically from 1994 and that 87 percent of the taxpayers using TeleFile in 1995 were able to access the system on the first attempt. As a possible result of that improved accessibility, the number of TeleFile filers in each of the seven states that were involved in the program in 1994 increased in 1995. As shown in table 2, IRS met or exceeded almost all of its other performance goals for 1995. We did not assess the overall appropriateness of those goals. However, as discussed in the next section, the indicators for refund timeliness, number of tax assistance calls answered, and returns processing cycle time masked serious problems that occurred in 1995. As shown in tables 3 and 4, the number of returns identified by IRS as containing fraudulent refund claims and the amount of identified fraudulent refunds that were issued before IRS could stop them have increased significantly since 1990. Because of concerns raised in several of our past products and in congressional hearings about those increases, IRS placed more emphasis on reducing fraud in 1995. 
In addition to eliminating the DDI, discussed earlier, those steps included closer scrutiny of SSNs and of refunds involving the EIC—problem areas that IRS had identified in the past. IRS’ efforts generated much adverse publicity when over 7 million taxpayers had their refunds delayed for many weeks. Although IRS’ decision seems prudent because of the level of possible fraud involved, it seems that IRS could have prevented some of the adverse reaction to those delays if it had done a better job of forewarning taxpayers. On a related matter, the methodology IRS used to measure refund timeliness in 1995 was flawed, in our opinion, because it excluded those refunds that were delayed. Inclusion of those refunds most likely would have increased the average refund time beyond the 36 days reported by IRS. In 1995, to better ensure the appropriateness of refund claims, IRS increased its efforts to verify the accuracy of SSNs on tax returns. When IRS received a paper return with a missing SSN or an invalid SSN (i.e., one that does not match the Social Security Administration’s records), it delayed the refund and, depending on the circumstances, contacted the taxpayer in an attempt to resolve the problem. IRS delayed refunds for up to 8 weeks on other returns (both paper and electronic), even if the returns had no missing or invalid SSNs, to allow staff time to identify duplicate uses of the same SSN and fraud schemes. Because most of the refund fraud cases IRS identified in the past involved the EIC (about 90 percent of the cases identified in 1994, for example), IRS concentrated these efforts on returns claiming the EIC. Because the delay only applied to that part of the refund attributable to the EIC, some taxpayers received two checks—one for the non-EIC part of their refund and a second, several weeks later, for the rest, assuming IRS determined that the EIC claim was valid. 
IRS added filters to the electronic filing system to prevent returns with missing or invalid SSNs or with SSNs that already had been used by another taxpayer from being filed electronically. As of the end of May 1995, IRS had (1) notified about 3 million taxpayers whose returns had missing or invalid SSNs that their refunds were being delayed, (2) delayed another 4 million refunds to allow time to check for duplicate SSN use and fraudulent returns, and (3) sent out about 4 million reject notices from the electronic filing system because it had identified a missing, invalid, or duplicate SSN. IRS warned taxpayers that their refunds could be delayed if they submitted a return with a missing or incorrect SSN. On the cover of the instructions accompanying Form 1040, for example, IRS warned taxpayers to check their SSNs and explained that “incorrect or missing SSNs for you, your spouse, or dependents may delay your refund.” It then referred the reader elsewhere in the instructions for details on how to get an SSN. IRS also issued several public service announcements to alert taxpayers to the need for correct SSNs. However, IRS did not do very much to warn taxpayers that their refunds might also be delayed even if their SSNs were correct. The only warning in the Form 1040 tax package or Publication 17 (Your Federal Income Tax)—the two IRS documents that most taxpayers would rely on for such information—was a statement in both documents that alerted potential electronic filers that “some refunds may be temporarily delayed as a result of compliance reviews” to ensure that the returns are accurate. Taxpayers who did not intend to file electronically—about 90 percent of the filers—were not told anything. 
Also, by advising only potential electronic filers of possible “compliance reviews,” IRS might have given the impression that electronically filed returns are more subject to audit than paper returns—not the kind of message that would help expand the use of electronic filing. Conversely, IRS prominently displayed, in both the Form 1040 tax package and Publication 17, its customer-service standards for 1995. One of those standards says, “If you file a complete and accurate tax return and you are due a refund, your refund will be issued within 40 days if you file a paper return or within 21 days if you file electronically.” Thus, not only were most taxpayers not told that their refunds might be delayed even if they filed a valid return, but they were led to believe the opposite by IRS’ customer-service standard. The refund delays generated much adverse reaction. Numerous news articles during the filing season cited criticism from taxpayers, executives of tax preparation services, an industry lobbying organization, and members of Congress commenting on the problems they observed during the 1995 filing season. In July 1995, IRS’ Internal Audit reported that it had advised management in December 1994 of its concerns about IRS’ decision not to publicize the potential delay of EIC refunds. Internal Audit said that IRS “could have jeopardized the public’s trust and confidence” and that “those who had already filed may have felt confused, misled, disillusioned, and perhaps angry.” Internal Audit also said that advance publicity about delaying refunds might also have deterred some unscrupulous filers. We can understand IRS not wanting to disclose the details of its plans, but we fail to see how any harm would have been caused by simply alerting taxpayers to the possibility that their refunds might be delayed even if there were no problems with their SSNs. IRS’ customer-service standard for issuing refunds from returns filed on paper is 40 days. 
To track its success in meeting that standard, one of IRS’ filing season indicators is “refund timeliness.” To measure refund timeliness, IRS takes several samples of paper returns involving refunds and computes the elapsed time from the date the taxpayer signed the return to the date the taxpayer would have received the refund, allowing 2 days after issuance for the refund to reach the taxpayer. IRS’ results for the 1995 filing season indicated that refunds on paper returns were issued in an average of 36 days—the same as in 1994 and 4 days quicker than IRS’ goal. That result is misleading, however, because IRS excluded from the computation the over 7 million refunds that were delayed because of IRS’ fraud checks. Because IRS’ customer-service standard is predicated on the filing of a complete and accurate return, we agree that IRS should have excluded from its computation those refunds that were delayed because of missing or invalid SSNs (about 3 million of the 7 million delayed refunds). However, IRS did not identify any problems with the SSNs associated with about 4 million delayed refunds, and those refunds were eventually issued. Thus, consistent with IRS’ standard, those refunds should have been included in the computation of refund timeliness. Using IRS data on the number of refunds in its refund timeliness samples and the number of refunds excluded from the samples—assuming that each of the excluded refunds was delayed 8 weeks, thus taking 56 more days to issue than the 36-day average—we determined that inclusion of the excluded refunds would have increased the average to 38 days. Such a result, in our opinion, would have more accurately shown a drop in performance from the 36-day average achieved in 1994. An important indicator of filing season performance is how easily taxpayers who have questions or who want to order forms and publications are able to contact an IRS assistor on the telephone. 
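The recomputation described above is a weighted average of on-time and delayed refunds. A minimal sketch of the arithmetic, using illustrative counts (the 4 million delayed refunds, 36-day average, and 56-day additional delay are from the report; the 108 million on-time count is a hypothetical value chosen so the arithmetic reproduces the 38-day result, not an actual IRS figure):

```python
def blended_refund_days(on_time_count, on_time_avg_days, delayed_count, extra_delay_days):
    """Average days to issue a refund once delayed refunds are added back in.

    Each delayed refund is assumed to take extra_delay_days longer than the
    on-time average (the report assumes an 8-week, or 56-day, delay).
    """
    delayed_avg_days = on_time_avg_days + extra_delay_days
    total = on_time_count + delayed_count
    return (on_time_count * on_time_avg_days
            + delayed_count * delayed_avg_days) / total

# Hypothetical on-time count of 108 million; delayed count and delay from the report.
average = blended_refund_days(108e6, 36, 4e6, 56)
print(round(average, 1))  # 38.0
```

The point of the sketch is simply that even a few million refunds delayed 56 extra days are enough to pull the overall average from 36 days up to about 38 days.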
IRS assesses its performance in that area by estimating the number of calls it expects to answer during the filing season (known as “scheduled calls”) and comparing that number with the number of calls it actually answered. For the 1995 filing season, IRS answered 111 percent of the scheduled calls to its toll-free tax assistance telephone line and 98 percent of the scheduled calls to its toll-free forms ordering line. Because IRS’ indicator is based on the number of calls IRS expects to answer rather than the number it expects to receive, the indicator masks the serious problems taxpayers have encountered in the past and encountered again in 1995 in trying to reach IRS by telephone. In reports on past filing seasons, we discussed the difficulty taxpayers had in reaching IRS by telephone (i.e., the “accessibility” of IRS’ telephone systems). Although IRS answers millions of calls each year, even more calls go unanswered. Many taxpayers receive busy signals, are kept on hold for a long time, or simply give up. Between January 1 and April 15, 1995, IRS received 236 million calls for tax assistance but was able to answer only 19 million of those calls. Our most recent report on telephone assistance accessibility offers several recommendations to improve IRS’ ability to answer more taxpayer calls. To determine whether accessibility was a problem during the 1995 filing season, we conducted two tests. One test was to determine the accessibility of the toll-free assistance for taxpayers who have questions about their accounts, the tax law, or IRS procedures. The second test was to determine the accessibility of the toll-free system that IRS tells taxpayers to call if they want copies of tax forms and publications. Our test methodology is described in appendix III along with (1) details on the results of our tests and (2) our computations of accessibility using more global IRS data. 
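The gap between the two ways of measuring telephone service can be seen directly in the figures above. A small sketch, where the 236 million calls received and 19 million answered are from the report, and the scheduled-call count is a hypothetical value implied by the reported 111-percent result:

```python
def pct(numerator, denominator):
    """Percentage, rounded to the nearest whole percent."""
    return round(100 * numerator / denominator)

calls_received = 236e6    # calls for tax assistance, Jan. 1 - Apr. 15, 1995
calls_answered = 19e6     # calls IRS was able to answer
calls_scheduled = 17.1e6  # hypothetical: calls IRS expected to answer

# IRS' indicator compares answered calls with scheduled calls...
print(pct(calls_answered, calls_scheduled))  # 111
# ...while accessibility compares answered calls with incoming calls.
print(pct(calls_answered, calls_received))   # 8
```

The same filing season can thus be reported as 111 percent of scheduled calls answered or as roughly 8 percent of incoming calls answered, which is why the scheduled-calls indicator masks the accessibility problem.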
Results of both tests indicated that again this filing season taxpayers had significant problems reaching IRS by telephone. For example, of 2,821 calls we made to IRS’ toll-free assistance number, we succeeded in reaching an assistor 249 times—a 9-percent accessibility rate. Although our test of the form ordering system produced better results—a 50-percent accessibility rate—there was still much room for improvement. As in past years, our measure of accessibility is based on the percent of incoming calls answered. We recognize that the number of calls coming in does not equal the number of taxpayers seeking assistance because many taxpayers are probably calling several times in an attempt to reach an assistor. We have been working with representatives from the Department of the Treasury and IRS to develop a better way to measure IRS’ performance in terms of the number of taxpayers, but those efforts have not been completed. With one significant exception, the computer systems IRS used to process returns and remittances in 1995 generally performed without major problems. The exception was a new document imaging system that IRS used in 1995 to process several forms, including individual income tax returns filed on Form 1040EZ. To process tax returns more efficiently and economically, IRS intends to move from a system that relies on labor-intensive data transcription to one that relies on electronic data capture. Electronic filing and TeleFile are steps in that direction. For returns filed on paper, IRS plans to achieve its objective through document imaging. The Service Center Recognition/Image Processing System (SCRIPS) is the first of two planned document imaging systems. Under IRS’ new organizational structure, to be implemented over the next several years, paper tax returns are to be processed in only 5 of the 10 existing service centers. Those five sites are to be known as submission processing centers. 
Because imaging is the process IRS intends to use to capture data from all paper returns in the future, SCRIPS was installed in only the five service centers that are to be submission processing centers. Each of the five centers experienced hardware and software problems with SCRIPS. Those problems included hardware problems that kept documents from feeding properly into the scanner and software problems that affected SCRIPS’ ability to accurately capture name and address information. Two of the five centers completely stopped 1040EZ processing on SCRIPS, and the other three centers stopped processing for extended periods of time. Those stoppages caused IRS to redirect some 1040EZ processing workload back to its manual data entry system. In total, IRS was able to process only about 56 percent of the 8.6 million forms 1040EZ that it had planned to process on SCRIPS. As a result of the problems with SCRIPS, IRS has not yet realized the system’s intended benefits. For instance, IRS had expected that increased processing rates would result in lower labor costs. However, IRS processed fewer forms 1040EZ per hour on SCRIPS in 1995 than it did in 1994 on the old system SCRIPS replaced. Thus, SCRIPS has not yet achieved any savings in labor costs associated with processing forms 1040EZ. In addition, IRS has postponed plans to redistribute additional workload to SCRIPS and to introduce the final form scheduled for SCRIPS. Appendix IV has additional information on the effects of problems with SCRIPS. Despite the many problems that limited SCRIPS’ effectiveness, IRS’ “processing cycle time” indicator, which measures the average number of days it takes service centers to process returns, showed that service centers processed returns faster in 1995 than IRS expected. 
More specifically, IRS’ data showed that the 10 service centers, in total, processed individual income tax returns in 1995 within a range of 5 to 9 days depending on the type of form (1040, 1040A, or 1040EZ) and the processing systems used (manual data entry or SCRIPS). That compares favorably with IRS’ processing cycle time goal of 11 days. However, that comparison is misleading because IRS’ 11-day goal was much higher than the 5- to 7-day cycle times the service centers had achieved in 1994. Comparing IRS’ 1995 cycle times to its 1994 cycle times rather than to its goal for 1995 shows that the cycle times in 1995 worsened in many cases. In 1994, for example, none of the 10 service centers averaged longer than 9 days to process any type of individual income tax return. In 1995, six centers took longer than 9 days, including four of the five centers that had SCRIPS. Throughout the filing season, IRS officials worked with the SCRIPS contractor to remedy the hardware and software problems. At the conclusion of the filing season, they met to assess the causes of these problems and determine the actions that needed to be taken before the next filing season. Among the actions being considered are upgrades to key components of the system that are intended to improve processing rates. We will continue to monitor IRS’ efforts to address SCRIPS problems and the effect of these efforts on IRS’ readiness for the 1996 filing season. Although IRS’ indicators point to a successful 1995 filing season, there were several problems that are not obvious from those indicators. IRS’ assertion that it issued refunds on paper returns in 1995 as quickly as it did in 1994 (i.e., within an average of 36 days) masks the fact that in 1995, unlike 1994, millions of taxpayers had valid refunds delayed for up to 8 weeks.
IRS chose to exclude those refunds in computing the refund timeliness indicator, even if IRS found no problem with the refund and eventually issued it, making the indicator an inaccurate measure of timeliness in 1995. Also, while we agree that IRS needs to ensure the validity of refund and EIC claims, we believe that IRS could have avoided some of the adverse reaction caused by the refund delays if it had done a better job of alerting taxpayers that even refunds on accurate returns might be delayed. A related source of potential taxpayer confusion was the apparent conflict between IRS’ promise, via its customer-service standards, to issue a refund within a certain number of days if the taxpayer filed a complete and accurate return and IRS’ decision to delay certain refunds well beyond the promised time frame while it verified that the returns were complete and accurate. Likewise, IRS’ ability to answer more calls than it estimated it could answer means little to the many taxpayers whose calls to IRS went unanswered or who gave up in frustration after receiving numerous busy signals. By focusing on the number of calls IRS expects to answer rather than the number of calls actually coming in or the number of taxpayers trying to reach IRS, the telephone assistance indicator provides a distorted picture of the accessibility of IRS’ telephone service. IRS is working to develop a better measure of accessibility. Such a measure, once developed, would be a more meaningful indicator of IRS’ telephone service during the filing season than the percent of scheduled calls indicator now used. Even though IRS reported success in meeting its returns-processing time frames, it did not achieve that success by following its plan to use the new SCRIPS equipment. IRS was able to achieve its overall goals only by rescheduling some workload back to its old manual data entry system. IRS has efforts under way to correct the SCRIPS problems.
If those problems cannot be resolved, the scheduling of other forms on SCRIPS will be delayed even longer, resulting in further lost benefits the system was intended to provide. If IRS plans to continue validating SSNs and delaying refunds in 1996, we recommend that it adjust its methodology for assessing refund timeliness to include delayed refunds associated with validly filed returns. Also, after IRS develops a measure of taxpayer assistance accessibility that focuses on the number of incoming calls and/or the number of taxpayers calling for assistance, we recommend that it include that measure among its key filing season performance indicators. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the Assistant Commissioner for Taxpayer Services, Director of Investigations (Tax Refund Fraud), Electronic Filing Executive, Director of Tax Forms and Publications, and Senior Program Analyst (Submission Processing), provided IRS’ comments in a November 13, 1995, meeting. The Chief of Taxpayer Services provided additional comments on November 30. IRS also provided a few factual clarifications that we have incorporated in this report where appropriate. The Chief of Taxpayer Services noted that IRS has emphasized the importance of having accurate SSNs on tax returns filed in 1995 by including a message on the cover of all tax packages and through many public service announcements. Our report acknowledges that fact. However, our concern is with the lack of sufficient warning to taxpayers that their refunds might still be delayed even if they had accurate SSNs on their tax returns. The Chief acknowledged that taxpayers who filed complete and accurate returns also had their refunds delayed to allow IRS additional time to verify the claims before issuing the refunds, and he said that IRS regretted any inconvenience.
Officials at the November 13 meeting mentioned that there was a lot of discussion within IRS, before the 1995 filing season, about how much IRS should divulge about its plans. They also noted that by the time IRS had finalized its plans for 1995 it would have been too late to make any changes to the tax packages and Publication 17, which had already been printed. They said that even if IRS had decided to tell taxpayers more, it would have been too costly to reprint those documents. IRS said that it plans to continue validating SSNs and delaying refunds in 1996 but has revised its SSN-validation procedures and criteria. Thus, it expects that taxpayers with valid SSNs will have only a small chance of having their refunds delayed in 1996. Because of those changes, IRS saw no need to revise its methodology for assessing refund timeliness. We agree that IRS would not have to revise its methodology if those changes have the expected result of limiting the extent to which valid refunds are delayed. The officials acknowledged, however, that if that result is not achieved, the methodology would have to be adjusted. We will be monitoring the impact of IRS’ revised procedures during our assessment of the 1996 filing season. The Chief of Taxpayer Services noted that IRS has been working with us to develop appropriate measures and had proposed that the accessibility of its toll-free telephone service be measured in three ways: (1) the percentage of individual callers served; (2) the number of attempts made by successful callers, expressed in the form of a range; and (3) the disposition of all calls, whether they were answered, received a busy signal, or were abandoned. Appendix III of this report includes a discussion of IRS data on accessibility using those three measures. The Chief said that IRS would continue working with us to finalize these measures and that, given those continuing discussions, IRS felt that our recommendation was premature. We disagree. 
IRS has already developed measures, as indicated above, and those measures represent reasonable indicators of the accessibility of IRS’ toll-free telephone service. Our continuing discussions with IRS are not centered on the measures themselves but on the reliability of the data used for those measures. Our recommendation merely seeks a commitment from IRS that one or more of those measures, once finalized, be included among IRS’ key filing season performance indicators. We do not believe it is premature to seek that commitment. Our draft report also included two proposed recommendations that were intended to provide taxpayers with better information on potential refund delays in 1996. We proposed that if IRS planned to continue validating SSNs and delaying refunds in 1996, it (1) clearly alert taxpayers, in the 1040 tax package and Publication 17, to the possibility that their refunds will be delayed even if there are no problems with the SSNs provided on their returns and (2) reconcile the inconsistency between those refund delays and IRS’ customer-service standard. In commenting on the proposed recommendations, IRS said that the problem we identified in 1995 with respect to adequately alerting taxpayers should not recur in 1996 because of the aforementioned changes to IRS’ SSN-validation procedures and criteria. IRS has, however, revised its customer-service standard on refunds by including a caveat to alert taxpayers that their refunds may be delayed if their returns are selected for further review. The revised standard has been included in the tax packages and Publication 17 for tax year 1995 (those that taxpayers will use in preparing returns to be filed in 1996). Assuming that IRS is correct in believing that its revised procedures will cause few taxpayers with valid SSNs to have their refunds delayed, we believe that further action is unnecessary. Accordingly, we have deleted the two proposed recommendations from our final report.
We are sending copies of this report to various congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Major contributors to this report are listed in appendix V. Please contact me on (202) 512-9110 if you have any questions. IRS envisions that by the year 2001, 90 percent of tax payment processing will be done by lockboxes. Under this concept, which is already being used for some types of tax payments, taxpayers are to mail payments to a lockbox, which is a postal rental box serviced by a commercial bank. The bank processes the payments and transfers the funds to a federal government account. The payment and payer information is then recorded on a computer tape and forwarded to IRS where the tape is to be used to update taxpayers’ accounts on IRS’ master file. IRS conducted two lockbox tests during the 1995 filing season to assess taxpayers’ willingness to use different procedures for mailing tax payments associated with their returns. For each test, IRS sent special Form 1040 packages to specific taxpayers. These packages included (1) mailing instructions that were different for each of the two tests and (2) a payment voucher that could be scanned by optical character recognition equipment. One test package contained one return envelope with two different tear-off address labels—one label addressed to the lockbox was to be used for a return with a tax balance due, while the other label addressed to the service center was to be used for a return with a zero balance or with a refund due to the taxpayer. Taxpayers with balance-due returns were instructed to include the return, payment, and voucher in one envelope and to affix the label addressed to the lockbox. 
The bank that serviced the lockbox separated the return from the payment, deposited the payment, recorded the payment information on a computer tape, and forwarded the return and the computer tape to IRS for processing. The other test package used two envelopes—one addressed to the service center, the other addressed to the lockbox. All taxpayers were instructed to send their returns in the envelope addressed to the service center. Taxpayers who owed a balance were to use the second envelope to send their payments and vouchers to the lockbox. The bank processed the payment and voucher as described above. As of mid-June 1995, IRS had not yet received the management information needed to evaluate the two lockbox tests. However, IRS had already made decisions to (1) continue testing the two-label method in certain tax packages for the 1996 filing season, (2) include a voucher inside every 1996 Form 1040 tax package (except 1040A and 1040EZ), (3) instruct practitioners to send all returns with remittances, no matter what 1040 tax form they are associated with, to the lockbox, and (4) implement the two-envelope method in all 1040 packages (with the possible exception of Forms 1040A and 1040EZ) starting with the 1997 filing season. According to an IRS official, the purposes of including a standard voucher in tax packages not included in the lockbox test are (1) to familiarize taxpayers with the use of a voucher and (2) to lighten the workload being processed through the old remittance processing system (RPS) at service centers. IRS plans to use a newer system, RPSII, to scan the scannable vouchers sent to the service centers from the test tax packages. According to an IRS official, the two-envelope method will not be used for the 1996 filing season because IRS cannot easily determine if the return inside the envelope is one that involves a refund. 
That determination is important because IRS gives priority to refund returns to help ensure that the return gets processed and the refund gets issued before the government has to pay interest on the refund. In the past, a service center knew a return did not involve a refund if it opened the envelope and found a check inside. Under the two-envelope system, the service center receives only the tax return and thus has no quick way to isolate those returns involving payments from those involving refunds. For the 1997 filing season, IRS is considering redesigning the tax forms to help service centers more easily identify the type of return received. According to information obtained from IRS, the use of lockboxes to process remittances associated with Forms 1040 in 1995 resulted in an interest cost avoidance of about $44.3 million by getting money deposited faster through the lockbox. This means that the Treasury did not have to borrow this money to meet certain government obligations. At the same time, according to IRS, it cost about $3.4 million to process those remittances through the lockboxes, leaving a net savings of about $40.9 million. IRS expects that the amount of interest cost avoidance will decrease each year as the lockboxes take on higher volumes of remittances, thereby slowing the banks’ productivity. As the program is expanded to all types of tax packages, volumes at the lockbox will increase while the average dollar amount remitted will decrease. Bank costs associated with the larger volumes are also expected to increase. Treasury Financial Manual Bulletin No. 94-07, dated March 1, 1994, provides that if the interest cost avoidance of a lockbox’s accelerated deposits is less than the cost charged by the lockbox, the agency (in this case, IRS) is required to pay all lockbox bank charges, other than those needed to maintain a regular bank account. Otherwise, Treasury’s Financial Management Service (FMS) pays the charges.
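The net-savings figure above is straightforward arithmetic on the amounts IRS reported; as a check, here is a minimal sketch using the dollar figures from the text (the variable names are ours, not IRS terms):

```python
# Figures IRS reported for 1995 Form 1040 lockbox processing, in millions of dollars.
interest_cost_avoidance = 44.3   # interest Treasury avoided through faster deposits
lockbox_bank_cost = 3.4          # cost of processing remittances through lockboxes

net_savings = interest_cost_avoidance - lockbox_bank_cost
print(f"Net savings: about ${net_savings:.1f} million")

# Under Treasury Financial Manual Bulletin No. 94-07, FMS pays the bank
# charges so long as interest cost avoidance exceeds those charges.
fms_pays = interest_cost_avoidance > lockbox_bank_cost
print("FMS pays lockbox charges:", fms_pays)
```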
Because the amount of interest cost avoidance resulting from IRS’ lockbox program has exceeded the related bank charges, FMS has paid those charges. According to an IRS National Office official responsible for the lockbox program, neither IRS nor FMS expects the amount of interest cost avoidance in the future to fall below the amount of bank charges. In 1995, IRS expanded its efforts to combat refund fraud. Much of what IRS did involved verifying SSNs, with an emphasis on returns claiming the EIC. IRS was looking for missing SSNs, SSNs that did not match the Social Security Administration’s records, and SSNs that had already been used on another return filed in 1995. As we discussed in a June 1995 testimony before the Senate Finance Committee, the expanded procedures for selecting paper returns to verify SSNs identified many problem returns, but some that should have been selected for SSN verification were not. In total, IRS identified about as many paper returns with invalid SSNs as it had expected to handle during the filing season, but volumes fluctuated widely among IRS service centers. For example, one service center received about 360 percent of its expected volume, while another received only 61 percent. As a result, service centers used somewhat different criteria for determining which taxpayers would be asked to verify SSNs and to provide additional evidence of their EIC eligibility. Computer problems during the filing season also caused some returns not to be selected for SSN verification when they should have been. IRS also experienced some problems as it began checking for duplicate SSNs. These problems included difficulties in constructing the database to identify duplicate SSNs, poorly organized computer listings that enforcement personnel found difficult to use, and cumbersome procedures for coordinating the work of different IRS service centers.
IRS is analyzing the results of the 1995 initiative and plans to make changes for 1996. Further automation of the process is a primary goal. We were not able to assess the success of IRS’ initiatives. At the time we completed our audit work, information was not yet available on such things as the number of (1) duplicate SSNs identified and resolved by IRS, (2) EIC claims adjusted or withdrawn after IRS questioned a taxpayer about an SSN, or (3) erroneous SSNs corrected as a result of IRS’ efforts. Some information was available, however, that sheds light on the results of IRS’ efforts. According to IRS:

- As a result of the 6- to 8-week delay on EIC refunds, IRS was able to stop an additional $6 million in fraudulent refund claims that, in past years, would have been issued before IRS had detected the fraud.
- IRS had received 18.9 million EIC claims as of the end of September 1995, compared with 14.8 million claims at the same time in 1994. All of that increase was due to a legislative change that made persons without qualifying children eligible for the credit in 1995. IRS had expected to receive about 20 million claims in 1995, including about 5.3 million from persons without qualifying children.
- EIC claims in 1995 totaled about $20.9 billion as of September 30, compared with about $15.2 billion as of October 1, 1994. Only about 12 percent of that increase was attributed to claims from taxpayers with no qualifying children.
- As a result of IRS’ scrutiny of EIC claims, 3.2 million taxpayers received their refunds in two checks because the EIC portion of their refund was temporarily delayed.
- IRS tracked 400 returns that had been rejected by the electronic filing system and found, among other things, that 113 (28 percent) of the individuals involved subsequently filed on paper, using the same SSN that had been rejected by the electronic filing system, and were issued a refund.
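The growth in EIC claims can be quantified from the figures IRS reported; a quick sketch (claims in millions, dollars in billions, taken from the text above):

```python
# EIC claims reported by IRS as of late September of each year.
claims_1995, claims_1994 = 18.9, 14.8      # number of claims, in millions
dollars_1995, dollars_1994 = 20.9, 15.2    # claim amounts, in billions of dollars

claim_increase = claims_1995 - claims_1994
dollar_increase = dollars_1995 - dollars_1994
# Only about 12 percent of the dollar increase was attributed to taxpayers
# with no qualifying children.
no_child_share = 0.12 * dollar_increase

print(f"Increase in claims: about {claim_increase:.1f} million")
print(f"Increase in dollars: about ${dollar_increase:.1f} billion")
print(f"Attributed to claimants with no qualifying children: about ${no_child_share:.1f} billion")
```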
To assess the ability of taxpayers to reach IRS by telephone to ask a question about the tax law or their accounts or to order forms or publications, we conducted two tests—one of IRS’ toll-free telephone assistance system and the other of IRS’ toll-free form-ordering system. To conduct the tests, we placed telephone calls at various times during each workday from January 30 through February 11 and from April 3 through April 15, 1995. We made our calls from seven metropolitan areas—Atlanta; Chicago; Cincinnati; Kansas City; New York; San Francisco; and Washington, D.C. Each attempt to contact IRS consisted of up to five calls at 1-minute intervals. If we reached IRS during any of the five calls and made contact with an assistor, we considered the attempt successful. If we reached IRS during any of the five calls but were put on hold for more than 7 minutes without talking to an assistor, we abandoned the call, did not dial again, and considered the attempt unsuccessful. If we received a busy signal, we hung up, waited 1 minute, and then redialed. If after four redials (five calls in total) we had not reached IRS, we considered the attempt unsuccessful. We tested the accessibility of the toll-free telephone assistance system IRS tells taxpayers to call if they have a question about their account, the tax law, or IRS procedures. Of 745 attempts to contact an assistor, 249 (33 percent) were successful—87 on the first call, 55 on the second call, and 107 after 3 to 5 calls. In another 89 cases (12 percent), we got into IRS’ system but were put on hold for more than 7 minutes and thus hung up before making contact with an assistor. The remaining 407 attempts (55 percent) were aborted after we received busy signals on each of our 5 dialings. Our 745 attempts to contact an assistor required a total of 2,821 calls to IRS’ toll-free telephone number. Of those 2,821 calls, we succeeded in getting through to an IRS assistor 249 times—a 9-percent accessibility rate. 
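The two rates reported for this test follow directly from the raw counts; a short sketch reproducing the arithmetic (counts taken from the text above):

```python
# GAO test of IRS' toll-free assistance line, 1995 filing season.
successful_attempts = 249   # attempts on which we reached an assistor
total_attempts = 745        # each attempt consisted of up to five calls
total_calls = 2821          # individual calls placed across all attempts

attempt_success_rate = successful_attempts / total_attempts
call_accessibility_rate = successful_attempts / total_calls

print(f"Attempts that succeeded: {attempt_success_rate:.0%}")    # about 33 percent
print(f"Calls that got through: {call_accessibility_rate:.0%}")  # about 9 percent
```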
In conducting our test, we did not ask questions of the assistors because it was not our intent to assess the accuracy of their assistance. IRS does its own test of accuracy, and we have assured ourselves in the past about the reliability of IRS’ methodology. IRS’ test data for 1995 showed an accuracy rate of 90.1 percent as of April 15, 1995. That compares with a rate of 89 percent for the same period in 1994. One way taxpayers can obtain tax forms and publications is to place an order through IRS’ telephone form-ordering system. The order will then be filled by one of IRS’ three forms distribution centers. To determine the level of service IRS provides to taxpayers trying to access this ordering system, we conducted another test using the same procedures used for the first test. Our results showed that the form-ordering system was much more accessible than the toll-free telephone assistance system. However, there was still much room for improvement. Of 484 attempts to contact a distribution center representative, 443 (91.5 percent) were successful—299 on the first call, 76 on the second call, and 68 after 3 to 5 calls—and 41 (8.5 percent) were aborted after five dialings. We did not abandon any calls when placed on hold because we were not held waiting for more than 7 minutes. Our 484 attempts to contact a representative required 883 calls. Of those 883 calls, we succeeded in getting through to an IRS representative 443 times—a 50-percent accessibility rate. As with the first test, our intent was to determine how easy it was to reach IRS over the telephone. We did not assess how well the distribution centers filled orders for tax forms and publications because (1) our checks in recent years showed that IRS was doing a good job of filling orders, (2) IRS contracts for its own test of distribution center performance, and (3) our prior review of the contractor’s methodology resulted in changes that have improved its reliability. 
The contractor measures the length of time from when an order is placed until the contractor receives notification about that order (either by full or partial receipt of the material ordered or notification that the material has been back ordered). The contractor also measures accuracy by comparing the items ordered with those received. The contractor’s results for the first part of the fiscal year showed that (1) it took the distribution centers an average of 16 days to fill an order, which is within IRS’ stated time frame of 9 to 21 days and (2) 97.9 percent of the orders were filled correctly, which exceeded IRS’ goal of 96.5 percent. We have been working with representatives from the Department of the Treasury and IRS to develop a better way to measure the accessibility of IRS’ telephone service. Although there are still some issues to be resolved, such as how to best measure the number of times a caller had to dial before reaching an assistor, the data compiled by IRS for 1995 confirmed the results of our tests. “For the period January 1, 1995, to April 15, 1995, an estimated 46.9 million callers made 236.1 million call attempts to IRS for assistance. This equates to an average of 5 attempts per caller. We answered 19.2 million calls which represents 41 percent of the callers. Of the 19.2 million callers who received an answer, 50 percent were answered within approximately 1 attempt; 75 percent were answered within approximately 5 attempts.” “Of the 236.1 million attempts, 19.2 million received an answer, which represents 8 percent of the total attempts. The remaining 216.9 million call attempts either received busy signals or were terminated by the callers because they did not want to wait in queue for an assistor.” As shown in figure III.1, IRS’ reported accessibility rate of 8 percent continued a downward trend since 1989 and was 13 percentage points below 1994. However, the 1995 accuracy rate on answers to tax law questions continued an upward trend. 
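IRS’ quoted figures are internally consistent; a brief sketch checking them against one another (numbers, in millions, from the quoted passage; the callers-served figure assumes each answered call represents a distinct caller, as IRS’ framing does):

```python
# IRS data for January 1 through April 15, 1995, in millions.
callers = 46.9     # estimated individual callers seeking assistance
attempts = 236.1   # total call attempts made by those callers
answered = 19.2    # calls IRS answered

avg_attempts_per_caller = attempts / callers
pct_callers_served = answered / callers
pct_attempts_answered = answered / attempts

print(f"Average attempts per caller: {avg_attempts_per_caller:.0f}")  # about 5
print(f"Callers served: {pct_callers_served:.0%}")                    # about 41 percent
print(f"Attempts answered: {pct_attempts_answered:.0%}")              # about 8 percent
```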
SCRIPS is a multimillion-dollar system designed to process income tax returns filed on Form 1040EZ and other IRS documents by electronically scanning the document, capturing the data, and storing an image of the scanned document. SCRIPS was tested in Cincinnati in 1994 and used in five processing centers—Austin, Cincinnati, Kansas City, Memphis, and Ogden—in 1995. In conjunction with the implementation of SCRIPS, IRS consolidated the processing of IRP documents at the five SCRIPS centers and FTD coupons at four of the five SCRIPS centers. IRS continued to process forms 1040EZ at all 10 service centers but planned to consolidate 1040EZ processing in the five SCRIPS centers by 1996. IRS planned to start processing all forms 941 received at the five SCRIPS centers in July 1995 and redistribute 100 percent of the forms 941 workload from non-SCRIPS centers by 1996. IRS planned to process 76.4 million FTDs, 57.4 million IRP documents, and 8.6 million forms 1040EZ on SCRIPS during the 1995 filing season. IRS expected that SCRIPS would provide faster and more accurate document processing and lower maintenance costs, reduce manual data entry, lessen error correction, and minimize document storage requirements. However, extensive downtime and slower-than-expected processing rates during the filing season limited the effectiveness of SCRIPS. The impact of these problems was felt most in the processing of forms 1040EZ. Some centers stopped 1040EZ processing on SCRIPS completely or for extended periods of time. As a result, IRS was able to process only about 56 percent of the expected 8.6 million forms 1040EZ on SCRIPS. Although the centers were able to process the rest of the forms 1040EZ on their old systems, doing so required additional resources and costs, and some centers reported that the average time it took to process a return increased because of the SCRIPS problems.
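The 56-percent figure implies how much 1040EZ work had to be redirected to the old systems; a minimal sketch (volumes, in millions of forms, from the text above):

```python
# SCRIPS 1040EZ workload, 1995 filing season, in millions of forms.
planned_on_scrips = 8.6   # forms 1040EZ IRS planned to process on SCRIPS
share_processed = 0.56    # portion SCRIPS actually handled (about 56 percent)

processed_on_scrips = planned_on_scrips * share_processed
redirected_to_old_systems = planned_on_scrips - processed_on_scrips

print(f"Processed on SCRIPS: about {processed_on_scrips:.1f} million")
print(f"Redirected to the old systems: about {redirected_to_old_systems:.1f} million")
```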
Processing center officials told us of budget overruns as a result of slower-than-expected SCRIPS processing times. IRS had scheduled 25.6 staff years for processing other-than-full-paid forms 1040EZ but used 66.5 staff years. During the 1995 filing season, IRS processed 64 forms 1040EZ per hour, 28 percent slower than the 89 documents per hour processed in 1994 on the systems that SCRIPS replaced. An official at one processing center told us that as a result of the problems with SCRIPS, the center had to (1) delay furloughing seasonal staff, (2) work 2 additional weekends of overtime (about 18,000 additional overtime hours) to get returns processed within established time frames, (3) reinstall old optical character recognition equipment and add additional terminals at a cost of about $4,300, and (4) train 163 additional employees on how to use the old processing systems. IRS’ Internal Audit issued a report on IRS’ 1994 SCRIPS test that cited several factors that may have contributed to the problems encountered in 1995. Internal Audit found that (1) SCRIPS had not been fully tested to meet output and storage requirements, (2) IRS accepted the system without conducting required acceptance and equipment testing, and (3) SCRIPS was not meeting contractual requirements for capturing Form 1040EZ and IRP data accurately. Had IRS conducted the proper testing, many of the problems encountered during the 1995 filing season might have been identified and corrected before system implementation. At the conclusion of our audit work, IRS was assessing SCRIPS performance to identify problem causes and needed corrective action. In the meantime, IRS postponed plans to process Form 941 on SCRIPS and redistribute 1040EZ workload from the five centers that do not have SCRIPS. We will be monitoring IRS’ efforts to improve SCRIPS performance, especially as they affect IRS’ readiness for the 1996 filing season.

Doris J. Hynes, Evaluator-in-Charge
H. Yong Meador, Evaluator
Marge Vallazza, Reports Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the Internal Revenue Service’s (IRS) performance during the 1995 tax filing season, focusing on the: (1) processing of individual income tax returns and refunds; (2) ability of taxpayers to reach IRS by telephone; and (3) new IRS computer system for processing returns.
GAO found that: (1) IRS delayed about 7 million refunds for up to 8 weeks because of systematic checks for questionable refund claims; (2) IRS delayed refunds on returns with missing or invalid social security numbers (SSN) to identify duplicate uses of the same SSN and fraud schemes; (3) IRS could have alleviated negative publicity about the delays had it better forewarned taxpayers about the potential for delays; (4) although IRS answered 11 percent more calls from taxpayers in the 1995 filing season, the chance of reaching an IRS assistor was not very good; (5) various IRS processing and accessibility goals mask problems in its actual performance; and (6) IRS experienced numerous problems with its electronic image computer system, including extensive downtime and slow processing rates. |
The electricity industry, as shown in figure 1, is composed of four distinct functions: generation, transmission, distribution, and system operations. Once electricity is generated—whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy—it is generally sent through high-voltage, high-capacity transmission lines to local electricity distributors. Once there, electricity is transformed into a lower voltage and sent through local distribution lines for consumption by industrial plants, businesses, and residential consumers. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator constantly balance the generation and consumption of power. Utilities own and operate electricity assets, which may include generation plants, transmission lines, distribution lines, and substations—structures often seen in residential and commercial areas that contain technical equipment such as switches and transformers to ensure smooth, safe flow of current and regulate voltage. Utilities may be owned by investors, municipalities, and individuals (as in cooperative utilities). System operators—sometimes affiliated with a particular utility or sometimes independent and responsible for multiple utility areas—manage the electricity flows. These system operators manage and control the generation, transmission, and distribution of electric power using control systems—IT- and network-based systems that monitor and control sensitive processes and physical functions, including opening and closing circuit breakers. As we have previously reported, the effective functioning of the electricity industry is highly dependent on these control systems. 
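To illustrate the balancing task described above, here is a toy sketch of a control loop in which a system operator nudges generation toward instantaneous demand. It is purely illustrative; the function name, load values, and ramp limit are hypothetical and do not represent any actual utility control system:

```python
def rebalance(generation_mw: float, demand_mw: float, ramp_limit_mw: float) -> float:
    """Move generation toward demand, limited by how fast plants can ramp."""
    gap = demand_mw - generation_mw
    # Clamp the adjustment to what the generating units can change this interval.
    adjustment = max(-ramp_limit_mw, min(ramp_limit_mw, gap))
    return generation_mw + adjustment

# Hypothetical fluctuating load over three control intervals, in megawatts.
generation = 900.0
for demand in (950.0, 1000.0, 980.0):
    generation = rebalance(generation, demand, ramp_limit_mw=30.0)
    print(f"demand={demand:.0f} MW -> generation={generation:.0f} MW")
```

In a real grid this role is played by the control systems the passage describes, which monitor flows continuously and adjust generation and switching far faster and with far more inputs than this sketch suggests.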
However, for many years, aspects of the electricity network lacked (1) adequate technologies—such as sensors—to allow system operators to monitor how much electricity was flowing on distribution lines, (2) communications networks to further integrate parts of the electricity grid with control centers, and (3) computerized control devices to automate system management and recovery. As the electricity industry has matured and technology has advanced, utilities have begun taking steps to update the electricity grid—the transmission and distribution systems—by integrating new technologies and additional IT systems and networks. Though utilities have regularly taken such steps in the past, industry and government stakeholders have begun to articulate a broader, more integrated vision for transforming the electricity grid into one that is more reliable and efficient; facilitates alternative forms of generation, including renewable energy; and gives consumers real-time information about fluctuating energy costs. This vision—the smart grid—would increase the use of IT systems and networks and two-way communication to automate actions that system operators formerly had to make manually. Electricity grid modernization is an ongoing process, and initiatives have commonly involved installing advanced metering infrastructure (smart meters) on homes and commercial buildings that enable two-way communication between the utility and customer. Other initiatives include adding “smart” components to provide the system operator with more detailed data on the conditions of the transmission and distribution systems and better tools to observe the overall condition of the grid (referred to as “wide-area situational awareness”). These include advanced, smart switches on the distribution system that communicate with each other to reroute electricity around a troubled line and high-resolution, time-synchronized monitors—called phasor measurement units—on the transmission system. 
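As a concrete illustration of the "smart" monitoring components mentioned above, the sketch below models the core idea behind phasor measurement units: because samples carry a common, GPS-derived timestamp, phase angles measured at distant substations can be compared at the same instant. All station names and numeric values here are hypothetical.

```python
# Illustrative sketch of why time synchronization enables wide-area
# situational awareness: phasor measurement units (PMUs) tag voltage
# measurements with a shared GPS-derived timestamp, so operators can
# compare phase angles at widely separated substations.
from dataclasses import dataclass

@dataclass
class PhasorSample:
    station: str
    timestamp_us: int      # microseconds since epoch, GPS-synchronized
    voltage_kv: float
    phase_deg: float       # voltage phase angle

def angle_difference(a: PhasorSample, b: PhasorSample) -> float:
    """Phase-angle separation between two buses at the same instant.
    A growing separation can indicate stress on the grid."""
    if a.timestamp_us != b.timestamp_us:
        raise ValueError("samples are not time-aligned")
    return abs(a.phase_deg - b.phase_deg)

s1 = PhasorSample("Substation A", 1_700_000_000_000_000, 345.1, 12.4)
s2 = PhasorSample("Substation B", 1_700_000_000_000_000, 344.8, 31.9)
print(f"angle separation: {angle_difference(s1, s2):.1f} degrees")
```

The guard on mismatched timestamps is the key design point: without synchronized clocks, comparing angles from different substations would be meaningless.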
The use of smart grid systems may have a number of benefits, including improved reliability from fewer and shorter outages, downward pressure on electricity rates resulting from the ability to shift peak demand, an improved ability to shift to alternative sources of energy, and an improved ability to detect and respond to potential attacks on the grid. Both the federal government and state governments have authority for overseeing the electricity industry. For example, the Federal Energy Regulatory Commission (FERC) regulates rates for wholesale electricity sales and transmission of electricity in interstate commerce. This includes approving whether to allow utilities to recover the costs of investments they make to the transmission system, such as smart grid investments. Meanwhile, local distribution and retail sales of electricity are generally subject to regulation by state public utility commissions. State and federal authorities also play key roles in overseeing the reliability of the electric grid. State regulators generally have authority to oversee the reliability of the local distribution system. The North American Electric Reliability Corporation (NERC) is the federally designated U.S. Electric Reliability Organization, and is overseen by FERC. NERC has responsibility for conducting reliability assessments and developing and enforcing mandatory standards to ensure the reliability of the bulk power system—i.e., facilities and control systems necessary for operating the transmission network and certain generation facilities needed for reliability. NERC develops reliability standards collaboratively through a deliberative process involving utilities and others in the industry; the standards are then sent to FERC for approval. These standards include critical infrastructure protection standards for protecting electric utility-critical and cyber-critical assets.
FERC has responsibility for reviewing and approving the reliability standards or directing NERC to modify them. In addition, the Energy Independence and Security Act of 2007 established federal policy to support the modernization of the electricity grid and required actions by a number of federal agencies, including the National Institute of Standards and Technology (NIST), FERC, and the Department of Energy. With regard to cybersecurity, the act required NIST and FERC to take the following actions:

NIST was to coordinate development of a framework that includes protocols and model standards for information management to achieve interoperability of smart grid devices and systems. As part of its efforts to accomplish this, NIST planned to identify cybersecurity standards for these systems and also identified the need to develop guidelines for organizations such as electric companies on how to securely implement smart grid systems. In January 2011, we reported that NIST had identified 11 standards involving cybersecurity that support smart grid interoperability and had issued a first version of a cybersecurity guideline.

FERC was to adopt standards resulting from NIST’s efforts that it deemed necessary to ensure smart grid functionality and interoperability. However, according to FERC officials, the statute did not provide specific additional authority to allow FERC to require utilities or manufacturers of smart grid technologies to follow these standards. As a result, any standards identified and developed through the NIST-led process are voluntary unless regulators use other authorities to indirectly compel utilities and manufacturers to follow them.

Threats to systems supporting critical infrastructure—which includes the electricity industry and its transmission and distribution systems—are evolving and growing.
In February 2011, the Director of National Intelligence testified that, in the past year, there had been a dramatic increase in malicious cyber activity targeting U.S. computers and networks, including a more than tripling of the volume of malicious software since 2009. Different types of cyber threats from numerous sources may adversely affect computers, software, networks, organizations, entire industries, or the Internet. Cyber threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. Table 1 shows common sources of cyber threats. These sources of cyber threats make use of various techniques, or exploits, that may adversely affect computers, software, a network, an organization’s operation, an industry, or the Internet itself. Table 2 shows common types of cyber exploits. The potential impact of these threats is amplified by the connectivity between information systems, the Internet, and other infrastructures, creating opportunities for attackers to disrupt critical services, including electrical power. In addition, the increased reliance on IT systems and networks also exposes the electric grid to potential and known cybersecurity vulnerabilities.
These vulnerabilities include an increased number of entry points and paths that can be exploited by potential adversaries and other unauthorized users; the introduction of new, unknown vulnerabilities due to an increased use of new system and network technologies; wider access to systems and networks due to increased connectivity; and an increased amount of customer information being collected and transmitted, providing incentives for adversaries to attack these systems and potentially putting private information at risk of unauthorized disclosure and use. In May 2008, we reported that the corporate network of the Tennessee Valley Authority—the nation’s largest public power company, which generates and distributes power in an area of about 80,000 square miles in the southeastern United States—contained security weaknesses that could lead to the disruption of control systems networks and devices connected to that network. We made 19 recommendations to improve the implementation of information security program activities for the control systems governing the Tennessee Valley Authority’s critical infrastructures and 73 recommendations to address specific weaknesses in security controls. The Tennessee Valley Authority concurred with the recommendations and has taken steps to implement them. We and others have also reported that smart grid and related systems have known cyber vulnerabilities. For example, cybersecurity experts have demonstrated that certain smart meters can be successfully attacked, possibly resulting in disruption to the electricity grid. In addition, we have reported that control systems used in industrial settings such as electricity generation have vulnerabilities that could result in serious damages and disruption if exploited.
Further, in 2007, the Department of Homeland Security, in cooperation with the Department of Energy, ran a test that demonstrated that a vulnerability commonly referred to as “Aurora” had the potential to allow unauthorized users to remotely control, misuse, and cause damage to a small commercial electric generator. Moreover, in 2008, the Central Intelligence Agency reported that malicious activities against IT systems and networks have caused disruption of electric power capabilities in multiple regions overseas, including a case that resulted in a multicity power outage. As government, private sector, and personal activities continue to move to networked operations, the threat will continue to grow. Cyber incidents continue to affect the electricity industry. For example, the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team recently noted that the number of reported cyber incidents affecting control systems of companies in the electricity sector increased from 3 in 2009 to 25 in 2011. In addition, we and others have reported that cyber incidents can affect the operations of energy facilities, as the following examples illustrate: Smart meter attacks. In April 2012, it was reported that sometime in 2009 an electric utility asked the FBI to help it investigate widespread incidents of power thefts through its smart meter deployment. The report indicated that the attackers hacked into the smart meters to change the power consumption recording settings using software available on the Internet. Phishing attacks directed at energy sector. The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team reported that, in 2011, it deployed incident response teams to an electric bulk provider and an electric utility that had been victims of broader phishing attacks. The team found three malware samples and detected evidence of a sophisticated threat actor. Stuxnet.
In July 2010, a sophisticated computer attack known as Stuxnet was discovered. It targeted control systems used to operate industrial processes in the energy, nuclear, and other critical sectors. It was designed to exploit a combination of vulnerabilities to gain access to its target and modify code to change the process. Browns Ferry power plant. In August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device. Northeast power blackout. In August 2003, failure of the alarm processor in the control system of FirstEnergy, an Ohio-based electric utility, prevented control room operators from having adequate situational awareness of critical operational changes to the electrical grid. When several key transmission lines in northern Ohio tripped due to contact with trees, they initiated a cascading failure of 508 generating units at 265 power plants across eight states and a Canadian province. Davis-Besse power plant. The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as Slammer infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant’s process computer failed, and it took about 6 hours for it to become available again. Multiple entities have taken steps to help secure the electricity grid, including NERC, NIST, FERC, and the Departments of Homeland Security and Energy. NERC has performed several activities that are intended to secure the grid. It has developed eight critical infrastructure standards for protecting electric utility-critical and cyber-critical assets.
The standards established requirements for the following key cybersecurity-related controls: critical cyber asset identification, security management controls, personnel and training, electronic “security perimeters,” physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans for critical cyber assets. In December 2011, we reported that NERC’s eight cybersecurity standards, along with supplementary documents, were substantially similar to NIST guidance applicable to federal agencies. NERC also has published security guidelines for companies to consider for protecting electric infrastructure systems, although such guidelines are voluntary and typically not checked for compliance. For example, NERC’s June 2010 Security Guideline for the Electricity Sector: Identifying Critical Cyber Assets is intended to assist entities in identifying and developing a list of critical cyber assets as described in the mandatory standards. NERC also has enforced compliance with mandatory cybersecurity standards through its Compliance Monitoring and Enforcement Program, subject to FERC review. NERC has assessed monetary penalties for violations of its cybersecurity standards. NIST, in implementing its responsibilities under the Energy Independence and Security Act of 2007 with regard to standards to achieve interoperability of smart grid systems, planned to identify cybersecurity standards for these systems. In January 2011, we reported that it had identified 11 standards involving cybersecurity that support smart grid interoperability and had issued a first version of a cybersecurity guideline.
NIST’s cybersecurity guidelines largely addressed key cybersecurity elements, such as assessment of cybersecurity risks and identification of security requirements (i.e., controls); however, its guidelines did not address an important element essential to securing smart grid systems—the risk of attacks using both cyber and physical means. NIST officials said that they intended to update the guidelines to address this and other missing elements they identified, but their plan and schedule for doing so were still in draft form. We recommended that NIST finalize its plan and schedule for incorporating missing elements, and NIST officials agreed. We are currently working with officials to determine the status of their efforts to address these recommendations. FERC also has taken several actions to help secure the electricity grid. For example, it reviewed and approved NERC’s eight critical infrastructure protection standards in 2008. Since then, in its role of overseeing the development of reliability standards, the commission has directed NERC to make numerous changes to standards to improve cybersecurity protections. However, according to the FERC Chairman’s February 2012 letter in response to our report on electricity grid modernization, many of the outstanding directives have not been incorporated into the latest versions of the standards. The Chairman added that the commission would continue to work with NERC to incorporate the directives. In addition, FERC has authorized NERC to enforce mandatory reliability standards for the bulk power system, while retaining its authority to enforce the same standards and assess penalties for violations. We reported in January 2011 that FERC also had begun reviewing initial smart grid standards identified as part of NIST efforts. However, in July 2011, the commission declined to adopt the initial smart grid standards identified as a part of the NIST efforts, finding that there was insufficient consensus to do so. 
The Department of Homeland Security has been designated by federal policy as the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect cyber-critical infrastructures and key resources. Under this role, the Department’s National Cyber Security Division’s Control Systems Security Program has issued recommended practices to reduce risks to industrial control systems within and across all critical infrastructure and key resources sectors, including the electricity subsector. For example, in April 2011, the program issued the Catalog of Control Systems Security: Recommendations for Standards Developers, which is intended to provide a detailed listing of recommended controls from several standards related to control systems. The program also manages and operates the Industrial Control Systems Cyber Emergency Response Team to respond to and analyze control-systems-related incidents, provide onsite support for incident response and forensic analysis, provide situational awareness in the form of actionable intelligence, and share and coordinate vulnerability information and threat analysis through information products and alerts. For example, it reported providing on-site assistance to six companies in the electricity subsector, including a bulk electric power provider and multiple electric utilities, during 2009-2011. The Department of Energy is the lead federal agency responsible for coordinating critical infrastructure protection efforts with the public and private stakeholders in the energy sector, including the electricity subsector. In this regard, we have reported that officials from the Department’s Office of Electricity Delivery and Energy Reliability stated that the department was involved in efforts to assist the electricity sector in the development, assessment, and sharing of cybersecurity standards.
For example, the department was working with NIST to enable state power producers to use current cybersecurity guidance. In May 2012, the department released the Electricity Subsector Cybersecurity Risk Management Process. The guideline is intended to ensure that cybersecurity risks for the electric grid are addressed at the organization, mission or business process, and information system levels. We have not evaluated this guide. In our January 2011 report, we identified a number of key challenges that industry and government stakeholders faced in ensuring the cybersecurity of the systems and networks that support our nation’s electricity grid. These included the following: There was a lack of a coordinated approach to monitor whether industry follows voluntary standards. As mentioned above, under the Energy Independence and Security Act of 2007, FERC is responsible for adopting cybersecurity and other standards that it deems necessary to ensure smart grid functionality and interoperability. However, FERC had not developed an approach coordinated with other regulators to monitor, at a high level, the extent to which industry will follow the voluntary smart grid standards it adopts. There had been initial efforts by regulators to share views, through, for example, a collaborative dialogue between FERC and the National Association of Regulatory Utility Commissioners, which had discussed the standards-setting process in general terms. Nevertheless, according to officials from FERC and the National Association of Regulatory Utility Commissioners, FERC and the state public utility commissions had not established a joint approach for monitoring how widely voluntary smart grid standards are followed in the electricity industry or developed strategies for addressing any gaps.
Moreover, FERC had not coordinated in such a way with groups representing public power or cooperative utilities, which are not routinely subject to FERC’s or the states’ regulatory jurisdiction for rate setting. We noted that without a good understanding of whether utilities and manufacturers are following smart grid standards, it would be difficult for FERC and other regulators to know whether a voluntary approach to standards setting is effective or if changes are needed. Aspects of the current regulatory environment made it difficult to ensure the cybersecurity of smart grid systems. In particular, jurisdictional issues and the difficulties associated with responding to continually evolving cyber threats were a key regulatory challenge to ensuring the cybersecurity of smart grid systems as they are deployed. Regarding jurisdiction, experts we spoke with expressed concern that there was a lack of clarity about the division of responsibility between federal and state regulators, particularly regarding cybersecurity. While jurisdictional responsibility has historically been determined by whether a technology is located on the transmission or distribution system, experts raised concerns that smart grid technology may blur these lines. For example, devices such as smart meters deployed on parts of the grid traditionally subject to state jurisdiction could, in the aggregate, have an impact on those parts of the grid that federal regulators are responsible for—namely, the reliability of the transmission system. There was also concern about the ability of regulatory bodies to respond to evolving cybersecurity threats. For example, one expert questioned the ability of government agencies to adapt to rapidly evolving threats, while another highlighted the need for regulations to be capable of responding to the evolving cybersecurity issues.
In addition, our experts expressed concern with agencies developing regulations in the future that are overly specific in their requirements, such as those specifying the use of a particular product or technology. Consequently, unless steps are taken to mitigate these challenges, regulations may not be fully effective in protecting smart grid technology from cybersecurity threats. Utilities were focusing on regulatory compliance instead of comprehensive security. The existing federal and state regulatory environment creates a culture within the utility industry of focusing on compliance with cybersecurity requirements, instead of a culture focused on achieving comprehensive and effective cybersecurity. Specifically, experts told us that utilities focus on achieving minimum regulatory requirements rather than designing a comprehensive approach to system security. In addition, one expert stated that security requirements are inherently incomplete, and having a culture that views the security problem as being solved once those requirements are met will leave an organization vulnerable to cyber attack. Consequently, without a comprehensive approach to security, utilities leave themselves open to unnecessary risk. There was a lack of security features built into smart grid systems. Security features are not consistently built into smart grid devices. For example, experts told us that certain currently available smart meters had not been designed with a strong security architecture and lacked important security features, including event logging and forensics capabilities that are needed to detect and analyze attacks. In addition, our experts stated that smart grid home area networks—used for managing the electricity usage of appliances and other devices in the home—did not have adequate security built in, thus increasing their vulnerability to attack. 
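The missing event-logging and forensics capability described above can be sketched in simplified form. The hash-chained log below is a hypothetical illustration, not any vendor's actual meter firmware: each security event is chained to the previous one by a hash, so after-the-fact tampering with the recorded events becomes detectable during forensic analysis.

```python
# Minimal sketch of a tamper-evident event log for a smart meter.
# Each entry's digest covers the previous digest plus the event payload,
# forming a chain: altering any recorded event breaks verification.
import hashlib
import json

class EventLog:
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "digest": digest})

    def verify(self) -> bool:
        """Recompute the chain; False if any stored event was altered."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["digest"]:
                return False
            prev = e["digest"]
        return True

log = EventLog()
log.append({"t": 1, "type": "firmware_update_attempt", "result": "rejected"})
log.append({"t": 2, "type": "admin_login", "result": "failed"})
print(log.verify())                                # True for an unmodified log
log.entries[0]["event"]["result"] = "accepted"     # simulate tampering
print(log.verify())                                # False: the chain no longer matches
```

A real design would also protect the log against truncation and store digests in hardware, but even this sketch shows why meters without any such capability leave investigators with no trustworthy record of an attack.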
Without securely designed smart grid systems, utilities may lack the capability to detect and analyze attacks, increasing the risk that attacks will succeed and utilities will be unable to prevent them from recurring. The electricity industry did not have an effective mechanism for sharing information on cybersecurity and other issues. The electricity industry lacked an effective mechanism to disclose information about cybersecurity vulnerabilities, incidents, threats, lessons learned, and best practices in the industry. For example, our experts stated that while the electricity industry has an information sharing center, it did not fully address these information needs. In addition, President Obama’s May 2009 cyberspace policy review also identified challenges related to cybersecurity information sharing within the electric and other critical infrastructure sectors and issued recommendations to address them. According to our experts, information regarding incidents such as both unsuccessful and successful attacks must be shared in a safe and secure way that avoids publicly revealing the reporting organization or penalizing entities actively engaged in corrective action. Such information sharing across the industry could provide important information regarding the level of attempted cyber attacks and their methods, which could help grid operators better defend against them. If the industry pursued this end, it could draw upon the practices and approaches of other industries when designing an industry-led approach to cybersecurity information sharing. Without quality processes for information sharing, utilities will not have the information needed to adequately protect their assets against attackers. The electricity industry did not have metrics for evaluating cybersecurity.
The electricity industry was also challenged by a lack of cybersecurity metrics, making it difficult to measure the extent to which investments in cybersecurity improve the security of smart grid systems. Experts noted that while such metrics are difficult to develop, they could help compare the effectiveness of competing solutions and determine what mix of solutions combine to make the most secure system. Furthermore, our experts said that having metrics would help utilities develop a business case for cybersecurity by helping to show the return on a particular investment. Until such metrics are developed, there is increased risk that utilities will not invest in security in a cost-effective manner, or have the information needed to make informed decisions on their cybersecurity investments. To address these challenges, we made recommendations in our January 2011 report. To improve coordination among regulators and help Congress better assess the effectiveness of the voluntary smart grid standards process, we recommended that the Chairman of FERC develop an approach to coordinate with state regulators and with groups that represent utilities subject to less FERC and state regulation to (1) periodically evaluate the extent to which utilities and manufacturers are following voluntary interoperability and cybersecurity standards and (2) develop strategies for addressing any gaps in compliance with standards that are identified as a result of this evaluation. We also recommended that FERC, working with NERC as appropriate, assess whether commission efforts should address any of the cybersecurity challenges identified in our report. FERC agreed with these recommendations. Although FERC agreed with these recommendations, they have not yet been implemented. 
According to the FERC Chairman, given the continuing evolution of standards and the lack of sufficient consensus for regulatory adoption, commission staff believe that coordinated monitoring of compliance with standards would be premature at this time, and that this may change as new standards are developed and deployed in industry. We believe it is still important for FERC to improve coordination among regulators and to continue working toward consensus on standards. We will continue to monitor the status of its efforts to address these recommendations. In summary, the evolving and growing threat from cyber-based attacks highlights the importance of securing the electricity industry’s systems and networks. A successful attack could result in widespread power outages, significant monetary costs, damage to property, and loss of life. The roles of NERC and FERC remain critical in approving and disseminating cybersecurity guidance and enforcing standards, as appropriate. Moreover, more needs to be done to meet challenges facing the industry in enhancing security, particularly as the generation, transmission, and distribution of electricity comes to rely more on emerging and sophisticated technology. Chairman Bingaman, Ranking Member Murkowski, and Members of the Committee, this concludes my statement. I would be happy to answer any questions you may have at this time. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected] or David C. Trimble, Director, Natural Resources and Environment Team, at (202) 512-3841 or [email protected]. Other key contributors to this statement include Michael Gilmore, Anjalique Lawrence, and Jon R. Ludwigson (Assistant Directors), Paige Gilbreath, Barbarol James, Lee McCracken, and Dana Pon. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T.
Washington, D.C.: February 28, 2012. Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to Be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011. Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010. Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010. Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010. Cybersecurity: Continued Efforts Are Needed to Protect Information Systems from Evolving Threats. GAO-10-230T. Washington, D.C.: November 17, 2009. Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009. Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. 
GAO-09-432T. Washington, D.C.: March 10, 2009. Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations’ Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008. Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008. Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007. Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007. Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The electric power industry is increasingly incorporating information technology (IT) systems and networks into its existing infrastructure (e.g., electricity networks, including power lines and customer meters). This use of IT can provide many benefits, such as greater efficiency and lower costs to consumers. However, this increased reliance on IT systems and networks also exposes the grid to cybersecurity vulnerabilities, which can be exploited by attackers. Moreover, GAO has identified protecting systems supporting our nation’s critical infrastructure (which includes the electricity grid) as a governmentwide high-risk area. GAO was asked to testify on the status of actions to protect the electricity grid from cyber attacks.
Accordingly, this statement discusses (1) cyber threats facing cyber-reliant critical infrastructures, which include the electricity grid, and (2) actions taken and challenges remaining to secure the grid against cyber attacks. In preparing this statement, GAO relied on previously published work in this area and reviewed reports from other federal agencies, media reports, and other publicly available sources. The threats to systems supporting critical infrastructures are evolving and growing. In testimony, the Director of National Intelligence noted a dramatic increase in cyber activity targeting U.S. computers and systems, including a more than tripling of the volume of malicious software. Varying types of threats from numerous sources can adversely affect computers, software, networks, organizations, entire industries, and the Internet itself. These include both unintentional and intentional threats, and may come in the form of targeted or untargeted attacks from criminal groups, hackers, disgruntled employees, nations, or terrorists. The interconnectivity between information systems, the Internet, and other infrastructures can amplify the impact of these threats, potentially affecting the operations of critical infrastructures, the security of sensitive information, and the flow of commerce. Moreover, the electricity grid's reliance on IT systems and networks exposes it to potential and known cybersecurity vulnerabilities, which could be exploited by attackers. The potential impact of such attacks has been illustrated by a number of recently reported incidents and can include fraudulent activities, damage to electricity control systems, power outages, and failures in safety equipment.
To address such concerns, multiple entities have taken steps to help secure the electricity grid, including the North American Electric Reliability Corporation, the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission, and the Departments of Homeland Security and Energy. These include, in particular, establishing mandatory and voluntary cybersecurity standards and guidance for use by entities in the electricity industry. For example, the North American Electric Reliability Corporation and the Federal Energy Regulatory Commission, which have responsibility for regulation and oversight of part of the industry, have developed and approved mandatory cybersecurity standards and additional guidance. In addition, NIST has identified cybersecurity standards that support smart grid interoperability and has issued a cybersecurity guideline. The Departments of Homeland Security and Energy have also played roles in disseminating guidance on security practices and providing other assistance. As GAO previously reported, there were a number of ongoing challenges to securing electricity systems and networks, including the following:
- a lack of a coordinated approach to monitor industry compliance with voluntary standards;
- aspects of the current regulatory environment that made it difficult to ensure the cybersecurity of smart grid systems;
- a focus by utilities on regulatory compliance instead of comprehensive security;
- a lack of security features consistently built into smart grid systems;
- the lack of an effective mechanism in the electricity industry for sharing information on cybersecurity and other issues; and
- the lack of metrics in the electricity industry for evaluating cybersecurity.
In a prior report, GAO made recommendations related to electricity grid modernization efforts, including developing an approach to monitor compliance with voluntary standards. These recommendations have not yet been implemented.
The DRC is a vast, mineral-rich nation with an estimated population of about 75 million people and an area that is roughly one-quarter the size of the United States, according to the UN. The map in figure 1 shows the DRC's provinces and adjoining countries. Since its independence in 1960, the DRC has undergone political upheaval, including a civil war, according to State. In particular, eastern DRC has continued to be plagued by violence, often perpetrated against civilians by illegal armed groups and some members of the Congolese national military. In November 2012, M-23, an illegal armed group, occupied the city of Goma and other cities in eastern DRC and clashed with the Congolese national army. During this time, the UN reported numerous cases of sexual violence against civilians, including women and children, which were perpetrated by armed groups and some members of the Congolese national military. Some of the adjoining countries in the region have also experienced recent turmoil, which has led to flows of large numbers of refugees and internally displaced persons into the DRC. The United Nations High Commissioner for Refugees (UNHCR) estimated that as of mid-2013 there were close to 50,000 refugees from the Central African Republic, over 120,000 refugees from other countries, and around 2.6 million internally displaced persons living in camps or with host families in the DRC. Various industries, particularly manufacturing industries, use the four conflict minerals in a wide variety of products. For example, tin is used to solder metal pieces and is also found in food packaging, in steel coatings on automobile parts, and in some plastics. Most tantalum is used to manufacture tantalum capacitors, which enable energy storage in electronic products such as cell phones and computers, and to produce alloy additives, which can be found in turbines in jet engines.
Tungsten is used in automobile manufacturing, drill bits and cutting tools, and other industrial manufacturing tools and is the primary component of filaments in light bulbs. Gold is used as reserves and in jewelry and is used by the electronics industry. As we reported in 2013, conflict minerals are mined in various locations around the world. For example, tin is predominantly mined in China, Indonesia, Peru, and Bolivia, as well as in the DRC, while tantalum is reportedly predominantly mined in areas such as Australia, Brazil, and Canada. Gold, however, is mined in many different countries, including the DRC. Congress has focused on issues related to the DRC for almost a decade. In 2006, Congress passed the Democratic Republic of Congo Relief, Security, and Democracy Promotion Act of 2006, stating that U.S. policy is to engage with governments working for peace and security throughout the DRC and hold accountable any individuals, entities, and countries working to destabilize the country. In July 2010, Congress passed the Dodd-Frank Act, of which Section 1502 included several provisions concerning conflict minerals in the DRC and adjoining countries. The act directs State, USAID, SEC, and Commerce to take steps on matters related to the implementation of those provisions (see text box).
Dodd-Frank Act Provisions Concerning Conflict Minerals in the Democratic Republic of the Congo and Adjoining Countries Section 1502(a) states that “it is the sense of the Congress that the exploitation and trade of conflict minerals originating in the Democratic Republic of the Congo is helping to finance conflict characterized by extreme levels of violence in the eastern Democratic Republic of the Congo, particularly sexual- and gender-based violence, and contributing to an emergency humanitarian situation therein, warranting the provisions of section 13(p) of the Securities Exchange Act of 1934, as added by subsection (b).” Section 1502(b) requires the Securities and Exchange Commission (SEC), in consultation with the Department of State (State), to promulgate disclosure and reporting regulations regarding the use of conflict minerals from the DRC and adjoining countries. Section 1502(c) requires State and the U.S. Agency for International Development (USAID) to develop, among other things, a strategy to address the linkages among human rights abuses, armed groups, the mining of conflict minerals, and commercial products. Section 1502(d) requires that the Department of Commerce report, among other things, a listing of all known conflict minerals-processing facilities (smelters and refiners) worldwide. As we have previously reported, SEC, State, USAID, and Commerce have each taken steps to address the provisions of the act. SEC issued its conflict minerals disclosure rule in August 2012 in response to Section 1502(b) of the Dodd-Frank Act, which required that SEC promulgate disclosure and reporting regulations regarding the use of conflict minerals from the DRC and adjoining countries by April 2011.
In its summary of the rule, SEC noted that to accomplish the goal of helping to end the human rights abuses in the DRC caused by the conflict, Congress chose to use the securities laws disclosure requirements to bring greater public awareness of the source of companies’ conflict minerals and to promote the exercise of due diligence on conflict mineral supply chains. In the SEC adopting release, SEC noted that it understood Congress’s main purpose in doing so was to attempt to inhibit the ability of armed groups in the DRC and adjoining countries to fund their activities by exploiting the trade in conflict minerals. According to SEC, Congress’s objective was to promote peace and security, and reducing the use of such conflict minerals was intended to help reduce funding for the armed groups contributing to the conflict and thereby put pressure on such groups to end the conflict. SEC also indicated that one of the cosponsors of the provision noted that the provision would “enhance transparency” and “also help American consumers and investors make more informed decisions.” Companies are required to file a Specialized Disclosure report (Form SD) if they manufacture or contract to manufacture products that contain conflict minerals necessary to the functionality or production of the products and, as applicable, file a Conflict Minerals Report. The form provides general instructions to companies for filing the conflict minerals disclosure and specifies the information that their Conflict Minerals Reports must include. Companies were required to file under the rule for the first time by June 2, 2014, and annually thereafter on May 31. In 2011, State and USAID developed the U.S. Strategy to Address the Linkages between Human Rights Abuses, Armed Groups, Mining of Conflict Minerals and Commercial Products. The strategy includes five objectives: 1. Promoting an Appropriate Role for Security Forces. U.S. 
efforts under this objective aim to end the commercial role of the DRC security forces in the minerals trade and to make the security forces more effective within their appropriate, limited role in monitoring and securing trade. 2. Enhance Civilian Regulation of the DRC Minerals Trade. U.S. efforts under this objective will aim to increase the capacity of DRC civilian authorities involved in overseeing the minerals trade, particularly in the east. 3. Protect Artisanal Miners and Local Communities. U.S. efforts under this objective will aim to reduce the vulnerability of men and women in local communities directly and indirectly engaged in the mining sector. 4. Strengthen Regional and International Efforts. U.S. efforts under this objective aim to support the implementation and coordination of national, regional, and international efforts to promote monitoring, certification, and traceability—particularly the Great Lakes regional initiative—as well as the harmonization of due diligence guidance developed in various forums. 5. Promote Due Diligence and Responsible Trade through Public Outreach. U.S. efforts will aim, through public outreach, to encourage all stakeholders to take steps at the local, regional, and international level to promote the responsible trade in minerals. Following our June 2014 report, Commerce issued a list of all known conflict minerals-processing facilities worldwide in September 2014. We reported in June 2014 that Commerce had not yet fulfilled its mandate under the act to report, among other things, a list of all known conflict minerals-processing facilities worldwide to appropriate congressional committees. We also recommended that the Secretary of Commerce provide to Congress a plan that outlines the steps, with associated time frames, to develop and report the required information about smelters and refiners of conflict minerals worldwide. 
As of July 2015, GAO was reviewing Commerce’s related actions to assess its progress toward implementing the recommendation. Companies filed for the first time in response to the SEC rule in 2014 on conflict minerals used in calendar year 2013. As we previously reported, SEC adopted the final conflict minerals disclosure rule on August 22, 2012. As adopted, the final rule applies to any company that files reports with SEC under Section 13(a) or Section 15(d) of the Securities Exchange Act of 1934 and uses conflict minerals that are necessary to the functionality or production of a product manufactured or contracted by that company to be manufactured. The SEC conflict minerals disclosure rule details a process for companies to follow, as applicable, to comply with the rule. Broadly, the process falls into three steps that require a company to (1) determine whether the rule applies to it; (2) conduct a reasonable country of origin inquiry concerning the origin of conflict minerals used; and (3) exercise due diligence, if appropriate, to determine the source and chain of custody of conflict minerals used. Figure 2 depicts SEC’s flowchart summary of the conflict minerals disclosure rule. Step 1: Determine Applicability of the Conflict Minerals Disclosure Rule A company is subject to the rule if, as step 1 of figure 2 indicates, the company files reports with SEC under Section 13(a) or 15(d) of the Exchange Act and conflict minerals are necessary to the functionality or production of a product manufactured or contracted by that company to be manufactured. If a company does not meet this definition, it is not required, under the conflict minerals rule, to take any action, make any disclosures, or submit any reports. If, however, a company meets this definition, the company moves to step 2 of figure 2 (discussed later) and must file a Form SD. 
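As a rough illustration only, the three-step process summarized above (and depicted in figure 2) can be sketched as a simple decision flow. This is a hypothetical simplification, not the rule's actual legal tests: the `Company` fields, determination labels, and step descriptions below are illustrative assumptions.

```python
# Illustrative sketch (not legal guidance) of the SEC conflict minerals
# disclosure rule's three broad steps as described in the report.
from dataclasses import dataclass

@dataclass
class Company:
    files_with_sec: bool      # files reports under Exchange Act Sec. 13(a)/15(d)
    minerals_necessary: bool  # conflict minerals necessary to a product it
                              # manufactures or contracts to manufacture
    rcoi_result: str          # hypothetical label for the RCOI outcome

def disclosure_steps(c: Company) -> list[str]:
    steps = []
    # Step 1: determine whether the rule applies at all.
    if not (c.files_with_sec and c.minerals_necessary):
        return ["not subject to rule: no Form SD required"]
    steps.append("file Form SD")
    # Step 2: reasonable country of origin inquiry (RCOI).
    if c.rcoi_result in ("not from Covered Countries", "recycled or scrap"):
        steps.append("disclose RCOI determination and results in Form SD")
    else:
        # Step 3: due diligence under a recognized framework (e.g., OECD),
        # plus a Conflict Minerals Report as an exhibit to the Form SD.
        steps.append("exercise due diligence on source and chain of custody")
        steps.append("file Conflict Minerals Report as exhibit to Form SD")
    return steps

print(disclosure_steps(Company(True, True, "may be from Covered Countries")))
```

The sketch mirrors the branching in figure 2: a company exits at step 1 if the rule does not apply, stops after step 2 if the RCOI resolves the origin question, and otherwise proceeds to due diligence.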
The number of companies that filed Form SDs in 2014—1,321—was substantially lower than SEC’s estimate of 6,000 companies that could possibly be affected by the rule. In its rule proposal, SEC had estimated that approximately 6,000 companies could possibly be affected by the rule by estimating the number and types of businesses that SEC staff believed may manufacture or contract to manufacture products with conflict minerals necessary to the functionality or production of those products. According to an SEC official, this estimate was intentionally overly inclusive, was not an expectation, and was provided to satisfy the requirements of the Paperwork Reduction Act. Our analysis of a sample of 2014 Form SD filings indicated that an estimated 87 percent of the companies that filed were domestic, while an estimated 13 percent were foreign. Also, while not all of the companies in our sample identified which conflict minerals they used in calendar year 2013, as there was no requirement in the rule to do so, of those companies that did, about 58 percent reported using tin, 43 percent reported using tantalum, 39 percent reported using tungsten, and 44 percent reported using gold (see fig. 3). The sample of filings in 2014 that we reviewed indicates that 99 percent of companies conducted a country-of-origin inquiry and most companies reported that they were unable to determine the country of origin of conflict minerals they had used in 2013. Company representatives we interviewed cited difficulties in obtaining information from suppliers. If a company determines that it is subject to the SEC conflict minerals disclosure rule, the company is required to conduct a reasonable country of origin inquiry regarding the origin of conflict minerals it used and disclose its determination in a Form SD (illustrated by step 2.1 of fig. 2). The rule does not prescribe the specific actions that are required for an RCOI, noting that it will depend on each company’s facts and circumstances. 
However, the rule provides general standards: A company must conduct an inquiry regarding the origin of conflict minerals it used that is reasonably designed to determine whether any of those conflict minerals originated in the Covered Countries or are from recycled or scrap sources, and must conduct the inquiry in good faith. The rule recognizes that a company, after conducting an RCOI, may not know whether conflict minerals it used originated from a Covered Country. For example, the rule explains that step 3 can be triggered by the determination that the company has a reason to believe that its necessary conflict minerals may have originated in the Covered Countries and may not have come from recycled or scrap sources. According to our analysis of all companies in our sample that filed a Form SD in 2014, an estimated
- 67 percent reported that they were unable to determine the country of origin;
- 4 percent reported that conflict minerals came from Covered Countries;
- 24 percent reported that conflict minerals did not originate in Covered Countries;
- 2 percent reported that conflict minerals came from scrap or recycled sources; and
- 3 percent did not provide a clear determination.
In our analysis of a sample of filings, an estimated 99 percent of companies that filed a Form SD reported conducting an RCOI. Almost all (96 percent) of the companies reported that they conducted a survey of their suppliers to try to obtain information about whether they used conflict minerals, the country of origin of those conflict minerals, and the processor of the conflict minerals. We did not systematically analyze how companies conducted their supplier surveys, but a few of the Form SDs that we reviewed and some company representatives we spoke with indicated that they used a supplier survey and industry template to conduct their RCOIs.
For example, one company reported that its method for determining the country of origin of its minerals was to conduct a supply chain survey with direct suppliers using a template developed by an industry association. Another company reported contacting suppliers and using an industry survey template, the Conflict Minerals Reporting Template, from the Conflict-Free Sourcing Initiative, an industry association. In our analysis, an estimated 47 percent of companies reported that they received responses from the suppliers they surveyed. Nineteen companies in our sample reported that they had 100 percent response rates from their suppliers, but 12 of them were unable to determine the source of the conflict minerals. Four reported that they were able to determine that conflict minerals they used did not come from Covered Countries, while 1 was able to determine that conflict minerals it used came from Covered Countries. Representatives of some companies that we spoke with told us that they received information from suppliers that was incomplete, limiting their ability to determine the source and chain of custody of the conflict minerals they used in 2013. As we reported in July 2013, a company's supply chain can involve multiple tiers of suppliers. As a result, a request for information from a company could go through many suppliers, as figure 4 illustrates, delaying the communication of information to the company. For example, as we previously reported, companies required to report under the rule could submit the inquiries to their first-tier suppliers. Those suppliers could either provide the reporting company with sufficient information or initiate the inquiry process up the supply chain, such as by distributing the inquiries to suppliers at the next tier—tier 2 suppliers. The tier 2 suppliers could inquire up the supply chain to additional suppliers, until the inquiries arrived at the smelter.
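This tiered flow of inquiries can be sketched as a small recursion in which the origin question travels upstream from supplier to supplier until a smelter can answer it. The supplier names, chain structure, and the "undeterminable" answer below are purely hypothetical illustrations of the pattern, not data from any filing.

```python
# Hypothetical sketch of an origin inquiry propagating through supplier
# tiers, as described in the report's discussion of figure 4.
def inquire(supplier, chain):
    """Recursively forward the origin inquiry up the supply chain."""
    node = chain[supplier]
    if node["smelter"]:
        return node["origin"]  # the smelter answers with origin information
    # A non-smelter supplier passes the inquiry to its upstream suppliers.
    origins = {inquire(upstream, chain) for upstream in node["upstream"]}
    return origins if len(origins) > 1 else origins.pop()

supply_chain = {
    "reporting company": {"smelter": False, "upstream": ["tier 1 supplier"]},
    "tier 1 supplier":   {"smelter": False, "upstream": ["tier 2 supplier"]},
    "tier 2 supplier":   {"smelter": False, "upstream": ["smelter A"]},
    "smelter A":         {"smelter": True,  "origin": "undeterminable"},
}
print(inquire("reporting company", supply_chain))
```

Each added tier is another hop the answer must make on its way back down, which is one way to picture the delays and information loss the report describes.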
Smelters could then provide the suppliers with information about the origin of the conflict minerals. Figure 4 illustrates the flow of information through the supply chain. Representatives of some companies that we spoke with told us that they were making efforts to address concerns about the lack of information on the country of origin of conflict minerals they had used. For example, one representative told us that in the future the company plans to include in all new and renewing contracts a conflict minerals clause that will require its suppliers to source only conflict-free minerals, leveraging continuing business negotiations with its suppliers and adding downward pressure for suppliers to source responsibly from the region. Another company’s representative told us that the company would alter its outreach strategies to contact suppliers sooner and more frequently during the reporting process. The rule requires a company that determines that, based on its RCOI, conflict minerals it used (1) did not originate in the Covered Countries or (2) came from recycled or scrap sources, to disclose in its Form SD its determination and briefly describe the RCOI it used in reaching the determination and the results of the inquiry (illustrated in step 2.3 of fig. 2). As indicated above, an estimated 26 percent of all companies that filed a Form SD reported in these two categories. While we did not individually analyze each company’s description of method of RCOI in this group, as mentioned earlier, an estimated 96 percent of the companies in our sample that conducted an RCOI reported using supplier surveys. Step 3: Exercise Due Diligence on the Source and Chain of Custody of Conflict Minerals Using a Nationally or Internationally Recognized Framework, if Available. 
According to our analysis, the exercise of due diligence on the source and chain of custody of conflict minerals yielded little or no additional information, beyond the RCOI, regarding the country of origin of conflict minerals or whether the conflict minerals used in 2013 in products by companies benefited or financed armed groups in the Covered Countries. According to the SEC rule, based on a company’s RCOI, if a company knows that conflict minerals it used originated in the Covered Countries or has reason to believe that they may have originated in the Covered Countries and may not have come from recycled or scrap sources, the next step is to exercise due diligence using a nationally or internationally recognized due diligence framework, if such a framework is available for the necessary conflict minerals (step 3.1). Majority of companies exercised due diligence and most maintained the determination that they could not identify origin of conflict minerals. According to our analysis, about 92 percent of the companies mentioned the OECD framework in connection with their due diligence, while another 2 percent mentioned a framework other than the OECD framework. Notwithstanding, the results remained as indicated above in the discussion of RCOI. That is, an estimated 67 percent of all companies declared that they were unable to determine the country of origin of the conflict minerals in their products. Companies unable to determine if conflict minerals benefited or financed armed groups in Covered Countries. As we indicated in the discussion of RCOI, an estimated 4 percent of the companies determined that the necessary conflict minerals used in their products originated from Covered Countries. 
However, according to our analysis, all of these companies disclosed that they conducted due diligence on the source and chain of custody of conflict minerals they used but none were able to determine whether such conflict minerals financed or benefitted armed groups during the reporting period (step 3.3 of fig. 2). SEC rule provides a temporary period during which companies can describe their products as “DRC conflict undeterminable.” The SEC rule allows a temporary period during which, if after exercising due diligence for source and chain of custody of conflict minerals used in their products, companies remain unable to determine the origin of conflict minerals used and whether those minerals financed or benefitted armed groups, those companies can describe their products as DRC conflict undeterminable (step 3.5 of fig. 2) in their Conflict Minerals Report (CMR). The temporary period is in place for 2 years for all companies and 4 years for smaller reporting companies following the effective date of the rule. However, due to continuing litigation in a legal challenge to the conflict minerals rule, SEC staff has issued guidance stating, among other things, that companies are not required to use this label. See appendix II for additional information on this issue. Figure 5 depicts the SEC conflict minerals disclosure rule timeline. As we previously reported, SEC staff had indicated that they anticipated that most companies during the first year of filing would likely claim “DRC conflict undeterminable” in their disclosures. According to SEC staff, of the 1,321 companies that filed distinct Form SDs in 2014, 89 percent were larger companies, while smaller companies made up 11 percent of all companies that filed a Form SD. According to SEC, the temporary period recognizes that company processes for tracing conflict minerals through the supply chain must develop further. 
After the temporary period described above, if in exercising due diligence the company was not able to determine whether the conflict minerals came from Covered Countries or financed or benefited armed groups in those countries, it must include in its CMR a description of those products as not having been found to be DRC conflict free. Majority of companies should have filed conflict minerals report as an exhibit to their Form SDs. The rule requires a company that exercised due diligence on the source and chain of custody of conflict minerals it used to file a Conflict Minerals Report as an exhibit to its Form SD (step 3.3), when appropriate. According to our analysis, at least an estimated 71 percent of companies should have filed a CMR as an exhibit to their Form SDs. The CMR must also include an Independent Private Sector Audit (IPSA) report. According to the SEC disclosure rule, the audit’s objective is for the auditor to express an opinion or conclusion as to whether the design of the company’s due diligence measures as set forth in the CMR, with respect to the period covered by the report, is in conformity with, in all material respects, the criteria set forth in the nationally or internationally recognized due diligence framework used by the company, and whether the company’s description of the due diligence measures it performed as set forth in the CMR, with respect to the period covered by the report, is consistent with the due diligence process that the company undertook. Under the rule, for products that have not been found to be DRC Conflict Free, the companies are required to disclose, for those products, the facilities used to produce the conflict minerals, the country of origin of the minerals, and the efforts to determine the mine or location of origin. Companies that disclosed that conflict minerals came from covered countries indicated they are or will be taking action. 
Examples include the following:
- notify suppliers that the company intends to cease doing business with suppliers who continue to source conflict minerals from smelters that are not certified as conflict-free;
- include a conflict minerals clause in new or renewed contracts requiring suppliers to provide conflict minerals information on a prospective basis and identify alternative sources of conflict minerals if suppliers are found to be providing the company with minerals that support conflict in the Covered Countries;
- increase the number of surveyed suppliers, reach out earlier in the year, and direct suppliers to information and training resources;
- participate in the Conflict-Free Sourcing Initiative, an industry association effort, to define best practices and induce smelters and refiners to adopt socially responsible business practices; and
- address, as appropriate, complaints or concerns expressed through grievance mechanisms.
State and USAID officials reported that they are implementing the U.S. conflict minerals strategy (the strategy) they submitted to Congress in 2011 through specific actions that address the five key objectives of the strategy. Both State and USAID officials in Washington and the region reiterated that the strategy and its five key objectives remain relevant although years have passed since the strategy was developed. In November 2014, we requested a consolidated list of the actions the agencies are taking under the strategy's objectives. The information we received included actions by each agency, or its implementing partners, and status or results, where applicable, as shown in tables 1 through 5. State and USAID reported that some activities are associated with multiple objectives.
As we previously reported, some members of the security forces in the DRC, such as some members of the Congolese national military units, are consistently and directly involved in human rights abuses against the civilian population in eastern DRC and are involved in the exploitation of conflict minerals and other trades. Some of the reported actions being undertaken by the International Organization for Migration (IOM), a USAID implementing partner, are helping to lessen the involvement of the military and increasing the role of legitimate DRC government stakeholders in mining areas (see table 1). For example, USAID reported that IOM has assisted with the planning and demilitarization of mine sites in eastern DRC through leading a multi-sector stakeholder process of mine validation to ensure that armed groups and criminal elements of the Congolese military are not active in eastern DRC mines. Official Congolese agencies tasked with regulating the minerals trade have responsibilities that include collecting production and export figures. However, as we reported in 2010, U.S. and DRC officials, a foreign official, and industry representatives told us that their ability to carry out their duties is reportedly impeded by various factors such as weak capacity, volatile security, and poor infrastructure, among other things. USAID reported that it is undertaking a number of actions, through implementing partners, to enhance civilian regulation and traceability of the DRC minerals trade. Traceability mechanisms may minimize the risk that minerals that have been exploited by illegal armed groups will enter the supply chain and may also support companies’ efforts to identify the source of the conflict minerals across the supply chain around the world. 
Such initiatives in the Democratic Republic of the Congo and adjoining countries focus on tracing minerals from the mine to the mineral smelter or refiner by supporting a bagging and tagging program or some type of traceability scheme. For example, USAID reported funding TetraTech, a technical services company, to (1) build the capacity for responsible minerals trade in the DRC, (2) strengthen the capacity of key actors in the conflict minerals supply chain, and (3) advance artisanal and mining rights. In addition, USAID indicated that it is funding IOM to support DRC infrastructure and regulatory reform. According to an IOM official we spoke with in the region, IOM also provides the DRC government with information on which mines should be suspended from the conflict-free supply chain based on safety and human rights violations. A USAID official and representatives of local human rights organizations we met with during our visit to North Kivu also told us that the implementation of traceability schemes is contributing to positive outcomes. For example, in some cases, according to USAID, local miners earn double the price for certified conflict-free minerals compared to non- certified illegal minerals, which is more than they would earn from smuggling. Table 2 shows actions USAID has taken to enhance civilian regulation of the DRC minerals trade. According to USAID, artisanal mining provides survival incomes to Congolese throughout the country but it is particularly significant in eastern DRC, where roughly 500,000 people directly depend on artisanal mining for their income. These miners work under very difficult safety, health, and security conditions and almost always within an illegal and illicit environment. Moreover, as we observed during our visits to the mines in the region, artisanal mining is a physically demanding activity requiring the use of rudimentary techniques and little or no industrial capacity (see figs. 
8 and 9 for illustrations of artisanal miners at work). State and USAID reported several programs (shown in table 3), implemented through their partners, aimed at protecting artisanal miners and local communities and providing alternative livelihoods. For example, State reported that it funded an implementing partner, Heartland Alliance International (HAI), a service-based human rights organization, for anti-human-trafficking initiatives as well as to promote alternative livelihoods and improve workers' rights in the artisanal mining sector. According to State, these efforts aimed to reduce the vulnerability of men and women in local communities. State officials reported some illustrative examples of success in promoting alternative livelihoods. For example, a woman who used to transport minerals, a physically demanding, low-paying job, attended one of HAI's alternative livelihood trainings, where she received a kit to sell fish. Today, she makes a better living from selling fish and is able to pay her children's school fees without working in the mining sector, according to State officials. In addition, USAID has funded Pact, an implementing partner, to promote community conflict-mitigation and conflict minerals monitoring structures at local levels. State also supported Pact to build local capacity for monitoring security and human rights conditions and mineral traceability and to provide local artisanal mining communities with resources to monitor, record, and report on initiatives and human rights abuses. State indicated that its actions through Pact also support enhancing civilian regulation of the minerals sector and strengthening regional and international cooperation, other objectives of the strategy. In our July 2012 report, we provided a description of regional and global initiatives being undertaken by various stakeholders that may facilitate responsible sourcing of conflict minerals in the DRC region.
These included, among others, efforts by the UN and the International Conference on the Great Lakes Region (ICGLR). Objective 4 of the U.S. conflict minerals strategy calls for actions to strengthen regional and international efforts. USAID reported that it is working with TetraTech to achieve this goal. Specifically, USAID said it is working with TetraTech to build the capacity of the ICGLR, an intergovernmental organization. According to USAID, this effort supports the implementation and coordination of regional countries' efforts to promote monitoring, certification, and traceability of mine sites. A TetraTech representative we met with in the region told us that TetraTech is also organizing workshops for educating and raising awareness about regional certification in ICGLR countries. According to officials we interviewed from the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO) and the ICGLR, as well as local officials, U.S. diplomacy has increased awareness of and improved coordination on conflict minerals in the region; some of these officials described this diplomacy as the most effective of the State and USAID actions on conflict minerals in the region. State and USAID reported engaging in various efforts to reach out to industry associations, NGOs, international organizations, and regional entities to help promote due diligence and responsible trade in conflict minerals. For example, State and USAID reported that they leveraged private sector interest to establish the Public-Private Alliance for Responsible Minerals Trade (PPA) to support supply chain solutions to conflict minerals challenges in the region. The alliance includes State, USAID, and representatives from U.S. end-user companies, industry associations, NGOs, and the ICGLR, among others. In addition, State is engaged with the Conflict-Free Sourcing Initiative (CFSI), and State and USAID both participate in the biannual forums of the OECD, the UN Group of Experts (UNGOE), and the ICGLR.
According to State and USAID officials, these efforts promote continued engagement with industry officials and civil society groups and encourage due diligence and strengthening of conflict-free supply chains. At a conference in Kinshasa, DRC, co-hosted by the OECD, UN Group of Experts, and the ICGLR in November 2014, we observed State and USAID officials outline their actions, outcomes, and next steps to conference participants. A USAID official in the region told us that teams of private sector executives, which State and USAID officials in the DRC and Rwanda helped to organize and host, have visited eastern DRC and Rwanda mining sites on several occasions, reinforcing the executives' commitment to source minerals responsibly. According to the USAID official, these efforts have resulted in a reduction in predatory taxes, contributions by exporters to social development, and increased focus on certification and traceability systems. Noting that visits to the DRC and some locations in Rwanda had been coordinated and led by State and USAID staff, a State official added that some private companies had made independent contributions to community projects and other companies had been active in providing feedback on certification and traceability mechanisms. Although State and USAID officials have provided some examples of results associated with their actions, the agencies face difficult operating conditions that complicate efforts to address the connection between human rights abuses, armed groups, and the mining of conflict minerals. We have described some of these challenges in our previous reports but, as we observed during our recent visit to the region, numerous challenges continue to exist. First, the mining areas in eastern DRC continue to be plagued by insecurity because of the presence and activities of illegal armed groups and some corrupt members of the national military.
In 2010, we reported extensively on the presence of illegal armed groups, such as the Democratic Forces for the Liberation of Rwanda or Forces Democratiques de Liberation du Rwanda (FDLR), and some members of the Congolese military and the various ways in which they were involved in the exploitation of the conflict minerals sector in eastern DRC. In 2013, the Peace and Security Cooperation Framework signed by 11 regional countries noted that eastern DRC has continued to suffer from recurring cycles of conflict and persistent violence. Although U.S. agency and Congolese officials informed us during our recent fieldwork in the region that a large number of mines had become free of armed groups (referred to as green mines), MONUSCO officials we met with in the DRC also told us that armed groups and some members of the Congolese military were still active in other mining areas. Specifically, MONUSCO officials described two fundamental ways in which armed groups continued to be involved in conflict minerals activities: directly, by threatening and perpetrating violence against miners to confiscate minerals from them; and indirectly, by setting up checkpoints on trade routes to illegally tax miners and traders. As we noted in our 2010 report, U.S. agency and UN officials and others believe that the minerals trade in the DRC cannot be effectively monitored, regulated, or controlled as long as armed groups and some members of the Congolese national military continue to commit human rights violations and exploit the local population at will. As we reported in 2010, U.S. government officials and others have indicated that weak governance and lack of state authority in eastern DRC constitute a significant challenge. As we noted then, according to UN officials, if Congolese military units are withdrawn from mine sites, civilian DRC officials will need to monitor, regulate, and control the minerals trade.
We also noted that effective oversight of the minerals sector would not occur if civilian officials in eastern DRC continued to be underpaid or not paid at all, as such conditions invite corruption and leave officials without the skills needed to perform their duties. Evidence shows that this situation has not changed much. U.S. agencies and an implementing partner, as well as some Congolese officials, told us that there are too few trained civilians to effectively monitor and take control of the mining sector. ICGLR officials we met with highlighted the importance of a regional approach to addressing conflict minerals and indicated that governments' capacity for and interest in participating in regional certification schemes vary substantially, making it difficult to implement credible, common standards. Corruption continues to be a challenge in the mining sector. For example, a member of the UN Group of Experts told us that smuggling remains prolific and that instances of fraud call into question the integrity of traceability mechanisms. This official stated that tags used to certify minerals as conflict free are easily obtained and sometimes sold illegally on the black market. According to USAID, it is working to introduce a pilot traceability system to increase transparency, accountability, and competition in the legal artisanal mining sector. According to U.S. government officials and officials from local government and civil society in the region that we met with, lack of state authority bolsters armed group activity and precludes public trust in the government. Poor infrastructure, including poorly maintained or nonexistent roads, makes it difficult for mining police and other authorities to travel in the region and monitor mines for illegal armed group activity.
In our 2010 report, we reported that the minerals trade cannot be effectively monitored, regulated, and controlled unless civilian DRC officials, representatives from international organizations, and others can readily access mining sites to check on the enforcement of laws and regulations and to ensure visibility and transparency at the sites. During our recent visit to the region, poor road conditions made travel to the mines very challenging (see fig. 10). In addition, U.S. agencies cited the overall lack of an investor-friendly environment as a challenge, including a poor investment climate, arbitrary and excessive taxation, predatory government monitoring and enforcement, and the scarcity of basic services such as water and electricity. These conditions make it difficult for mining companies to conduct business, impede economic development, and make it harder for U.S. agencies and contractors to conduct oversight and provide services. Also, a mine owner in eastern DRC that we met with cited a range of challenges to conducting business in the region, including lack of access to financing, poor security, and inadequate infrastructure. Since we last reported, in June 2014, results from three new population-based surveys related to sexual violence in eastern DRC have been published, one of which provides a basis for comparison with results of an earlier survey of sexual violence in the DRC. The Dodd-Frank Act mandated GAO to report annually, beginning in 2011, on the rate of sexual violence in war-torn areas of the DRC and adjoining countries. No new population-based surveys related to sexual violence in Uganda, Rwanda, or Burundi have been published since our last report, but some surveys in these countries are underway or being planned. Some additional case file data on sexual violence are available for some of these countries.
However, as we reported in 2011, case file data on sexual violence are not suitable for estimating an overall rate of sexual violence. We identified three new population-based surveys related to sexual violence in eastern DRC that have been published since June 2014: (1) a Demographic and Health Survey (DHS) conducted by the DRC Ministry of Planning with technical assistance from ICF International, (2) a USAID-funded survey conducted by a U.S.-based monitoring and evaluation firm (Social Impact), and (3) a survey co-produced by the Harvard Humanitarian Initiative, a university-based research center, and the United Nations Development Program (UNDP). For the purposes of this report, we define eastern DRC as encompassing South Kivu, North Kivu, and the Ituri District of Orientale Province. Population-based surveys for Burundi and Rwanda are underway or planned by ICF International, which does not currently have plans to conduct another DHS for Uganda. We previously reported that population-based surveys are more appropriate for estimating the rate of sexual violence than case file data because population-based surveys are conducted using the techniques of random sampling and their results are generalizable. However, there are limitations and challenges to using such surveys to gather data on sexual violence and estimate the rate of such violence, particularly in eastern DRC. Examples of limitations include undercoverage caused by poor infrastructure and insecurity, which can limit access to some areas, and underreporting, as survey response rates partly depend on whether sexual violence victims are willing to discuss such difficult experiences. In addition, if large sample sizes are required, the result can be higher survey costs. The 2013-14 DHS (Ministry of Planning and Implementation of the Modern Revolution (MPSMRM), Ministry of Public Health (MSP), and ICF International, Democratic Republic of Congo Demographic and Health Survey 2013-14, Rockville, MD, 2014) was designed to provide data for monitoring the population and health situation in the DRC, using indicators such as fertility, sexual activity, and family planning, among other things. The DRC Ministry of Planning conducted the survey with the support of the DRC Ministry of Public Health as well as several foreign governments and international and nongovernmental organizations, including USAID, various UN agencies, and the World Bank. The 2014 DHS survey of the DRC is the second such survey conducted by the DRC's Ministry of Planning that has yielded nationwide information on sexual violence in the DRC, so it provides a basis for comparison over time. The first DHS survey was published in August 2008 and was conducted from January 2007 to August 2007. According to ICF International's analysis of the 2007 survey data, 28 percent of women nationwide, ages 18-49, reported having experienced sexual violence in the 12-month period preceding the survey, while 36 percent of women nationwide reported they had experienced sexual violence at some point in their lifetime. According to an analysis by ICF International, which compared estimates from the 2007 and 2013-2014 survey data, nationwide estimates of sexual violence decreased from 2007 to 2013-2014 for both the 12-month period preceding the surveys (28 percent to 17 percent) and the lifetime figures (36 percent to 29 percent). ICF International determined these differences to be statistically significant. For North Kivu, the rate of sexual violence reported by women in the 1-year period preceding the 2007 and 2013-2014 surveys decreased from 30 percent in 2007 to 14 percent in 2013-2014, which is also statistically significant, according to ICF International. For South Kivu, ICF International found no statistically significant difference in the rates of sexual violence reported by women in the 12-month period preceding the survey and at any point in their lives between the 2007 and 2013-2014 surveys.
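The kind of significance determination ICF International made can be illustrated with a simple two-proportion z-test on the change in reported rates, for example the 12-month nationwide figure falling from 28 percent to 17 percent. This is only a sketch: the sample sizes below are hypothetical, and the actual DHS analysis accounts for the complex, clustered, and weighted survey design, which a naive z-test like this ignores.

```python
from math import sqrt, erf

# Illustrative two-proportion z-test. The sample sizes (n1, n2) are
# hypothetical assumptions, not the actual DHS sample sizes.

def two_prop_z(p1, n1, p2, n2):
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 28% in 2007 vs. 17% in 2013-14, with hypothetical samples of 9,000 women.
z, p = two_prop_z(0.28, 9000, 0.17, 9000)
print(z > 1.96, p < 0.05)  # significant at the 95 percent level
```

With samples of this size, a drop from 28 to 17 percent is far outside sampling noise, which is consistent with ICF International's finding of statistical significance, although its survey-design-aware method differs from this simplification.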
See table 6 for a summary of the comparisons. A population-based survey, funded by USAID and conducted by the U.S.-based organization Social Impact, on human-trafficked individuals, ages 15 and older, in artisanal mining towns in South Kivu and North Katanga, found that 7.1 percent of women and 1.2 percent of men had experienced sexual violence in the previous year at the mining sites. The survey, published in August 2014 and covering data collected from April 2014 to May 2014, also found that the most common perpetrators of this violence were friends or acquaintances (identified as responsible for about one-half of the attacks) and miners (identified as responsible for about one-fifth of the attacks). The survey is intended to measure sexual violence in areas that have artisanal mines and included a sample of territories within eastern DRC and outside of eastern DRC. Because of this, survey results may not be generalizable to the population of eastern DRC. A 2014 survey, co-produced by the Harvard Humanitarian Initiative and UNDP, of adult men and women, ages 18 and older, in the eastern DRC provinces of North and South Kivu and the Ituri District, found that 23 percent of respondents had witnessed sexual violence being committed by armed groups on civilians since 2002. The survey also found that a total of 9 percent of respondents had witnessed sexual violence being committed by armed groups on civilians over the 12-month period prior to the survey. The survey was conducted between November 2013 and December 2013. According to the survey report, the survey was conducted to assess the population of eastern Congo's perceptions, knowledge, and attitudes about peace, security, and justice, and aimed at providing results that were representative of the adult population of territories and major urban areas in eastern Congo.
The interviewers asked respondents if they had witnessed sexual violence being perpetrated by armed groups on civilians but did not explicitly ask whether the respondents had ever experienced sexual violence; therefore, we found that the data cannot be used to estimate the rate of sexual violence in eastern DRC. Although no new population-based surveys related to sexual violence in Uganda, Rwanda, and Burundi have been published since June 2014, population-based surveys are underway or planned by ICF International in Rwanda and Burundi. (Fig. 11 shows a timeline of population-based surveys for the DRC, Rwanda, Uganda, and Burundi since 2007.) According to ICF International, data collection for a DHS for Rwanda is complete, and it expects to publish the survey results by the end of 2015. ICF International said it has planned a DHS for Burundi in late 2015 and expects the report to be available in late 2016. ICF International said that discussions are underway to conduct another DHS for Uganda in 2016, but it may or may not include questions related to sexual violence. Since GAO's June 2014 report, State and some UN agencies have provided additional case file data on instances of sexual violence in the DRC and adjoining countries. State's annual country reports on human rights practices provided information pertaining to sexual violence in the following countries: DRC. State security forces, rebel and militia groups, and civilians perpetrated widespread sexual violence. The United Nations registered 3,635 victims of sexual violence from January 2010 to December 2013. These crimes were often committed during attacks on villages and sometimes as a tactic of war to punish civilians for perceived allegiances with rival parties or groups. The crimes occurred largely in the conflict zones in North Kivu province but also in provinces throughout the country. Burundi.
Centre Seruka, a clinic for rape victims, reported an average of 135 new rape cases per month from January through September. Of that number, 68 percent were minors, and 17 percent were children under age five. Centre Seruka also reported that approximately 30 percent of its clients filed complaints and that 70 percent knew their aggressors. Rwanda. Domestic violence against women was common. Although many incidents remained within the extended family and were not reported or prosecuted, government officials encouraged the reporting of domestic violence, and the Rwanda National Police stated that reporting of such cases increased. Uganda. Rape remained a serious problem throughout the country, and the government did not consistently enforce the law. Although the government arrested, prosecuted, and convicted persons for rape, the crime was seriously underreported, and authorities did not investigate most cases. Police lacked the criminal forensic capacity to collect evidence, which hampered prosecution and conviction. The 2013 police crime report registered 1,042 rape cases throughout the country, of which 365 were tried. Of these, 11 convictions were secured, with sentences ranging from 3 years to life imprisonment; 11 cases were dismissed; and 343 cases were still pending in court at year's end. In addition, some UN entities reported case file information on sexual violence in the DRC and Burundi, as described below: DRC. A March 2015 report of the Secretary-General on conflict-related sexual violence showed that, from January 2014 to September 2014, the United Nations Population Fund (UNFPA) recorded 11,769 cases of sexual violence in the provinces of North Kivu, South Kivu, Orientale, Katanga, and Maniema. Of these cases, 39 percent were considered to be directly related to the dynamics of conflict, having been perpetrated by arms bearers.
The report also notes that, as in 2013, North Kivu and Orientale remain the provinces most affected by conflict-related sexual violence, with 42 percent of all incidents taking place in Orientale. DRC. MONUSCO reported in September 2014 that armed groups and national security forces continued to commit crimes of sexual violence. Between June 30, 2014, and September 25, 2014, it recorded 37 cases of sexual violence committed by armed groups and national security forces, 15 of which were committed by the armed forces of the Democratic Republic of Congo and 10 by Mayi-Mayi combatants from different groups. The report indicates that 18 of the 37 cases of sexual violence occurred in North Kivu, South Kivu, and Orientale Provinces. DRC. MONUSCO also recorded 61 cases of sexual violence in conflict during the reporting period of September 25, 2014 to December 30, 2014. MONUSCO reports that at least 30 women and 31 children were victims of sexual violence, allegedly committed by armed groups and national security forces in eastern DRC. Burundi. UNFPA reports that it supported 3,203 survivors of sexual violence at care centers located in the areas of Seruka, Nturengaho, and Humara. In addition, 24 UNFPA-supported hospitals across six provinces in Burundi are also collecting sexual violence data. UNFPA reports that these hospitals provided medical support to 180 survivors in 2014. As we previously reported, several factors make case file data unsuitable for estimating rates of sexual violence. First, because case file data are not aggregated across various sources, and because the extent to which various reports overlap is unclear, it is difficult to obtain complete data, or a sense of magnitude from case files. Second, in case file data as well as in surveys, time frames, locales, and definitions of sexual violence may be inconsistent across data collection operations. 
Third, case file data are not based on a random sample, and the results of analyzing these data are not generalizable. We provided a draft of this report to SEC, State, USAID, and Commerce for their review. Agencies provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to appropriate congressional committees. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To examine company disclosures filed with the Securities and Exchange Commission (SEC) for the first time in 2014 in response to the SEC conflict minerals disclosure rule, we downloaded the Specialized Disclosure reports (Form SD) and Conflict Minerals Reports from SEC's publicly available Electronic Data Gathering, Analysis, and Retrieval (EDGAR) database on July 31, 2014. We downloaded 1,324 filings identified as Form SDs in EDGAR. To review the completeness and accuracy of the EDGAR database, we reviewed relevant documentation, interviewed knowledgeable SEC and GAO officials, and reviewed prior GAO reports on internal controls related to SEC's financial systems. We determined that the EDGAR database was sufficiently reliable for identifying the universe of Form SD filings on July 31, 2014. We reviewed the conflict minerals section of the Dodd-Frank Wall Street Reform and Consumer Protection Act and the requirements of the SEC conflict minerals disclosure rule to develop a questionnaire that guided our data collection and analysis of Form SDs and Conflict Minerals Reports. Our questionnaire was not a compliance review of the Form SDs and Conflict Minerals Reports.
The questions were written in both yes/no and multiple-choice formats. An analyst reviewed the Form SDs and Conflict Minerals Reports and recorded responses to the questionnaire for all of the companies in the sample. A second analyst also reviewed the Form SDs and Conflict Minerals Reports and verified the questionnaire responses recorded by the first analyst. The analysts met to discuss and resolve any discrepancies. We randomly sampled 147 filings from a population of 1,324 to create estimates generalizable to the population of all companies that filed. All estimates based on our sample have a margin of error of plus or minus 10 percentage points or less at the 95 percent confidence level. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We also attended an industry conference on conflict minerals and spoke with company representatives to obtain additional perspectives. To examine Department of State (State) and U.S. Agency for International Development (USAID) actions related to the U.S. conflict minerals strategy in the DRC region, we reviewed the U.S. Strategy to Address the Linkages between Human Rights Abuses, Armed Groups, Mining of Conflict Minerals and Commercial Products, developed by State and USAID in 2011, and State's and USAID's websites. We interviewed State and USAID officials in Washington for an update on the U.S. implementation of the strategy in the Democratic Republic of Congo (DRC) and continuing challenges. We also reviewed the Dodd-Frank Wall Street Reform and Consumer Protection Act.
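The sampling precision reported above can be checked with the standard formula for a proportion's margin of error under simple random sampling, with a finite population correction for drawing 147 of 1,324 filings. This back-of-the-envelope sketch assumes the worst-case proportion of 0.5 and is not GAO's actual computation, which would vary by estimate.

```python
from math import sqrt

# Worst-case margin of error for a simple random sample of n from N,
# with a finite population correction (FPC). Illustrative only.

def margin_of_error(n, N, p=0.5, z=1.96):
    se = sqrt(p * (1 - p) / n)        # simple-random-sample standard error
    fpc = sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

moe = margin_of_error(147, 1324)
print(round(moe * 100, 1))  # 7.6, i.e. about +/- 7.6 percentage points
```

Even in the worst case, the result stays within the reported bound of plus or minus 10 percentage points at the 95 percent confidence level.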
In November 2014, we traveled to the DRC region and met with several State and USAID officials implementing actions related to the strategy, as well as host country officials. We also met with representatives of nongovernmental organizations (NGO), contractors, international organizations, and the private sector to gather information and assess the impact of the Dodd-Frank Act and the implementation of the conflict minerals strategy. In addition, we visited three conflict minerals sites—a tantalum mine in the DRC, a tin mine in Rwanda, and a gold mine in Burundi—to observe operations and artisanal mining activities and to gain an understanding of mine certification processes and the challenges that mines must overcome to export minerals. We also reviewed documents from officials working in the region that detailed the various programs State and USAID are implementing. In response to a mandate in the Dodd-Frank Wall Street Reform and Consumer Protection Act that GAO submit an annual report that assesses the rate of sexual violence in war-torn areas of the DRC and adjoining countries, we identified and assessed any additional published information available on sexual violence in war-torn eastern DRC, as well as three adjoining countries that border eastern DRC—Rwanda, Uganda, and Burundi—since our June 2014 report on sexual violence in these areas. During the course of our review, we interviewed officials from USAID to discuss the collection of sexual violence-related data—including population-based surveys and case file data—in the DRC and adjoining countries. We contacted researchers and representatives from groups we interviewed for our prior review on sexual violence rates in eastern DRC and adjoining countries.
We also traveled to New York City to meet with officials from the United Nations (UN) Population Fund, the United Nations High Commissioner for Refugees, the United Nations Special Representative of the Secretary-General on Sexual Violence in Conflict, and the United Nations Children's Fund. In addition, we reviewed relevant documentation, such as reports and technical briefs, from various UN entities. To determine whether sexual violence data from the last two published Demographic and Health Survey (DHS) reports for the DRC were comparable, we corresponded with and interviewed officials at ICF International, a firm providing technical assistance for survey design and implementation. Because data from the published 2008 and 2014 DHS for the DRC were not comparable, we reported on data that ICF International generated at our request for the two time periods, which it determined to be comparable. We also conducted Internet literature searches to identify new academic articles containing any additional information on sexual violence since our 2014 report. We conducted this performance audit from September 2014 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In response to the appeals court's April 2014 decision, the Securities and Exchange Commission (SEC) staff, on April 29, 2014, issued a statement that it expects companies to file any reports required under Rule 13p-1, subject to any further action that may be taken by either the commission or a court.
The SEC staff’s statement contains guidance to companies, which provides, among other things, that no company is required to describe its products as having “not been found to be ‘DRC [Democratic Republic of the Congo] conflict free’,” or as “DRC conflict undeterminable” in their reports. The guidance also states that, although the rule does not require any company to describe its products as “DRC conflict free,” a company may voluntarily elect to describe any of its applicable products that way in its report if it had obtained an independent private sector audit as required by the rule. In addition, the guidance states that, pending further action, an independent private sector audit will not be required unless a company voluntarily elects to describe a product as DRC conflict free in its Conflict Minerals Report. On May 2, 2014, SEC issued an order staying the effective date for compliance with the portions of Rule 13p-1 and Form SD subject to the appeals court’s First Amendment holding pending the completion of judicial review. On May 5, 2014, the plaintiffs filed a motion with the appeals court asking the court to stay the entire rule pending the completion of judicial review, which the commission opposed, and on May 14, 2014, the appeals court denied the motion. In addition to the individual named above, Godwin Agbara (Assistant Director), Marc Castellano, Tina Cheng, Debbie Chung, Justin Fisher, Julia Jebo Grant, Emily Gupta, Stephanie Heiken, Jill Lacey, Grace Lui, and Andrea Riba Miller made key contributions to this report. | Armed groups in eastern DRC continue to commit severe human rights abuses and profit from the exploitation of minerals, according to the United Nations. Congress included a provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act that, among other things, directed SEC to promulgate disclosure and reporting regulations regarding the use of conflict minerals from the DRC and adjoining countries. 
The act also directed State and USAID to develop a strategy to address the linkages among human rights abuses, armed groups, the mining of conflict minerals, and commercial products. This report examines (1) company disclosures filed with SEC for the first time in 2014 in response to the SEC conflict minerals disclosure rule; and (2) State and USAID actions related to the U.S. conflict minerals strategy in the DRC region. This report also includes information on sexual violence in the DRC and three adjoining countries. GAO reviewed and analyzed relevant documents and data and interviewed officials from relevant U.S. agencies and nongovernmental, industry, and international organizations; and analyzed a random sample of company disclosures from the SEC database that was sufficiently large to produce estimates for all companies that filed. GAO also traveled to the DRC, Rwanda, and Burundi to conduct field work. According to a generalizable sample GAO reviewed, company disclosures filed with the Securities and Exchange Commission (SEC) for the first time in 2014 in response to the SEC conflict minerals disclosure rule indicated that most companies were unable to determine the source of their conflict minerals. Companies that filed disclosures used one or more of the four “conflict minerals”—tantalum, tin, tungsten, and gold—determined by the Secretary of State to be financing conflict in the Democratic Republic of the Congo (DRC) or adjoining countries. Most companies were based in the United States (87 percent). Almost all of the companies (99 percent) reported performing country-of-origin inquiries for conflict minerals used. Companies GAO spoke to cited difficulty obtaining necessary information from suppliers because of delays and other challenges in communication. Most of the companies (94 percent) reported exercising due diligence on the source and chain of custody of conflict minerals used. 
However, most (67 percent) were unable to determine whether those minerals came from the DRC or adjoining countries (Covered Countries), and none could determine whether the minerals financed or benefited armed groups in those countries. Companies that disclosed that conflict minerals in their products came from Covered Countries (4 percent) indicated that they are or will be taking action to address the risks associated with the use and source of conflict minerals in their supply chains. For example, one company indicated that it would notify suppliers that it intends to cease doing business with suppliers who continue to source conflict minerals from smelters that are not certified as conflict-free. (Covered Countries: Angola, Burundi, Central African Republic, the Democratic Republic of the Congo, the Republic of the Congo, Rwanda, South Sudan, Tanzania, Uganda, and Zambia.) Department of State (State) and U.S. Agency for International Development (USAID) officials reported taking actions to implement the U.S. conflict minerals strategy, but a difficult operating environment complicates this implementation. The agencies reported supporting a range of initiatives, including validation of conflict-free mine sites and strengthening traceability mechanisms that minimize the risk that minerals exploited by illegal armed groups will enter the supply chain. As a result, according to the agencies, 140 mine sites have been validated, and competition within conflict-free traceability systems has benefited artisanal miners and exporters. Implementation of the U.S. conflict minerals strategy faces multiple obstacles outside the control of the U.S. government. For example, eastern DRC is plagued by insecurity because of the presence of illegal armed groups and some corrupt members of the national military, weak governance, and poor infrastructure. GAO is not making any recommendations. |
The ability to find, organize, use, share, appropriately dispose of, and save records—the essence of records management—is vital for the effective functioning of the federal government. In the wake of the transition from paper-based to electronic processes, records are increasingly electronic, and the volumes of electronic records produced by federal agencies are vast and rapidly growing, providing challenges to NARA as the nation’s record keeper and archivist. Furthermore, the Presidential Records Act gives the Archivist of the United States responsibility for the custody, control, and preservation of presidential records upon the conclusion of a President’s term of office. The act states that the Archivist has an affirmative duty to make such records available to the public as rapidly and completely as possible consistent with the provisions of the act. In response to these widely recognized challenges, NARA began a research and development program to develop a modern archive for electronic records. The final operational ERA system is to consist of the following six key functions: Ingest enables the transfer of electronic records from federal agencies. Archival storage enables stored records to be managed in a way that guarantees their integrity and availability. Records management supports scheduling, appraisal, description, and requests to transfer custody of all types of records, as well as ingesting and managing electronic records, including the capture of selected records data (such as origination date, format, and disposition). Preservation enables secure and reliable storage of files in formats in which they were received, as well as creating backup copies for off-site storage. Local services and control regulates how the ERA components communicate with each other, manages internal security, and enables telecommunications and system network management. 
Dissemination enables users to search descriptions and business data about all types of records and to search the content of electronic records and retrieve them. In 2001, NARA began developing policies and plans to guide the overall acquisition of an electronic records system. Upon completion of the design phase, the agency awarded a cost-plus-award-fee contract worth $317 million to Lockheed Martin Corporation in September 2005 to develop the ERA system. The development contract is composed of six option periods, with the first option lasting 2 years and all subsequent options each lasting 1 year (to cover any uncompleted planned work and/or additional new work). The ERA contract is currently in the fifth option period. Within this contract structure, NARA is to deliver ERA system capabilities in five separate increments. Each period of performance includes specific capabilities associated with one or more increments to be delivered. Increments will overlap to allow the analysis and design activities for the next increment to begin while the testing of the final release of the current increment is under way. Figure 1 illustrates the ERA program plan schedule prior to the recent change in program direction in July 2010 (as discussed later in this report). Table 1 summarizes the planned system capabilities to be delivered by increment. Since awarding the contract, NARA has made several modifications to the program schedule, including, among other things, extending the first two option periods by 2 months and 7 months, respectively. NARA also reduced the period of performance for option period four by 6 months. Additionally, NARA stated that Increment 3 was completed in October 2010 and that it expects to complete Increment 4 by early 2011, both later than the milestones established in program planning documents. Table 2 shows a comparison of the original and revised ERA schedules.
Since 2002, we have reported and testified on the technical and programmatic challenges that NARA has experienced in acquiring the ERA system, as well as on additional key risks facing the program. Our most recent report, in June 2010, stated that the estimated cost for ERA through March 2012 had increased to more than $567 million. For example, NARA reportedly spent about $80 million on the base increment, compared with its planned cost of about $60 million. According to agency and contractor officials, factors contributing to the increase include the unanticipated complexity of the system being developed. In order to enhance NARA's ability to complete the ERA development within reasonable funding and time constraints, we recommended that the agency ensure adequate executive-level oversight by maintaining documentation of investment review results, including changes to the program's cost and schedule baseline and any other corrective actions taken as a result of changes in ERA cost, schedule, and performance. We further reported that, although NARA initially planned for the system to be capable of ingesting federal and presidential records in September 2007, the two system increments to support those records did not achieve initial operating capability until June 2008 and December 2008, respectively. In addition, a number of functions originally planned for the base increment were deferred to later increments, including the ability to delete records and to ingest redacted records. More notably, we reported that NARA had not detailed what system capabilities would be delivered in the final two increments; it also had not effectively defined or managed ERA's requirements to ensure that the functionality delivered satisfies the objectives of the system. Although NARA established an initial set of high-level requirements, it lacked firm plans to implement about 43 percent of them.
As a result, we recommended that NARA ensure that ERA’s requirements are being managed using a disciplined process. As a result of our most recent report, OMB is working with NARA to remedy the problems we highlighted related to the cost, schedule, and performance of the ERA system. Specifically, in July 2010, OMB directed NARA to halt all development activities by the end of fiscal year 2011 and develop an action plan to address our finding on the lack of defined system functionality for the final two increments of the ERA program and the need for improved strategic planning. In response, NARA has work under way to revise its program implementation plans and enter the operations and maintenance phase beginning in fiscal year 2012. For development work to be accomplished prior to this date, NARA is to prioritize existing requirements and develop realistic cost and schedule estimates to determine what can be accomplished by the deadline. In addition, NARA also plans to prioritize remaining outstanding requirements (that are to be accomplished under the ERA contract); identify other requirements not yet met by the system; and determine ERA operations and maintenance requirements. Despite changes in program direction, the Archivist noted that the essential goals of ERA would remain unchanged. He stated that, beginning in fiscal year 2012, ERA would fully support the transfer of electronic records to an archival repository, as well as access to and preservation of electronic archival records. To do this, the Archivist stated that the agency would work on those elements determined to be the highest priorities in fiscal year 2011. According to NARA, this may lead to a second phase of the ERA development in the future. Given the size and significance of the government’s investment in IT, it is important that projects be managed effectively to ensure that public resources are wisely invested. 
Effectively managing projects entails, among other things, pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion so that managers have an accurate view of the program’s development status. Without meaningful and coherent cost and schedule information, program managers can have a distorted view of a program’s status and risks. To address this issue, in the 1960s, the Department of Defense developed the EVM technique, which goes beyond simply comparing budgeted costs with actual costs. This technique measures the value of work accomplished in a given period and compares it with the planned value of work scheduled for that period and with the actual cost of work accomplished. Differences in these values are measured in both cost and schedule variances. Cost variances compare the value of the completed work (i.e., the earned value) with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work, and the work actually cost $6.7 million, there would be a negative $1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the completed work with the value of the work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month, but was budgeted to complete $10 million worth of work, there would be a negative $5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. Without knowing the planned cost of completed work and work in progress (i.e., the earned value), it is difficult to determine a program’s true status. 
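The variance arithmetic described above can be sketched in a few lines. The dollar figures below are the report's own illustrative examples, not actual ERA program data:

```python
# Earned value management (EVM) variance calculations, as described in
# the report. Figures are the report's illustrative examples (in $M).

def cost_variance(earned_value, actual_cost):
    """CV = EV - AC; negative means the work cost more than planned."""
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    """SV = EV - PV; negative means less work was done than scheduled."""
    return earned_value - planned_value

# $5 million of work completed at an actual cost of $6.7 million:
cv = cost_variance(5.0, 6.7)       # negative: roughly $1.7M cost overrun

# $5 million of work completed against $10 million scheduled:
sv = schedule_variance(5.0, 10.0)  # negative: $5M of work behind schedule

print(f"Cost variance: ${cv:.1f}M, Schedule variance: ${sv:.1f}M")
```

Positive values of either variance indicate work costing less than planned or completed ahead of schedule; negative values indicate overruns or slippage, consistent with the examples above.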
Earned value allows for this key information, which provides an objective view of program status and is necessary for understanding the health of a program. As a result, EVM can alert program managers to potential problems sooner than using expenditures alone, thereby reducing the chance and magnitude of cost overruns and schedule slippages. Moreover, EVM directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments—areas that are often found to be inadequate on the basis of our assessments of major IT investments. In 2005, OMB began requiring agencies, such as NARA, to fully implement EVM on major IT investments. Specifically, this guidance directs agencies to (1) develop comprehensive policies to ensure that their major IT investments are using EVM to plan and manage development; (2) include a provision and clause in major acquisition contracts or agency in-house project charters directing the use of an EVM system that is compliant with the American National Standards Institute (ANSI) standard; (3) provide documentation demonstrating that the contractor's or agency's in-house EVM system complies with the national standard; (4) conduct periodic surveillance reviews; and (5) conduct integrated baseline reviews on individual programs to finalize their cost, schedule, and performance goals. Building on OMB's requirements, in March 2009, we issued a guide on best practices for estimating and managing program costs. This guide highlights the policies and practices adopted by leading organizations to implement an effective EVM program. Specifically, in the guide, we identify 11 key practices that are implemented on acquisition programs of leading organizations.
These practices include the need for organizational policies that establish clear criteria for which programs are required to use EVM, specify compliance with the ANSI standard, require a standard product-oriented structure for defining work products, require integrated baseline reviews, provide for specialized training, establish criteria and conditions for rebaselining programs, and require an ongoing surveillance function. In addition, we identify key practices that individual programs can use to ensure that they establish a sound EVM system, that the earned value data are reliable, and that the data are used to support decision making. In October 2002, NARA established the ERA Program Management Office, which has primary responsibility for managing the ERA acquisition. The ERA program falls within the oversight of the NARA IT Executive Committee and the Chief Information Officer (CIO). Specifically, the executive committee is comprised of senior NARA decision makers who manage NARA’s IT capital planning and investment control process and the NARA IT investment portfolio, which includes the ERA investment. The NARA CIO oversees management of the ERA program and is responsible for EVM implementation across the agency’s IT acquisitions. To support project managers in the execution of EVM, among other things, the CIO established the Capital Planning and Administration Branch to establish policy and guidance, analyze monthly project status reports, identify earned value trends, provide corrective action recommendations, and disseminate project information as appropriate. Furthermore, the ERA Program Director, who reports to the CIO, is responsible for the operational scope of work, performance, budget, and schedule of the program. Additionally, the NARA senior staff, which includes the Archivist and the Deputy Archivist, provide oversight and risk management as required. Figure 2 illustrates the organizational structure for the ERA program. 
NARA has, to varying degrees, established certain best practices needed to manage the ERA acquisition through EVM. Our work on best practices in EVM identified 11 key practices that are implemented on acquisition programs of leading organizations. These practices can be organized into three management areas: establishing a comprehensive EVM system, ensuring reliable earned value data, and using those data to make decisions. The ERA program fully met 2 of the 11 key practices for implementing EVM, partially met 7 practices, and did not meet 2 others. These weaknesses exist in part because NARA lacks a comprehensive EVM policy, as well as training and specialized resources. NARA also frequently replans the ERA program. Without effectively implementing EVM, NARA has not been positioned to identify potential cost and schedule problems early and thus has not been able to take timely actions to correct problems and avoid program schedule delays and cost increases. Table 3 lists the 11 key EVM practices by management area and summarizes the status of NARA's implementation of each practice:
Define the scope of effort using a work breakdown structure.
Identify who in the organization will perform the work.
Schedule the work.
Estimate the labor and material required to perform the work, and authorize the budgets, including management reserve.
Determine objective measure of earned value.
Develop the performance measurement baseline.
Execute the work plan, and record all costs.
Analyze EVM performance data, and record variances from the performance measurement baseline plan.
Forecast estimates at completion.
Take management action to mitigate risks.
Update the performance measurement baseline as changes occur.
●: The agency addressed all aspects of this EVM practice. ◐: The agency addressed some, but not all, aspects of this EVM practice. ◌: The agency did not address any aspects of this EVM practice. Sources: GAO analysis of NARA and contractor data. The ERA program did not fully establish a comprehensive EVM system.
Of the six key practices in this management area, the program fully implemented one and partially met five. Specifically, the agency's organization charts and contract work breakdown structure fully identified the personnel responsible for performing the defined work. However, critical weaknesses remain in the following other key practices: Define the scope of effort using a work breakdown structure. The ERA program maintains a work breakdown structure that is consistent with work planned in the project schedule; however, this structure neither reflects the entire scope of the program, nor is it defined in such a way as to provide a meaningful understanding of the products or deliverables being developed. Specifically, the work breakdown structure did not include work planned for Increment 4 and beyond. Furthermore, the structure was defined by program increment rather than by major program/system component (e.g., ERA base, EOP), and the work planned in these increments was not broken down in a standardized fashion, thus making it difficult to track common work elements across increments. Without a work breakdown structure that is comprehensive, product-oriented, and standardized, ERA cannot efficiently track and measure progress made on contractor deliverables. Schedule the work. The ERA project schedule had activities that were adequately sequenced; however, it also had a number of weaknesses that undermined the quality of the established performance baseline. These weaknesses included an invalid critical path (the sequence of activities that, if delayed, impacts the planned completion date of the project); a lack of resources assigned to all activities; and the excessive or unjustified use of constraints, which impairs the program's ability to forecast the impact of ongoing delays on future planned work activities. To the contractor's credit, it is aware of many of the deviations from scheduling best practices and has controls in place to monitor them.
However, these weaknesses remain a concern because the schedule serves as the performance baseline against which earned value is measured, and any weaknesses impair the use of the schedule as a management tool. Estimate the labor and material required and authorize the budgets. The establishment of a sound baseline plan, which would include estimates of the labor and materials required to perform the work, was not thoroughly completed through an integrated baseline review. Although NARA performed integrated baseline reviews prior to exercising each option period, as well as after a major rebaseline, the most recent review, held in December 2009, showed that none of the corrective actions needed to mitigate program risks—including reducing a large amount of work not being measured objectively—had been taken. Without a fully completed integrated baseline review, NARA has not taken the proper steps to determine whether the baseline plan contains an acceptable level of risk and whether significant risks have been mitigated. While the contractor has established management reserves to cover realized risks in the baseline plan and reports reserve levels to NARA on a monthly basis, the lack of a sufficient review makes it difficult to determine whether the amount of reserve set aside is justified. Determine objective measure of earned value. Objective measures were not always used for determining a majority of work planned. For example, as of February 2010, approximately 17 percent of the program's baseline budget was classified as nonobjective (also called level-of-effort). Our research shows that, if more than 15 percent of the baseline is measured using level-of-effort, then that amount should be scrutinized because it does not allow schedule performance to be measured. NARA identified the use of nonobjective metrics as a concern in its most recent integrated baseline review; however, it did not take action to address this concern.
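The 15 percent level-of-effort screening rule noted above can be sketched as a simple check. The baseline elements and dollar amounts below are hypothetical, not the actual ERA baseline:

```python
# Hypothetical screening of a performance measurement baseline for
# level-of-effort (LOE) content. Budget figures are illustrative only.

LOE_THRESHOLD = 0.15  # scrutinize baselines with more than 15% LOE

def loe_share(elements):
    """elements: list of (budget_dollars, is_level_of_effort) tuples.

    Returns the fraction of total budget measured as level-of-effort.
    """
    total = sum(budget for budget, _ in elements)
    loe = sum(budget for budget, is_loe in elements if is_loe)
    return loe / total if total else 0.0

baseline = [
    (40_000_000, False),  # development work, discretely measured
    (25_000_000, False),  # test work, discretely measured
    (13_000_000, True),   # program management, measured as LOE
]

share = loe_share(baseline)
if share > LOE_THRESHOLD:
    print(f"LOE is {share:.0%} of the baseline: schedule performance "
          "on that work cannot be measured; scrutinize.")
```

With these illustrative figures, LOE is about 17 percent of the baseline, so the check fires, mirroring the condition the report describes for ERA as of February 2010.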
Until NARA ensures that the metrics used to measure progress made on ERA's planned work elements are appropriate, it cannot be assured that measurements of accomplishments are sufficiently credible. The ERA program did not adequately ensure that ERA's earned value data were reliable. Of the three key practices in this management area, the program fully implemented one, partially met one, and did not meet the remaining one. Specifically, the program has processes in place to identify and record cost and schedule variances and review earned value data using monthly contractor EVM performance reports. In addition, the ERA program office reviews contractor EVM data on a regular basis to track contractor performance, including incorporating EVM data into monthly program management reviews. However, the program has not adequately recorded variances from the performance baseline or been able to forecast estimates at completion using EVM: Analyze EVM performance data and record variances. The contractor's monthly reports include justifications for cost and schedule variances; however, these justifications are not sufficiently detailed for NARA program management to fully understand the reasons for the variances and the contractor's plan for resolving them. In particular, the justifications of variances for the base system augmentation work, a major part of Increment 3, did not discuss the impact of the problem and comprehensive corrective actions to be taken. As a result, the program office cannot track and mitigate related risks. Furthermore, the monthly reports also showed a number of anomalies that raise questions regarding the reliability of the earned value data. Examples are as follows: Work was removed from the baseline without also removing its corresponding budget. This is an inappropriate EVM practice, and it results in the appearance of favorable cost and schedule performance trends.
Work was shown as fully completed in one month's report but, in subsequent reports, the same work was reported as less than 100 percent complete. For example, Increment 3 development work was reported as 100 percent complete in July 2009, but 2 months later, in September 2009, it was reported as 10 percent complete. In another example, program support activities for Increment 3 were reported as 100 percent complete in August 2009, but in the subsequent month as 49 percent complete. Dollars were reported as spent in a given month, but no work was reported as scheduled or completed. NARA program and contractor officials provided justifications for these anomalies, such as extension of the period of performance. However, these justifications were not always valid. In particular, program officials cited lagging invoices as a major contributor to these anomalies. As such, the reconciliation of estimated costs to actual costs was not reflected in the earned value reports until, in some cases, up to 15 months after the fact. Lagging invoices can create false positive or negative variances, and, as such, the timely reconciliation of these costs is necessary for obtaining reliable data. Until NARA resolves these anomalies and improves its ability to assess contractor data, it risks using inaccurate data to manage the program, potentially resulting in additional cost overruns, schedule delays, and performance shortfalls. Forecast estimates at completion. The ERA program is unable to forecast costs at program completion based on the earned value data it receives because these data reflect contractor performance trends in one increment, not the full development program. The ERA program did not effectively use earned value data to inform programmatic decisions. Of the two key practices, the program partially met one and did not meet the other. Specifically, the program office included earned value performance trend data in monthly performance management review briefings.
In addition, the cost and schedule drivers causing poor trends (as identified in the monthly contractor reports) were generally consistent with the risks and issues contained in the program risk registers. Nevertheless, critical weaknesses remain in this management area. Examples of those weaknesses are as follows: Take management action to mitigate risks. NARA management did not take all necessary actions to mitigate risks. First, according to NARA officials, the CIO, Program Director, and contractor executives meet weekly and discuss cost and schedule issues when appropriate. However, NARA does not document the results of these briefings, and thus there is little evidence that this body has reviewed and approved cost and schedule issues. There is also little evidence that it identified corrective actions and tracked them to closure. Second, the briefings to senior executives are inconsistent. For example, in January 2010, the program team reported to the Program Director that unless Increment 3 work was replanned into Increment 4, it anticipated a cost overrun of $2.0 million. However, in other briefings to senior NARA management and OMB, it was reported that cost performance remained steady. Moreover, while ERA earned value data trends are included in briefing materials provided to NARA senior executives, these cost and schedule performance trends are not discussed in these management meetings. Until NARA uses earned value data to make program decisions, it will be unable to effectively identify areas of concern and make recommendations to reverse negative trends. The weaknesses we identified in the three management areas exist, in part, because of a number of key factors: NARA-wide EVM policy: As we have previously reported, a comprehensive EVM policy is an important aspect of instituting a sound EVM program. NARA's policy, established in 2005, outlines clear criteria for which IT programs are to use EVM.
However, it does not require EVM training for senior executives with oversight responsibility, program managers, or relevant program staff responsible for contract management. The policy also does not require annual EVM system surveillance to ensure program compliance with the industry standard. The ERA program office provided documentation that a surveillance review was performed in April 2009; however, a number of outstanding corrective action items resulting from this review were not closed. Moreover, the program could not provide documentation to show that regular surveillance reviews were performed in past years. Without such policies, NARA is not positioned to ensure that ERA's program staff have the appropriate skills to validate and interpret EVM data and that its executives fully understand the data they are given in order to ask the right questions and make informed decisions. Specialized program resources: The program office lacks the appropriate levels of skilled EVM personnel. In a past governmentwide review, we reported on successful EVM implementation on major IT projects at the Department of Homeland Security and the Federal Aviation Administration; these projects, all similar in size to ERA, had as many as eight EVM specialists on staff to complete such activities. At this time, the ERA program has two resident specialists on staff to oversee and monitor contractor performance for all components of the program; however, their responsibilities also extend beyond EVM to other areas of program control. Given the extent of earned value data anomalies we found and the frequency with which the performance baseline is replanned, it is essential that the program office have the appropriate level of personnel in place to perform EVM analysis and oversight activities. Without an appropriate level of staffing, the program office will likely continue to experience issues in obtaining reliable earned value data.
Acquisition strategy approach: Our body of work has shown that frequent rebaselines on a systems acquisition program allow real performance to be hidden, leading to distorted EVM data reporting. The weaknesses associated with ERA's performance baseline are largely due to frequent rebaselining. Program and contractor officials attributed this to ERA's current acquisition strategy approach, which calls for NARA to renegotiate the contract (or replan the baseline) with every option period. As such, NARA is unable to produce a stable and comprehensive baseline that reflects all development work planned for the system. Instead, a baseline is created for each option period—so work that was not completed in one option period gets replanned or removed in the subsequent one, thus resetting all past contractor cost and schedule performance. We agree that the program's current implementation of the acquisition strategy is inherently incompatible with the use of EVM. Moreover, this environment sets the contractor up to be favorably positioned to receive a high award fee for each period of performance because the constant rebaselining makes it easier for the contractor to excel at achieving the objectives measured by the award fee evaluation process. In addition, it also makes the program highly inefficient because it must focus significant effort on program replanning instead of on the ERA system development work. Until NARA changes its acquisition strategy and establishes a comprehensive baseline for the program, its EVM practices will continue to be hampered by weaknesses, and its ability to obtain the insight needed to effectively manage the contractor will be impeded. ERA's earned value performance trends do not accurately portray program status, and our analysis of historical program trends indicates that future cost and schedule increases will likely be significant.
Due to the limited implementation of EVM practices and the presence of data anomalies (both previously discussed), ERA's earned value data reflect only a small portion of the work actually being performed. As such, we relied on historical ERA program performance data to construct a projected range of costs at completion (see app. I for details). We previously reported, in June 2010, that NARA had completed about 60 percent of ERA's system requirements. If NARA pursues its original set of requirements, and the contractor maintains its current rate of productivity, it is unlikely that more than 65 percent of them will be completed by the revised contract end date of September 2011. We further project that the total cost overrun incurred at contract end could roughly be between $285 million and $33 million.

Plans for the completion of the remaining development work once the contract ends are being reevaluated by NARA at the direction of OMB (as previously discussed). According to the Archivist, the essential goals of ERA will remain unchanged and may lead to a second phase of development in the future. If NARA were to complete the full ERA system as originally designed, we project the development phase to be complete by March 2017, with a total cost overrun between $195 million and $433 million. We further project that the total cost overrun incurred at the end of the program life cycle will likely be between $205 million and $405 million. Table 4 shows our cost and schedule estimates as compared with NARA's estimates for the program. Our projection assumes that past trends are indicative of future performance and does not take into account the degree of difficulty of the work being performed. This is critical because the work that remains includes system integration and testing activities that are complex and often the most challenging to complete, based on our review of similar IT programs.
Furthermore, in making our projection of total life cycle cost, we applied the same estimated operations and maintenance cost used by NARA. We did not validate the credibility of the operations and maintenance cost estimate. Based on these assumptions, we believe our rough estimates are conservative and that the final costs at completion could be even higher.

In contrast, contractor-provided data from January 2009 to June 2010 show that the contractor has exceeded its cost target by $1.6 million and has not completed about $2 million worth of planned work. The contractor reported that the negative cost and schedule variances are largely due to unanticipated development work required to integrate specific commercial-off-the-shelf products into the base system and unplanned software code growth in key areas, including ingest orchestration and archive search capability. Based on current performance trends, the contractor estimates it will incur a $2.7 million overrun at the end of Increment 3.

Overall, NARA has fallen short in its implementation of EVM to oversee and manage the ERA system acquisition. Most of the earned value process controls needed for sound implementation have yet to be fully established. Specifically:

the baseline for measuring contractor performance lacks sufficient accuracy and completeness to provide a meaningful basis for understanding performance;

the performance data measured against a flawed baseline are not reliable and are further impaired by the extent of anomalies found in the contractor performance reports;

taken together, this hampers NARA's ability to produce reliable estimates of cost at completion; and

the ability to take timely action to correct unfavorable results and trends is constrained.

Moreover, because senior executives do not discuss and use earned value trends to oversee this investment, the production of reliable EVM performance reports will continue to be a low priority to the program office and ultimately the contractor.
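The contractor-reported figures above follow the standard earned value relationships (cost variance is earned value minus actual cost; schedule variance is earned value minus planned value). A minimal sketch in Python, where the planned value, earned value, actual cost, and budget-at-completion inputs are illustrative assumptions chosen only so the resulting variances mirror the reported $1.6 million and $2 million amounts:

```python
def evm_metrics(pv, ev, ac, bac):
    """Compute standard earned value management (EVM) metrics.

    pv:  planned value (budgeted cost of work scheduled)
    ev:  earned value (budgeted cost of work performed)
    ac:  actual cost of work performed
    bac: budget at completion
    """
    cv = ev - ac        # cost variance (negative = cost target exceeded)
    sv = ev - pv        # schedule variance (negative = behind plan)
    cpi = ev / ac       # cost performance index
    spi = ev / pv       # schedule performance index
    eac = bac / cpi     # one common estimate of cost at completion
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# Illustrative inputs, in millions of dollars (assumed values).
m = evm_metrics(pv=50.0, ev=48.0, ac=49.6, bac=85.0)
# m["CV"] is about -1.6 (cost target exceeded by roughly $1.6 million)
# and m["SV"] is about -2.0 ($2 million of planned work not completed).
```

Practitioners use several estimate-at-completion formulas; BAC divided by the cost performance index, shown here, is only one of them.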
Many of the weaknesses found can be traced back to NARA's inadequate agency-level EVM policies, training, and specialized resources, as well as to its acquisition strategy for the ERA program. Until NARA addresses these underlying issues, it is not positioned to optimize EVM as a management tool on this program. In addition, the program's historical cost and schedule performance suggest that the ERA system, at full operational capability, will likely be deployed at least 67 months behind schedule (in March 2017) and that the total life cycle cost for the program could be at least $1.2 billion (a 21 percent increase).

To improve NARA's ability to effectively implement EVM on its ERA system acquisition program, we recommend that the Archivist of the United States direct the NARA CIO to take the following five actions while the current system development contract is active:

Direct the ERA program to establish a comprehensive baseline (through an integrated master schedule) for all remaining work on the contract.

Ensure that the ERA program obtains reliable EVM performance reports, taking into consideration the data anomalies and weaknesses identified in this report.

Engage senior NARA and contractor leadership/oversight officials to direct attention to reversing current negative performance trends, as shown in the earned value data, and take action to mitigate the potential cost and schedule overruns.

Include as part of its acquisition policy governing EVM requirements for (1) EVM training for senior executives and program staff responsible for ERA investment oversight and (2) ongoing surveillance of the ERA program's EVM system to ensure its compliance with industry standards.

Ensure that the ERA program has the appropriate level of specialized staff in place to perform EVM analysis and oversight activities.

Taking into consideration the new ERA program direction, we further recommend that the Archivist of the United States direct the CIO to take the following three actions:

Using a gap analysis of the work completed through fiscal year 2011 and the original ERA requirements set, determine and clearly define the remaining work that will be pursued in the future ERA system development phase (Phase 2).

Direct the ERA program to develop new cost and schedule estimates for a comprehensive Phase 2 baseline, as well as for the total program life cycle. In combination with the above action, this should provide the program with enough information to disclose to the Congress the exact work that will be accomplished and the cost of that work.

Upon completion of the above action, direct the ERA program to implement the EVM practices that address the detailed weaknesses we identified in this report, taking into consideration the criteria used, including:

establishing a comprehensive Phase 2 baseline (through an integrated master schedule) that has been validated through an integrated baseline review and limits the use of nonobjective metrics;

ensuring that reliable reports of EVM performance are being produced, including records of work completed, forecasts of estimates at completion, and explanations/corrective actions for variances and anomalies; and

engaging senior NARA leadership/oversight officials to ensure that earned value data are being used for decision-making purposes, including holding and documenting executive meetings to ensure that cost and schedule risks/issues have been tracked to closure, negative performance trends are mitigated, and major updates made to the baseline have been validated through an integrated baseline review.
In written comments on a draft of this report, which are reprinted in appendix II, the Archivist of the United States generally concurred with our recommendations and stated that NARA plans to address most of them in a near-term action plan. He further stated that NARA would be unable to address the final three recommendations in this plan since those were specific to a future ERA development effort. In addition, the Archivist shared two perspectives regarding the methodology we used to project ERA program costs.

First, NARA stated that it believes the true cost of ERA's system development to be only $282 million, rather than our reported cost of $567 million, because NARA looks at total costs as two distinct parts: developmental costs versus nondevelopmental costs. Specifically, NARA considers costs such as project management, research and development, concept exploration and planning activities, and operations of the system to be nondevelopmental and thus excludes them from its projections. We disagree that this reflects the true cost of developing the system. True system development cost should include the costs for all program activities performed in the development phase of an acquisition's life cycle, including project management, research and development, and concept exploration and planning activities. The projections we have made in the report reflect this.

Second, NARA stated that our cost projections' assumption that past trends are indicative of future performance does not hold true because of its cost category distinction (developmental versus nondevelopmental) and the impact of OMB's July 2010 memo, which redirected the scope of the entire program and ends the current development work in September 2011. NARA further stated that, as a result, the agency cannot know now when new development efforts may start, or the scope or cost of such development.
As discussed above, NARA's cost distinction does not provide for a comprehensive estimation of system development costs; therefore, we believe our cost projections are sound. We agree with NARA concerning the impact of the change in program direction and believe the appropriate caveats pertaining to ERA's future were placed on our cost projections in the report. Specifically, our report states that the plans for the completion of the remaining 35 percent of development work are being reevaluated and that our projections were based on the completion of the full ERA system as originally intended.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Archivist of the United States, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staff members have questions on matters discussed in this report, please contact David Powner at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to (1) assess whether the National Archives and Records Administration (NARA) is adequately using earned value management (EVM) techniques to manage the Electronic Records Archives (ERA) acquisition and (2) evaluate the earned value data to determine ERA's cost and schedule performance. To accomplish our first objective, we analyzed program documentation, including project work breakdown structures, project schedules, integrated baseline review briefings, risk registers, contractor performance reports, and monthly program management review briefings for the ERA program.
Specifically, we compared program documentation with EVM and scheduling best practices as identified in GAO's cost guide. We characterized the extent to which the program met each of the 11 practices as fully implemented (all sub-elements of the practice were met), partially implemented (some but not all sub-elements were met), or not implemented (none of the sub-elements were met). To have fully implemented a key practice, the program must have implemented all characteristics of the practice. We also interviewed program and contractor officials (and observed program status review meetings) to obtain clarification on how EVM practices are implemented and how the data are used for decision-making purposes.

To accomplish our second objective, we analyzed earned value data contained in contractor EVM performance reports and program budget reports sent to the Office of Management and Budget (OMB), as well as past GAO work on ERA costs and system requirements. To perform this analysis, we compared the cost of work completed with budgeted costs for scheduled work in the contractor performance reports over an 18-month period to show trends in cost and schedule performance. We determined that the earned value cost data were not sufficiently reliable to estimate the likely costs at contract completion. As a result, we developed an alternative methodology, using other historical ERA performance data to make cost projections at contract completion, as well as further cost and schedule projections about the system development phase beyond the contractor's baseline plan. To do so, we used our past work to identify the percentage of ERA requirements completed through September 2010. Our alternative methodology was as follows:

Completed requirements estimate: We divided the total number of completed requirements by the duration (in months) it took to complete them to calculate a productivity factor.
We then multiplied this factor by the remaining duration of the contract to calculate our estimate of the percentage of requirements that will likely be completed at contract end.

Low end of contract completion cost estimate range: We divided the cost overrun incurred to complete those requirements by the duration (in months) it took to complete them to calculate a burn rate of overrun dollars. We then multiplied the burn rate by the remaining duration to determine an estimated total overrun beyond what had already been incurred.

High end of contract completion cost estimate range: We divided the current contract value by the total number of completed requirements to calculate an efficiency factor. We then multiplied this factor by our estimate of completed requirements at contract end (calculated as described in the first bullet) to determine our estimate.

Development phase schedule estimate: We used the productivity factor to estimate the duration needed to complete 100 percent of the requirements (i.e., the development phase).

Development phase cost estimate range: We applied the same general methodology as described above to determine both the low-end and high-end estimates.

To generate our total life cycle cost estimates, we added the NARA-provided cost estimate for operations and maintenance to our estimated development phase costs. To assess the reliability of the budget cost data, we compared them with other available supporting documents (including financial reports to OMB); performed limited testing of the data to identify obvious problems with completeness or accuracy; and interviewed agency and contractor officials about the data. For the purposes of this report, we determined that the budget cost data were sufficiently reliable. We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by the agency and the information it could provide.
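The projection steps above reduce to a few ratios. The following sketch uses hypothetical inputs (share of requirements completed, elapsed and remaining contract months, overrun incurred to date, and contract value); none of these numbers are the program data used in the report:

```python
def project_completion(done_share, months_elapsed, months_remaining,
                       overrun_to_date, contract_value):
    """Sketch of the alternative projection methodology described above.

    done_share: fraction of requirements completed so far (e.g., 0.60).
    Dollar inputs and outputs are in millions.
    """
    # Productivity factor: share of requirements completed per month.
    productivity = done_share / months_elapsed
    # Estimated share of requirements complete at contract end.
    done_at_end = done_share + productivity * months_remaining
    # Low end: extend the burn rate of overrun dollars to contract end.
    burn_rate = overrun_to_date / months_elapsed
    low_total_overrun = overrun_to_date + burn_rate * months_remaining
    # High end: cost per unit of completed requirements (the
    # "efficiency factor") applied to the projected share complete.
    high_cost_at_end = (contract_value / done_share) * done_at_end
    # Development phase schedule: months until 100 percent complete.
    months_to_full = (1.0 - done_share) / productivity
    return done_at_end, low_total_overrun, high_cost_at_end, months_to_full

# Hypothetical example: 60 percent done after 72 months, 12 months left,
# $120 million overrun incurred so far, $300 million contract value.
done, low, high, to_full = project_completion(0.60, 72, 12, 120.0, 300.0)
# done is about 0.70; low about $140 million; high about $350 million;
# to_full about 48 months to reach 100 percent of requirements.
```

The sketch mirrors the report's simplifying assumption that past rates hold, which, as noted above, ignores the difficulty of the integration and testing work that remains.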
We conducted this performance audit from March 2010 to January 2011 at NARA offices in the Washington, D.C., metropolitan area. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, those making contributions to this report included Carol Cha, Assistant Director; Neil Doherty; Ronalynn Espedido; Jason Lee; Lee McCracken; Karen Richey; and Niti Tandon.

Since 2001, the National Archives and Records Administration (NARA) has been working to develop an Electronic Records Archive (ERA) to preserve and provide access to massive volumes and all types of electronic records. However, in acquiring this system, NARA has repeatedly revised the program schedule and increased the estimated costs for completion from $317 million to $567 million. NARA is to manage this acquisition using, among other things, earned value management (EVM). EVM is a project management approach that, if implemented appropriately, provides objective reports of project status and unbiased estimates of anticipated costs at completion. GAO was asked to (1) assess whether NARA is adequately using EVM techniques to manage the acquisition and (2) evaluate the earned value data to determine ERA's cost and schedule performance. To do so, GAO compared agency and contractor documentation with best practices, evaluated earned value data to determine performance trends, and interviewed cognizant officials. NARA has, to varying degrees, established selected best practices needed to manage the ERA acquisition through EVM, but weaknesses exist in most areas.
For example, the scope of effort in ERA's work breakdown structure is not adequately defined, thus impeding the ability to measure progress made on contractor deliverables. These weaknesses exist in part because NARA lacks a comprehensive EVM policy, training, and specialized resources and also frequently replans the program. As a result, NARA has not been positioned to identify potential cost and schedule problems early and thus has not been able to take timely actions to correct problems and avoid program schedule delays and cost increases. ERA's earned value data trends do not accurately portray program status due to the program's weaknesses in implementing EVM; however, historical program trends indicate that future cost overruns will likely be between $195 million and $433 million to fully develop ERA as planned and between $205 million and $405 million at program end. In contrast, the contractor's estimated cost overrun is $2.7 million. Without more useful earned value data, NARA will remain unprepared to effectively oversee contractor performance and make realistic projections of program costs. GAO recommends, among other things, that NARA establish a comprehensive plan for all remaining work; improve the accuracy of earned value performance reports; and engage executive leadership in correcting negative trends. NARA generally concurred with GAO's recommendations.
DOD submitted the first version of its long-term corrosion strategy to Congress in December 2003. DOD developed this long-term strategy in response to direction in the Bob Stump National Defense Authorization Act for Fiscal Year 2003. In November 2004, DOD revised its long-term corrosion strategy and issued its DOD Corrosion Prevention and Mitigation Strategic Plan. DOD strives to update its strategic plan periodically, most recently in February 2011, and officials stated the next update is planned for 2013. The purpose of DOD's strategic plan is to articulate policies, strategies, objectives, and plans that will ensure an effective, standardized, affordable DOD-wide approach to prevent, detect, and treat corrosion and its effects on military equipment and infrastructure.

In January 2008, the department first issued DOD Instruction 5000.67, Prevention and Mitigation of Corrosion on DOD Military Equipment and Infrastructure, which was canceled and reissued with the same title in February 2010. The instruction's purpose is to establish policy, assign responsibilities, and provide guidance for the establishment and management of programs to prevent or mitigate corrosion of DOD's military equipment and infrastructure. This instruction describes legislative requirements and assigns the Corrosion Executives responsibility for certain corrosion prevention and control activities in their respective military departments. It requires the Corrosion Executives to submit information on proposed corrosion projects to the Corrosion Office with coordination through the proper military department's chain of command, as well as to develop, support, and provide the rationale for resources to initiate and sustain effective corrosion prevention and mitigation programs in each military department. According to statute and DOD guidance, the Director of the Corrosion Office is responsible for the prevention and mitigation of corrosion of DOD equipment and infrastructure.
The Director’s duties include developing and recommending policy guidance on corrosion control, reviewing the corrosion-control programs and funding levels proposed by the Secretary of each military department during DOD’s internal annual budget review, and submitting recommendations to the Secretary of Defense regarding those programs and proposed funding levels. In addition, the Director of the Corrosion Office periodically holds meetings with the DOD Corrosion Board of Directors and serves as the lead on the Corrosion Prevention and Control Integrated Product Team. The Corrosion Prevention and Control Integrated Product Team includes representatives from the military departments, the Joint Staff, and other stakeholders who help accomplish the various corrosion-control goals and objectives. This team also includes the seven Working Integrated Product Teams which implement corrosion prevention and control activities. These seven product teams are organized to address the following areas: corrosion policy, processes, procedures and oversight; metrics, impact, and sustainment; specifications, standards, and qualification process; training and certification; communications and outreach; science and technology; and facilities. Appendix A of the DOD Corrosion Prevention and Mitigation Strategic Plan contains action plans for each product team, including policies, objectives, strategies, planned actions and results to date. 
To accomplish its oversight and coordination responsibilities, the Corrosion Office has ongoing efforts to improve the awareness, prevention and mitigation of corrosion of military equipment and infrastructure, including (1) hosting triannual corrosion forums; (2) conducting cost-of-corrosion studies; (3) operating two corrosion websites; (4) publishing an electronic newsletter; (5) working with industry and academia to develop training courses and new corrosion technologies; and (6) providing funding for corrosion-control demonstration projects proposed and implemented by the military departments. According to the Corrosion Office, these corrosion activities enhance and institutionalize the corrosion prevention and mitigation program within DOD. To receive funding from the Corrosion Office, the military departments submit project plans for their proposed projects that are evaluated by a panel of experts assembled by the Director of the Corrosion Office. The Corrosion Office generally funds up to $500,000 per project, and the military departments generally pledge matching funding for each project that they propose. The level of funding by each military department and the estimated return on investment are two of the criteria used to evaluate the proposed projects. Appendix D of the DOD Corrosion Prevention and Mitigation Strategic Plan includes instructions for submitting project plans, along with instructions for submission of final and follow-on reports. For the project selection process, the military departments submit preliminary project proposals in the fall and final project proposals in the spring, and the Corrosion Office considers the final proposals for funding. Projects that meet the Corrosion Office’s criteria for funding are announced at the end of the fiscal year. Figure 1 provides additional details of the project selection process for a given fiscal year. 
As part of the project selection process, DOD’s strategic plan states that the estimated return on investment, among other things, must be documented for each proposed project. The total cost for each project is based on both the funding requested from the Corrosion Office and the funding provided by the military departments. DOD records reflect varying estimated returns on investment and savings for each proposed project submitted by the military departments. According to the Corrosion Office, a senior official within each military department reviews the proposed projects, including the estimated return on investment, before the project plans are submitted to the Corrosion Office. Section 2228 of Title 10 of the United States Code requires the Secretary of Defense to include the expected return on investment that would be achieved by implementing the department’s long-term strategy for corrosion, including available validated data on return on investment for completed corrosion projects and activities, in his annual corrosion-control budget report to Congress. DOD’s strategic plan stipulates three reporting requirements for approved projects. According to Corrosion Office officials, the project managers typically are responsible for completing the reporting requirements. The requirements are to: (1) provide bimonthly or quarterly project updates until the project is completed, (2) submit a final report as soon as each project is completed, and (3) submit a follow-on report within two years after a project is completed and the technology has transitioned to use within the military department. Figure 2 provides a breakout of the number of projects that have reached various reporting milestones as of November 2012. 
There were 105 infrastructure-related corrosion projects funded from fiscal years 2005 through 2012, in which

41 projects had reached the milestone for submitting final and follow-on reports, including return-on-investment reassessments;

39 projects had reached only the milestone for submitting final reports; and

25 projects were not yet complete, thus they had not reached the milestone for submitting final or follow-on reports.

In September 2012, we reported that the Corrosion Office performs an analysis to determine the average return-on-investment estimates for projects that it cites in its annual corrosion-control budget report to Congress. Additionally, we reported that the Corrosion Office did not use the most up-to-date data for the projects' returns on investment or provide support for the projects' average return on investment that was cited in its fiscal year 2013 corrosion-control budget report to Congress. We recommended that DOD provide an explanation of its return-on-investment methodology and analysis, including the initial and, to the extent available, the reassessed return-on-investment estimates. However, DOD did not agree with our recommendation. In its written comments, DOD generally restated the methodology in its strategic plan, which the military departments use to estimate the projected return on investment of each project. DOD did not provide any additional reasons why it did not use current return-on-investment estimates in its report to Congress. Additionally, in our December 2010 review, we recommended that DOD update applicable guidance, such as Instruction 5000.67, to further define the responsibilities of the Corrosion Executives to include more specific oversight and review of corrosion project plans before and during the selection process.
However, DOD did not agree with our recommendation and stated that DOD-level policy documents are high-level documents that delineate responsibilities to carry out the policy and that specific implementing guidance is provided through separate documentation. Further, in some of our earlier work, we reported that the secretaries of the military departments did not have procedures and milestones to hold major commands and program offices accountable for achieving strategic goals to address corrosion regarding facilities and weapons systems. DOD agreed with our recommendations to define and incorporate measurable, outcome-oriented objectives and performance measures into its long-term corrosion mitigation strategy that show progress toward achieving results. Additionally, in May 2013, GAO issued a separate report assessing DOD's and the military departments' strategic plans. All the related GAO products are listed at the end of this report.

DOD has not ensured that all final and follow-on reports on the results of its infrastructure-related corrosion projects were submitted as required by its strategic plan. As of November 2012, our review found that project managers had not submitted the required final reports for 50 of the 80 projects (over 60 percent) funded from fiscal years 2005 through 2010. Also, for the 41 of those projects that were funded from fiscal years 2005 through 2007, we found that project managers had not submitted the required follow-on reports for more than a third of the projects (15 of the 41). DOD's Corrosion Office, the military departments' Corrosion Executives, and the military departments' project managers cited various reasons for not meeting reporting milestones. DOD's Corrosion Office has not effectively used its existing authority to hold project management offices accountable for submitting required reports at prescribed milestones, and the office lacks an effective method for tracking reports submitted by the project managers.
Moreover, DOD has not provided clear guidance to the military departments' Corrosion Executives on their responsibilities and authorities for assisting the Corrosion Office in holding their project management offices accountable for submitting reports for their infrastructure-related corrosion projects. DOD has invested more than $68 million in 80 infrastructure-related corrosion projects funded from fiscal years 2005 through 2010, but project managers have not submitted all of the required reports on whether the corrosion-control technologies are effective. DOD's strategic plan states that project plans should include a milestone schedule for reporting, including quarterly status reports, final reports, and follow-on reports. According to Corrosion Office officials, if a project is approved, a quarterly status report is required starting the first week of the fiscal quarter after the contract award and every three months thereafter until the final report is submitted. Also, DOD's strategic plan requires a final report at project completion, and requires a follow-on report two years after project completion and transition to use within the military departments. According to Corrosion Office officials, these reports provide valuable information on the results of corrosion projects and in planning future projects. Corrosion Office officials stated that project managers must submit final reports at project completion, which is typically within two years after the receipt of the funding of each project. As stipulated in DOD's strategic plan, final reports should include certain content, such as an executive summary, lessons learned, recommendations, and conclusions. However, we found that 50 of the 80 required final reports (63 percent) for projects funded in fiscal years 2005 through 2010 had not been submitted. Table 1 shows the status of final reports submitted by each service for infrastructure-related projects.
DOD's strategic plan also requires that follow-on reports be submitted within two years after a project is completed and transitioned to use in the military department. According to Corrosion Office officials, this transition period includes up to one year to implement the technology in a military department. Corrosion Office officials also told us that they expected the follow-on reports to be submitted within five years of a project's initial funding. Therefore, follow-on reports for the 41 completed projects funded in fiscal years 2005 through 2007 were due on or before the end of fiscal year 2012. We found that project managers had not submitted 15 of the 41 required follow-on reports (37 percent). DOD's strategic plan states that the follow-on reports should include an assessment of the following areas: project documentation, project assumptions, responses to mission requirements, performance expectations, and a comparison between the initial return-on-investment estimate included in the project plan and the new estimate. Table 2 shows the status of follow-on reports submitted by each service. In appendix III of this report, we provide details of the returns on investment for all follow-on reports that were submitted. According to officials in the Corrosion Office, final and follow-on reports are used to assess the effectiveness of the corrosion projects and determine whether continued implementation of the technology is useful. Corrosion Office officials stated that, in reviewing project managers' final reports, they focus on any lessons learned, technical findings, conclusions, and recommendations, and on whether the results from the report should trigger follow-on investigations of specific technology and a review for broader applications of the technology. Officials stated that they review follow-on reports to assure that necessary implementation actions have been taken and to review changes in the return-on-investment estimates.
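The follow-on comparison of initial and reassessed return-on-investment estimates amounts to recomputing one ratio with updated data. A minimal, hypothetical sketch: the simple net-savings-over-cost formula and all figures below are assumptions for illustration, not the specific methodology or data prescribed by DOD's strategic plan:

```python
def roi(savings, cost):
    """Return on investment as net savings divided by project cost.

    This simple ratio is an illustrative assumption; DOD's strategic
    plan defines its own estimation methodology.
    """
    return (savings - cost) / cost

# Hypothetical project with a $500,000 total cost: the initial estimate
# from the project plan versus a follow-on reassessment two years later.
initial = roi(savings=2_500_000, cost=500_000)     # planned estimate: 4.0
reassessed = roi(savings=1_750_000, cost=500_000)  # follow-on estimate: 2.5
change = reassessed - initial                      # -1.5: estimate fell
```

A reassessment like this is what makes the follow-on reports useful for oversight: without them, only the initial, typically optimistic, estimate is available.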
The military departments and Corrosion Office provided various reasons to explain why project managers did not complete and submit mandatory final and follow-on reports within expected timeframes. For example, officials at the Army Engineer Research and Development Center, Construction Engineering Research Laboratory—who are the project managers for Army infrastructure projects—stated that funding challenges, problems with contractor performance, and personnel issues contributed to delays in completing the final reports, but acknowledged that it was their responsibility to reduce their longstanding backlog. Additionally, according to the Navy’s Corrosion Executive, officials of the Naval Facilities Engineering Command (Engineering Support Center)—who are the project managers for Navy infrastructure projects—did not have sufficient funding to complete and submit all required reports. Finally, according to a Corrosion Office official, the final report for the one Air Force fiscal year 2005 project was not submitted because the project manager did not complete it before retiring. Additionally, Corrosion Office officials cited other reasons that a project manager may be late in completing the required reports, such as lengthy coordination processes and the lack of priority that military departments’ officials place on completing required reports. The officials stated that they expect the military departments’ project managers to complete final reports within two years after receipt of funds, and that it is the military departments’ responsibility to plan so that funding is available to complete all required reports. To assist the military departments with this responsibility, the Corrosion Office in fiscal year 2011 offered personnel and funding resources to the military departments to conduct the return-on-investment reassessments needed to complete follow-on reports.
According to the Corrosion Office, only the Navy accepted the funds, using them to complete all but one return-on-investment reassessment. According to an official in the Army Corrosion Executive’s office, he informed the project managers about the additional funding, but no one accepted the offer. We found at least four fiscal year 2006 projects where Army project managers did not use the available funding to complete and submit the required reports. Officials from the Army’s project management office told us that the project managers did not accept the additional funding to complete the 2006 projects because they had work performance issues with the contractor assigned to complete the return-on-investment reassessments. In April 2012, these officials told us that follow-on reports for three projects had been written but not submitted to the Corrosion Office, and the remaining follow-on report was still under development. As of November 2012, we found that these reports, which were due by the end of fiscal year 2011, still had not been submitted to the Corrosion Office. Further, the Air Force did not complete the follow-on report for its one corrosion project funded in fiscal year 2005. According to Corrosion Office officials, they did not require the Air Force to complete the follow-on report for this project because the demonstration was successful and the technology was implemented elsewhere within DOD. Corrosion Office officials told us that they track each corrosion project’s progress and review submitted final and follow-on reports for findings and broad application of corrosion-prevention techniques and approaches, including changes in the projects’ initial and reassessed return-on-investment estimates. However, the Corrosion Office’s tracking system is limited and does not record the reason for late reporting or set new reporting deadlines.
According to Section 2228 of Title 10 of the United States Code, the Secretary of Defense is required to include specific information in his annual corrosion-control budget report to Congress, including the expected return on investment that would be achieved by implementing the department’s long-term corrosion strategy and available validated data on return on investment for completed corrosion projects and activities. The Standards for Internal Control in the Federal Government require federal managers to establish internal control activities, such as controls over information processing, segregation of duties, and accurate and timely recording of transactions and events (including pertinent information to determine whether they are meeting their goals), to help ensure that management’s directives are carried out and managers achieve desired results through effective stewardship of public resources. Specifically, the Corrosion Office employs a contractor to maintain electronic records about all corrosion projects. Corrosion Office officials stated that project managers submit copies of their reports to the Corrosion Office and to the respective Corrosion Executive. On a monthly basis, the contractor checks each project’s records to determine whether the project managers have submitted the required reports. If a project manager has not submitted a required report, the contractor notifies the Corrosion Office, and that office contacts the relevant project manager and that manager’s Corrosion Executive. At that point, a Corrosion Office official encourages the project manager to submit the report as soon as possible, but the Corrosion Office does not record a reason for late reporting and does not set a new reporting deadline. Also, Corrosion Office officials stated that they elevate discussions about late filers in the three forums held each year that include meetings between Corrosion Office officials and Corrosion Executives.
However, the Corrosion Office’s tracking system does not require that the project managers include certain information, such as stating reasons for missing a reporting deadline and identifying a revised deadline for submitting their reports. Additionally, the format developed by the Corrosion Office for completing the follow-on reports does not include a data field that would document when the project managers submitted their follow-on reports to the office. By not adopting an enhanced tracking system that includes revised deadlines, among other things, the Corrosion Office is unable to effectively monitor whether project managers are working toward new timeframes to complete overdue reports. Without effective tracking, the Corrosion Office will allow a number of project managers to continue the practice of not submitting the required reports, and project managers will not fully inform decision makers of the latest outcomes of the corrosion-control projects. Section 2228 of Title 10 of the United States Code requires the Secretary of Defense to develop and implement a long-term corrosion strategy that should include, among other things, implementation of programs to ensure a focused and coordinated approach to collect, review, validate, and distribute information on proven corrosion prevention methods and products. In response to this requirement, the Corrosion Office oversees corrosion projects and uses routine communication methods and follow-up to encourage the project management offices to submit the required reports, but the office is not employing other options that would hold project management offices accountable for reporting milestones. For example, Corrosion Office officials stated that they have initiated telephone conversations and e-mails to project managers to reemphasize reporting requirements and have had limited success in obtaining some of the outstanding reports.
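An enhanced tracking record of the kind described above, one that captures the reason a reporting deadline was missed, any revised deadline, and the actual submission date, could be as simple as the following minimal sketch. All field names and values here are illustrative assumptions on our part, not a description of the Corrosion Office's actual system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReportRecord:
    project_id: str
    report_type: str                       # "final" or "follow-on"
    original_deadline: str                 # e.g., "FY2011 Q4"
    submitted_on: Optional[str] = None     # date the report was actually received
    missed_reason: Optional[str] = None    # e.g., "contractor performance issues"
    revised_deadline: Optional[str] = None # new deadline agreed with the project office

    def is_overdue(self) -> bool:
        # A record is outstanding until a submission date is recorded
        return self.submitted_on is None

    def current_deadline(self) -> str:
        # The revised deadline, when one has been set, supersedes the original
        return self.revised_deadline or self.original_deadline

def overdue_reports(records: List[ReportRecord]) -> List[ReportRecord]:
    """Return outstanding records so late filers can be raised at the
    triannual forums with a documented reason and a revised deadline."""
    return [r for r in records if r.is_overdue()]
```

With fields like `missed_reason` and `revised_deadline` populated, the office could monitor whether project managers are working toward new timeframes rather than simply noting that a report is late.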
However, there are other options beyond routine communications that the Corrosion Office could take to hold project managers accountable for submitting timely reports, such as using funding incentives or changing evaluation criteria for project selection. Corrosion Office officials told us that they considered holding back funding for future projects from project management organizations that missed reporting deadlines, but they chose not to implement this action because it could delay progress in addressing corrosion control within the department. Although the Corrosion Office in 2011 offered the military departments additional funding to complete and submit follow-on reports, Corrosion Office officials stated that they would not set aside a portion of the office’s annual funds in the future to assist project management offices in the completion of outstanding reports, due to uncertainty in annual funding. Further, a senior Corrosion Office official stated that the office has considered, but not adopted, criteria for new projects that would include a project management office’s past reporting performance as an indicator for assessing corrosion project plans. DOD’s strategic plan refers to factors that are used by the evaluation panel to assess project plans and determine which to approve and fund. The evaluation factors include, among other things, whether the proposed project can be completed within a two-year timeframe, the risk associated with the project, and the estimated return on investment. The senior Corrosion Office official told us that the office considered including a project management office’s history of reporting performance as a criterion for deciding whether to approve and fund the office’s future projects; however, the office decided not to do so.
The official stated that a successful project is one that reduces the impact of corrosion on weapon systems and/or infrastructure, and a project’s report in and of itself does not contribute to the success of the project. However, late submissions of reports could delay communication of project outcomes as planners are considering funding new projects, as well as limit key information that should be included in the annual corrosion-control budget report to Congress. The Corrosion Office has not implemented other options to better ensure that project managers consistently submit required reports. Internal control standards emphasize the importance of performance-based management to ensure program effectiveness, efficiency, and good stewardship of government resources. Without using its existing authorities for oversight and coordination to identify and implement possible options or incentives for addressing the various funding, personnel, or other reasons cited by project management offices for not meeting reporting milestones, the Corrosion Office may be missing opportunities to effectively reduce the number of outstanding reports, enforce requirements, and ensure that the valuable information in past projects is known and appropriately documented. The three military departments’ Corrosion Executives work with project managers for the infrastructure-related corrosion projects to ensure that the reporting requirements are being met; however, they have not taken effective actions to ensure that all project managers submit their required reports on a timely basis. 
DOD Instruction 5000.67 describes responsibilities for Corrosion Executives, such as serving as the principal point of contact for each military department to the Director of the Corrosion Office; developing, supporting, and providing the rationale for resources for initiating and sustaining effective corrosion prevention and control in the department; evaluating the effectiveness of each department’s program; and establishing a process to collect information on the results of corrosion prevention and control activities. While DOD’s strategic plan and other guidance—such as its corrosion instruction—identify the Corrosion Executives’ overall role in the management of each military department’s corrosion prevention and control program, the Corrosion Executives do not have clearly defined roles for holding their project managers accountable for submitting required reports. During our discussions with the military departments’ Corrosion Executives, we found that each executive varied in describing the extent of his work with corrosion project managers to ensure that the required reports are completed. For example, officials within the office of the Army’s Corrosion Executive told us that they are involved in all aspects of the corrosion demonstration projects and receive updates and reports from the Army’s project managers. Also, these officials stated that they are in the process of developing additional policy on facilities and other infrastructure to improve the corrosion project process and provide an Army funding mechanism to cover costs of reporting after expiration of initial project funding. However, for the infrastructure-related corrosion projects, the other two military departments have not been as involved as the Army in ensuring that project managers submit required reports.
For example, the Air Force’s Corrosion Executive stated that he coordinates with the Corrosion Office to track outstanding reports and can task the project managers to complete the required reports by going through the appropriate chain of command. Also, the Navy’s Corrosion Executive told us that he maintains a level of awareness on the status of projects’ reports, but does not play an active role in the submission of project reports because project managers have the responsibility to submit reports to the Corrosion Office. DOD’s strategic plan and instruction assign specific responsibilities to the Corrosion Executives; however, these documents do not clearly define a role for the Corrosion Executives in ensuring that all project managers submit mandatory reports. Without clearly defined responsibilities for the Corrosion Executives to help ensure required reporting, the Corrosion Executives may not take a leading role in holding project managers accountable for completing and submitting mandatory reports. If a number of project managers continue to be late in completing mandatory reports, decision makers are unlikely to be fully informed about whether implemented projects used effective technology to address corrosion issues and whether this technology could have broader uses throughout the military departments’ installations. The Corrosion Office maintains data for its infrastructure-related corrosion projects, but the office has not updated all of its records to accurately reflect the return-on-investment estimates that are provided in the military departments’ follow-on reports. The data maintained by the Corrosion Office includes the financial investments provided by the Corrosion Office and the military departments, the estimated savings expected, and the calculated return-on-investment estimates for all of the military departments’ funded and unfunded corrosion projects. 
Additionally, for each project, the Corrosion Office maintains data on whether the project managers have completed and submitted the required follow-on report and the value of the reassessed return-on-investment estimate in that follow-on report. The follow-on report shows, among other things, a comparison of the new estimate and the initial return-on-investment estimate included in the project plan. According to Corrosion Office officials, the data contained in its records system are used for reporting purposes, both internally and externally, such as the estimated returns on investment that are summarized in DOD’s annual corrosion budget report to Congress. According to Standards for Internal Control in the Federal Government, agencies should use internal controls that provide a reasonable assurance that the agencies have effective and efficient operations and have reliable financial reports and other reports for internal and external use. Further, this guidance requires, in part, controls over information processing and accurate and timely recording of transactions and events, to help ensure that management’s directives are carried out and managers achieve desired results through effective stewardship of public resources. During our review, we found differences between the initial return-on-investment estimates included in project plans and the initial estimates in the Corrosion Office’s records for 44 of the 105 projects (42 percent). The Corrosion Office provided reasons for correcting these data. Specifically, according to Corrosion Office officials, there were two main reasons for these differences: (1) funding-level changes between the estimate included in the initial project plan and the funding provided when the project was approved; and (2) incorrect computations of the estimated returns on investment by the project managers, which required the Corrosion Office to recalculate the estimates to ensure consistency and accuracy.
However, when comparing the reassessed return-on-investment estimates included in the projects’ follow-on reports with the reassessed estimates in the Corrosion Office’s records, we found that the Corrosion Office had not updated all of its records with the return-on-investment estimates from the follow-on reports. Specifically, we found that for 5 of 25 projects (20 percent) funded in fiscal years 2005 through 2007, the Corrosion Office had not updated its records to reflect the reassessed return-on-investment estimates included in the projects’ follow-on reports. The return-on-investment estimates for these 5 projects were from outdated sources, such as project plans and final reports. Specifically, the return-on-investment estimates for 3 Army projects were taken from final reports that had been submitted to the Corrosion Office in June 2007. Also, the return-on-investment estimates for one Army and one Navy project were from the project plans that had been submitted to the Corrosion Office in June 2004 and October 2004, respectively. Table 3 identifies the 5 projects funded in 2005 that had discrepancies in data. While the Corrosion Office has created records to track the estimated returns on investment of infrastructure-related corrosion projects, we found that the office has not adopted a best practice to maintain reliable data with accurate and timely information throughout its records. The Corrosion Office may use these return-on-investment data in its reporting, both internally and externally, such as in DOD’s annual corrosion budget report to Congress. Additionally, in September 2012, we reported that the Corrosion Office did not use current data for the projects’ returns on investment or provide support for the projects’ average return on investment.
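The reconciliation we performed, comparing the reassessed return-on-investment estimate in each submitted follow-on report against the estimate held in the office's records, can be sketched as a simple cross-check. The project identifiers and ratio values below are hypothetical; this is a sketch of the comparison, not the office's or our actual tooling.

```python
def find_roi_discrepancies(office_records: dict, follow_on_reports: dict) -> list:
    """Flag projects whose reassessed ROI in the office's records does not
    match the reassessed ROI stated in the submitted follow-on report.

    Returns (project_id, recorded_roi, reported_roi) tuples for each mismatch.
    """
    discrepancies = []
    for project_id, reported_roi in follow_on_reports.items():
        recorded_roi = office_records.get(project_id)
        if recorded_roi is not None and recorded_roi != reported_roi:
            discrepancies.append((project_id, recorded_roi, reported_roi))
    return discrepancies

# Hypothetical data: ROI expressed as a ratio (e.g., 12.0 means 12:1)
office_records = {"A-05-01": 12.0, "A-05-02": 8.5, "N-05-01": 20.0}
follow_on_reports = {"A-05-01": 9.3, "A-05-02": 8.5, "N-05-01": 15.1}

# Flags the records still carrying an estimate from an outdated source
print(find_roi_discrepancies(office_records, follow_on_reports))
```

A periodic check of this kind would surface records, like the 5 of 25 we identified, that still carry estimates from superseded project plans or final reports.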
Without accurate and timely return-on-investment estimates maintained in the Corrosion Office’s records for corrosion projects, Congress and DOD corrosion-control managers may not have sufficient and reliable information about returns on investment for their oversight of completed projects. All of the military departments’ Corrosion Executives use mechanisms—such as product team meetings, briefings, conferences, and site visits—to collect and disseminate information on infrastructure-related corrosion activities within their departments. Additionally, the Corrosion Executives host information sessions during the triannual DOD corrosion forums to discuss their corrosion-related issues. However, in our interviews with installation officials who were involved with corrosion work, slightly more than half of the officials were unaware of DOD’s Corrosion Office, their respective Corrosion Executive, or the training, information, and other resources available through the related offices. According to the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, Section 903, the Corrosion Executive at each military department is, among other things, responsible for coordinating corrosion prevention and control activities with the military department and the Office of the Secretary of Defense, the program executive officers, and relevant major subordinate commands. Additionally, DOD Instruction 5000.67 directs each Corrosion Executive, in coordination with the proper chain of command, to establish and maintain a process to collect information on the results of corrosion-control activities for infrastructure within its department. Further, the DOD Corrosion Prevention and Mitigation Strategic Plan includes a communications goal to fully inform all levels of DOD about all aspects of corrosion work and states that the rapid and reliable exchange of information is the core of DOD’s new corrosion-control culture.
Also, each military department developed documents for corrosion control and prevention that acknowledge the importance of communication on corrosion control. Likewise, internal control standards show that organizations benefit from communicating timely information to management and others to help them achieve their responsibilities. During this review, we found that the military departments’ Corrosion Executives use various mechanisms to collect and disseminate corrosion-related information within each department’s chain of command. Additionally, we learned that the Corrosion Executives have encountered challenges in ensuring that information about their infrastructure-related corrosion-control initiatives reaches all relevant service-level officials. Specifically, each military department identified the following mechanisms and challenges: Army—In the 2012 U.S. Army Corrosion Prevention and Control Strategic Plan, the department established an Army Corrosion Board and an Army Corrosion Integrated Product Team to address corrosion issues. According to officials in the Corrosion Executive’s office, the board has held its first organizational meeting and the integrated product team meets as needed, often virtually. Additionally, the officials explained that they communicate key information on corrosion of Army facilities and other infrastructure through the relevant Army offices in the chain of command for installations, using data calls. However, one Corrosion Executive official stated that the Army does not have a formal process to communicate directly to officials in the field about lessons learned or best practices for addressing corrosion of facilities and other infrastructure. The Army’s strategic plan includes the goal of addressing poor communication and outreach that may hinder corrosion-control solutions from being implemented in the field.
Navy—The Navy’s Annual Report on Corrosion for Fiscal Year 2011 states that a concerted awareness program is one of the cornerstones of improving communications about corrosion control and prevention within the Department of the Navy, which includes the United States Navy and the United States Marine Corps. The Corrosion Executive chairs the department’s Corrosion Cross-Functional Team, an internal group of subject matter experts and relevant command officials, which serves as the primary method for coordinating within the department. Additionally, the Navy’s Corrosion Executive noted that he works within the department’s applicable chain of command for corrosion issues for facilities and other infrastructure. Further, the Corrosion Executive stated that the office communicated its roles and responsibilities through information provided in regular department communications, such as bulletins, briefings, and conferences, and also through site visits and assessments. However, he noted that the frequency of opportunities for conferences and site visits will be limited in the future due to budget constraints. The Navy’s strategic plan for corrosion notes that it will continue to use communications as a tool in its corrosion-control efforts. Air Force—In the May 2012 Air Force Enterprise Corrosion Prevention and Control Strategic Plan, the department acknowledged that facilities and other infrastructure organizations have not been integrated into the department’s corrosion program. In its strategic plan, the Air Force highlighted the need to establish lines of communication, structures, and processes to ensure that facilities incorporate appropriate corrosion control throughout each life cycle. Also, the Corrosion Executive stated that the department in June 2012 created a Corrosion Control and Prevention Working Group, in which he serves as the lead and meets regularly with working group members from the Air Force’s major commands and relevant components.
According to the Corrosion Executive, the means for disseminating and collecting information from the department’s installations are the service organizations within the chain of command for the affected facilities and other infrastructure. He also stated that the service’s training curriculum will incorporate important information as needed. During our review, managers and other public works officials at 16 of 31 installations stated that they were not familiar with the Corrosion Office. Officials also told us that their installations could benefit from the additional information on corrosion control and prevention offered by these offices. However, Corrosion Executives stated in interviews that they disseminate corrosion information through each department’s chain of command. In response to our questions, installation officials provided views in the following areas: Awareness of corrosion offices—In response to our questions of officials who are responsible for installation maintenance and would be involved in corrosion-control activities, officials at 16 of 31 installations stated that prior to our work they were not aware of the Corrosion Office or the relevant military department’s Corrosion Executive. In addition, officials at 24 of 29 installations stated they had not contacted the Corrosion Office about their corrosion work in the last three years; officials similarly stated that they had not contacted the respective Corrosion Executive during the same period. However, officials at 23 of 29 installations stated that they had contacted their services’ installation management command or major commands about corrosion work in the last three years. At least four officials qualified their responses to this question by stating that other officials might be more knowledgeable about specific documents due to the nature of their positions. Interest in additional information—Many installation officials expressed interest in receiving additional information about corrosion resources.
For example, more than half of the interviewed officials (17 of 31) stated that the Corrosion Office or the relevant military department’s Corrosion Executive could provide more communications and enhance awareness about corrosion issues or corrosion-related resources. An identical number of officials stated that DOD’s and the military departments’ corrosion-control offices could support corrosion-related training as a useful resource for installations. Specifically, an Army installation official noted that it would be beneficial for the military services and DOD to disseminate information about the Corrosion Office and the military departments’ Corrosion Executives, including their roles and responsibilities and the assistance they can provide. Other suggestions—Officials at installations offered other suggestions for exchanging information among installations, such as holding regular forums and highlighting opportunities for contact with counterparts at other installations, having a centralized source for accessing corrosion-related information, disseminating case studies or best practices relevant to DOD, enhancing use of existing service-issued newsletters, and planning conferences or communities of practice. In addition, five respondents suggested providing important corrosion-related information to the service headquarters, regional command, or management commands for distribution to the installations. Additionally, in interviews at the services’ installation management commands, we found officials who raised similar concerns about communications. Officials from installation management commands stated that they had little contact with Corrosion Executives. For example, during our interview with one Air Force major command, a command official stated that the most recent information he had about the Air Force’s Corrosion Executive was from 2008.
Another major command’s response did not include the Corrosion Executive as an organization it interacts with on corrosion issues. Similarly, officials at three different locations—the Commander of Navy Installations Command, the Marine Corps Installation Command, and the Army’s Installation Management Command Headquarters—stated that they had limited or no interaction on infrastructure issues with the Corrosion Office and their respective Corrosion Executive. In addition, the Army official stated that he would like to receive information on training by the Corrosion Office regarding corrosion of infrastructure, and that the best channel for the information would be through the Assistant Chief of Staff for Installation Management, an office that works with the Army’s Corrosion Executive. In evaluating communications for corrosion issues for facilities and other infrastructure, we found that all relevant service officials do not receive key information because the military departments’ Corrosion Executives do not have a targeted communication strategy for their military department and an accompanying action plan to ensure frequent communications between Corrosion Executives and all service officials involved in corrosion activities for facilities and other infrastructure. The military departments mention communication in their strategic plans, but they do not have specific steps for communicating corrosion-control information for facilities and other infrastructure at every level. Our prior work on federal organizations identified key practices and implementation steps for establishing a communication strategy to create shared expectations and report related progress.
Without a targeted communication strategy and accompanying action plan, the Corrosion Executives cannot ensure that service managers of facilities and other infrastructure will have access to all information and resources available for dealing with corrosion and are aware of the most effective and efficient methods for corrosion control. (See GAO, Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations, GAO-03-669 (Washington, D.C.: July 2, 2003).) In conclusion, many of the required final and follow-on reports for DOD’s infrastructure-related corrosion projects have not been submitted when due. In fact, the Corrosion Office has not adopted methods to enhance tracking, such as recording the reasons for missed reporting deadlines, new reporting deadlines, and the submission dates for follow-on reports. Further, although the Corrosion Office encourages project managers to complete outstanding reports, it has not exercised its existing oversight and coordination authorities to identify and implement possible options or incentives for addressing the various funding, personnel, or other reasons cited by project management offices for not meeting reporting milestones. Also, DOD’s strategic plan and other guidance do not clearly define a role for the military departments’ Corrosion Control and Prevention Executives (Corrosion Executives), who could assist the Corrosion Office, in holding the military departments’ project management offices accountable for submitting infrastructure-related reports in accordance with DOD’s strategic plan. Without effective actions to ensure timely submission of reports, decision makers may be unaware of potentially useful technologies to address corrosion. Moreover, the Corrosion Office is not always updating its records to ensure accurate information is maintained on reassessed return-on-investment estimates for infrastructure-related corrosion projects.
Without accurate return-on-investment estimates for corrosion projects, Congress and DOD corrosion-control managers may not have sufficient information about returns on investment for their oversight of completed projects. Finally, slightly more than half of the installation officials (16 of 31 officials) whom we interviewed were unaware of DOD’s Corrosion Office, their respective Corrosion Executive, or the training, information, and other resources available through the related offices. Without a targeted communication strategy and accompanying action plan, the military departments’ Corrosion Executives cannot ensure that managers of facilities and other infrastructure will have access to all information and resources available for dealing with corrosion and are aware of the most effective and efficient methods for corrosion control. We are making five recommendations to improve DOD’s corrosion prevention and control program: To improve accountability for reporting the results of corrosion-control demonstration projects affecting DOD infrastructure, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of the Office of Corrosion Policy and Oversight to take steps to enhance reporting and project tracking, such as noting the reasons why project management offices missed a reporting deadline and including any revised reporting deadlines for final and follow-on reports. 
To improve the military departments’ submission of completed reports for infrastructure-related corrosion-control demonstration projects at prescribed milestones, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of the Office of Corrosion Policy and Oversight to use the office’s existing authority to identify and implement possible options or incentives for addressing the various funding, personnel, and other reasons cited by project management offices for not meeting reporting milestones. Further, to provide greater assurance that the military departments will meet reporting milestones for future projects, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics—in coordination with the Director of the Office of Corrosion Policy and Oversight—revise corrosion-related guidance to clearly define a role for the military departments’ Corrosion Control and Prevention Executives to assist the Office of Corrosion Policy and Oversight in holding their departments’ project management offices accountable for submitting infrastructure-related reports in accordance with the DOD Corrosion Prevention and Mitigation Strategic Plan. To ensure that Congress, DOD, and officials of the military departments’ infrastructure-related corrosion activities have the most complete and up-to-date information, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of the Office of Corrosion Policy and Oversight to take actions to ensure that its records reflect complete, timely, and accurate data on the projects’ return-on-investment estimates.
To ensure that all relevant infrastructure officials receive pertinent corrosion information, we recommend that the Secretaries of the Army, Navy, and Air Force direct their assistant secretaries responsible for acquisition, technology, and logistics to require the military departments’ Corrosion Control and Prevention Executives—in coordination with their installation management commands and in consultation with the Office of Corrosion Policy and Oversight—to develop a targeted communication strategy and an accompanying action plan for their departments to ensure the timely flow of key information to all relevant service officials, particularly to officials at the installation level, about corrosion-control activities and initiatives, such as training opportunities and outcomes of the infrastructure-related corrosion projects. We provided a draft of this report to DOD for comment. In its written comments, reprinted in appendix IV, DOD partially concurred with three of our recommendations and did not agree with two recommendations. DOD partially concurred with our first recommendation to take steps to enhance the tracking and reporting of its infrastructure-related corrosion projects. In its comments, DOD stated that it is developing a web-based tracking tool for the Corrosion Office, Corrosion Executives, and project managers to input and extract project-related data, and DOD expects the change to result in increased timeliness and standardization of project data, including revised reporting deadlines for final and follow-on reports. While this system may address our recommendation, DOD did not state when the new system would be available for use.
In response to our fourth recommendation that DOD take action to ensure that its records reflect complete, timely, and accurate data on the projects’ return on investment, DOD partially concurred and stated that this new web-based system would provide data, including return-on-investment estimates, and would be accessible to other parties, including the Corrosion Office, Corrosion Executives, and project managers. DOD did not agree with our second recommendation that the Corrosion Office use its existing authority to identify and implement possible options or incentives for addressing the various funding, personnel, and other reasons cited by project management offices for not meeting reporting milestones. In its written comments, DOD disagreed with our recommendation but did not state what actions it would take to improve submission of the completed reports from the military services that DOD’s strategic plan requires for infrastructure-related corrosion projects. DOD stated that prior positive incentives to complete project reports were largely ineffective. However, as our report states, there are examples of military departments responding to incentives, such as the Navy completing 11 of 12 return-on-investment reassessments after the Corrosion Office provided funding as an incentive. The reassessments are the main focus of follow-on reports. Also, DOD stated that its project management offices occasionally miss reporting milestones and generally have done an excellent job of meeting their reporting obligations. However, as our report clearly shows, the project management offices had not submitted 50 of the 80 required final reports (63 percent) and had not submitted 15 of the 41 required follow-on reports (37 percent) to the Corrosion Office. Without timely submission of reports, decision makers may be unaware of potentially useful technologies to address corrosion.
We continue to believe that the Corrosion Office could use its existing authorities to identify and implement other incentives or methods to address the reasons that project management offices cite for not meeting reporting milestones. DOD did not agree with our third recommendation to revise guidance to clearly define the role of Corrosion Executives to assist the Corrosion Office in holding departments’ project management offices accountable for submitting reports in accordance with DOD’s strategic plan. DOD stated that further guidance is not necessary because the requirements are clearly stated in the strategic plan. DOD also stated that Corrosion Executives are given the freedom to manage their programs in the most efficient and effective manner. However, DOD’s strategic plan and guidance do not define a role for the Corrosion Executives in assisting the Corrosion Office in the project reporting process. Our recommendation was intended to fortify the role of Corrosion Executives in ensuring that project management offices within the Corrosion Executives’ respective military departments submit project reports as required in the strategic plan. We continue to believe that the Corrosion Executives could provide the additional management oversight necessary to strengthen corrosion project reporting. DOD partially concurred with our last recommendation that the Secretaries of the Army, Navy, and Air Force direct the relevant assistant secretaries to require the military departments’ Corrosion Executives—in coordination with their installation management commands and in consultation with the Corrosion Office—to develop a targeted communication strategy and an action plan for their departments to ensure the timely flow of key information to all relevant service officials about corrosion-control activities and initiatives, such as training opportunities and outcomes of infrastructure-related corrosion projects.
DOD commented that information flow to installations follows the chain of command to ensure that appropriate individuals receive information necessary for successful mission completion. The department further stated that the Corrosion Office would ensure that training information and project outcomes would be available to all relevant officials via publication in appropriate media; also, DOD stated that during the next review cycle the Corrosion Office would evaluate the military departments’ corrosion prevention and control strategic plans to determine if they adequately address the flow of information. However, we continue to believe that each military department should have a targeted communication strategy, developed in consultation with the Corrosion Office and coordinated with the installation management commands within the military departments, and that strategy should go beyond the publication of information in appropriate media and provide specific steps for communicating corrosion-control information to all relevant service corrosion officials. Such a strategy is important because, as our report states, we found that not all corrosion officials within each military department, particularly at the installation level, were receiving relevant corrosion prevention and control information. Also, we continue to believe that without a targeted communication strategy and action plan, Corrosion Executives cannot ensure that service managers of facilities and other infrastructure will have access to all information and resources for dealing with corrosion and are aware of the most effective and efficient methods for corrosion control. We are sending copies of this report to appropriate congressional committees and to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Director of the DOD Office of Corrosion Policy and Oversight; and the Director of the Office of Management and Budget.
In addition, this letter will be made available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-7968 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this letter. GAO staff who made key contributions to this report are listed in appendix V. DOD’s Office of Corrosion Policy and Oversight (Corrosion Office) sponsors a series of studies to assess the cost of corrosion throughout the department, including three studies—two completed and one ongoing—to determine how much money DOD spends on corrosion activities for its facilities and other infrastructure. These studies are conducted by LMI using a method approved by the Corrosion Office’s Corrosion Prevention and Control Integrated Product Team. According to LMI, its estimation methodology includes construction costs and actual maintenance expenditures for sustainment, restoration, and modernization that are known or can be identified, and focuses on tangible, direct material and labor costs as well as some indirect costs, such as research and development and training. In its studies, LMI noted that although corrosion maintenance costs are a subset of sustainment, restoration, and modernization costs, the tools and analysis methods used by planners to estimate sustainment, restoration, and modernization requirements do not specifically identify corrosion. In its first report, issued in May 2007, LMI determined that spending on corrosion prevention and control for DOD facilities and other infrastructure for fiscal year 2005 was $1.8 billion. In its second report, issued in July 2010, LMI found that spending on corrosion at DOD facilities and other infrastructure decreased from $1.8 billion to $1.6 billion between fiscal years 2005 and 2007, and then increased to $1.9 billion in fiscal year 2008.
In its 2010 report, LMI reported that spending on corrosion as a percent of spending on maintenance dropped from 15.1 percent in fiscal year 2005 to 10.7 percent in fiscal year 2007 and increased to 11.7 percent in fiscal year 2008. Further, in its 2010 report, LMI reported that DOD spent more on corrosion-related maintenance for facilities and other infrastructure than it did on corrosion work related to military construction. Specifically, LMI reported that spending on corrosion for maintenance is three to four times higher than corrosion spending associated with construction, even though overall construction expenditures were nearly double that of overall maintenance expenditures. LMI provided two main reasons for this occurrence: (1) corrosion is rarely identified as a justification for the construction of a new facility; and (2) if estimated construction costs need to be reduced to obtain funding of the project, measures to prevent corrosion are among the first costs to be removed from the estimated costs. Additionally, LMI reported that DOD spent almost twice as much on corrective measures to address corrosion ($1,263 million) as it did on preventive measures to avoid corrosion ($640 million). LMI’s 2010 report shows installations’ estimated expenditures in eight categories of corrective and preventive maintenance for facilities and other infrastructure. Table 4 shows information from that 2010 report about the estimated expenditures for facilities and other infrastructure by maintenance category for fiscal year 2008. In August 2012, LMI began its third study of the cost of corrosion at DOD facilities and other infrastructure to analyze corrosion-related spending for fiscal year 2009 through fiscal year 2011. 
For this report, an LMI official told us that the methodology for classifying the environmental conditions of the installations included in the cost-of-corrosion studies would be the major difference between the 2012 assessment and the prior reports. LMI also acknowledged that there are some challenges and limitations to the methodology and data used in its analysis. These challenges and limitations include, but are not limited to: (1) limited quality controls in the services’ facilities and other infrastructure work order data, in which approximately 25 percent of the records obtained from the military services’ maintenance systems could not be used due to missing key data elements that could not be recreated; (2) the lack of tracking and maintaining of asset availability data for facilities and other infrastructure; and (3) the three-year period between the cost-of-corrosion studies, which means there will be a significant lapse before data can be updated. To address our first objective to determine the extent that project managers submitted required reports to the DOD Office of Corrosion Policy and Oversight (hereafter referred to as the Corrosion Office), we reviewed the February 2011 DOD Corrosion Prevention and Mitigation Strategic Plan and used the reporting milestones outlined in the plan to identify the types of reports required for each project. We obtained project information for the 80 infrastructure-related corrosion demonstration projects funded by the Corrosion Office for fiscal years 2005 through 2010. We requested and reviewed the project documentation—project plans, bimonthly or quarterly reports, final reports, and follow-on reports—to determine if the data and related reports met the Corrosion Office’s reporting requirements. We reviewed the corrosion project documentation for these projects for missing data, outliers, or other errors, and documented where we found incomplete data and computation errors.
For the purposes of our work in reviewing projects funded in fiscal years 2005 through 2010, we considered a final report to be submitted as required if the Corrosion Office had a copy of the report in its records system. We did not consider the timeliness of the submitted reports. Additionally, for follow-on reports, we could assess only the projects funded in fiscal years 2005 through 2007 (41 of the 80 projects) because the DOD strategic plan’s milestone requires that follow-on reports be submitted for completed projects within two years after the projects have been completed and transitioned to use within the military departments. For completed projects, we documented the initial return-on-investment estimates shown in the project plans and the resulting change, if any, shown in the follow-on reports. We determined that the project reporting data were sufficiently reliable for the purposes of determining the extent to which the military departments met the Corrosion Office’s reporting requirements, but we did not determine the timeliness of the reports or assess elements of the actual reports. After identifying the projects that required further review because the project managers had not completed and submitted the required reports, we interviewed and obtained documentation from the Corrosion Office, the military departments’ Corrosion Control and Prevention Executives (hereafter referred to as Corrosion Executives), and the respective project managers to determine why the required reports were not submitted at the prescribed deadlines. Also, we determined what actions, if any, they planned to take to complete the reports.
Specifically, to complete this task, we met with corrosion-control officials from the following organizations: the Corrosion Office, Army Corrosion Executive, Navy Corrosion Executive, Air Force Corrosion Executive, Army Engineering Research Development Center, Construction Engineering Research Laboratory, Naval Facilities Engineering Command (Engineering Support Center), Office of the Air Force Logistics, Installations and Mission Support, and the Air Force Civil Engineer Support Agency. We also reviewed prior GAO work on DOD’s corrosion prevention and control program. To address our second objective to assess the extent to which the return- on-investment data submitted by the military departments is accurately reflected in records maintained by the Corrosion Office, for the 105 infrastructure-related corrosion demonstration projects funded from fiscal years 2005 through 2012, we reviewed the return-on-investment estimates found in the project plans and the return-on-investment estimates maintained in the Corrosion Office’s records. We then compared the data from these two sources to determine if any differences existed in the estimated return on investment. Upon completion of this comparison, we provided a list of projects with discrepancies in the estimated return on investment to the Corrosion Office and asked the officials to explain why the inconsistencies existed and requested that they provide additional information to reconcile the differences in the two estimates. Further, we compared the return-on-investment estimates maintained in the Corrosion Office’s records for projects funded in fiscal years 2005 through 2007 with the return-on-investment estimates contained in the military departments’ follow-on reports to check for any differences between the two sets of records. 
To address our third objective to assess the extent to which DOD’s corrosion-control officials have fully informed all relevant officials within each department about efforts to prevent and mitigate corrosion of facilities and other infrastructure, we reviewed relevant legislation and guidance, DOD policies and publications, and the DOD and the military departments’ strategic plans to obtain information on the management of DOD’s corrosion prevention and control program. In addition, we interviewed officials at all levels within DOD—Corrosion Office officials, Corrosion Executives, the military services’ management commands for installations, and facility and infrastructure managers within the services—to discuss their corrosion prevention and control efforts, including challenges and successes in implementing new corrosion technologies. We interviewed officials across each of the military services and reviewed relevant service documentation to gather information about corrosion prevention and control programs within the services. We spoke with each military department’s designated Corrosion Executive as well as officials in the Corrosion Executives’ offices to discuss corrosion control and prevention activities for facilities and other infrastructure across the departments. We also interviewed officials from the installation management commands of the Army, Navy, and Marine Corps, including the Office of the Army Chief of Staff for Installations Management, the U.S. Army Installations Management Command, the Commander of Navy Installations Command, and the U.S. Marine Corps Installations Command. We also interviewed officials within the civil engineering or facilities branches at two Air Force major commands—Air Mobility Command and Air Combat Command. We reviewed relevant service documentation, including each military department’s strategic plan for corrosion control and prevention, to identify efforts related to facilities and other infrastructure.
During our review, we also met with the manager of a Defense Logistics Agency program for cathodic protection and corrosion control of submerged or underground steel structures. Other defense agencies were not evaluated as part of our work. We used data obtained from the Office of the Deputy Under Secretary of Defense for Installations and Environment to identify all DOD facilities and other infrastructure by military service and geographic location. Using a nonprobability sample, we limited the installations for selection to those active-duty installations with at least 25 buildings owned by the federal government and ensured that a range of locations were selected from each of the four military services and across geographic regions of the United States. From the 390 installations that met these criteria, we selected installations with different environmental conditions, some joint military installations, and installations that did or did not participate in the Corrosion Office’s corrosion-technology demonstration projects. We determined that the data used to select the installations for our review were sufficiently reliable for the purposes of selecting our nonprobability sample. From April to October 2012, we conducted semistructured interviews with management officials of facilities and other infrastructure from 32 DOD installations to gather information and views about their corrosion control and management efforts. Figure 3 identifies the 32 selected installations where GAO interviewed officials for this review. The purpose of the semistructured interviews was to understand how the installation officials (1) use policies, plans, and procedures to identify and address corrosion; (2) address corrosion prevention and mitigation; (3) determine their maintenance and sustainment priorities; and (4) receive and disseminate information on relevant corrosion topics.
We visited and interviewed officials at 6 of the 32 military installations, and conducted audio conference calls with officials at 26 of the 32 military installations. Although our findings from the interviews of officials at the 32 installations are not generalizable to the entire universe of installations, we believe our findings provide a range of issues related to corrosion that are experienced at installations. We conducted this performance audit from November 2011 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As defined by DOD for its corrosion technology demonstration projects, the estimated return on investment is the ratio of the present value of benefits to the present value of the project’s total cost. In our December 2010 report, we recommended that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics), in coordination with the Corrosion Executives, to develop and implement a plan to ensure that return-on-investment reassessments are completed as scheduled. Specifically, we recommended that this plan include information on the timeframe and source of funding required to complete the return-on-investment reassessments. DOD concurred with our recommendation and stated that plans were underway to address this requirement. However, as of July 2012, DOD had not developed or implemented a formal plan that addresses this requirement. During our review, we found that the Corrosion Office required project managers of 41 projects to submit follow-on reports, and reports were completed and submitted for 25 of the 41 projects funded for fiscal years 2005 through 2007.
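DOD's ratio definition noted above (present value of benefits divided by present value of total cost) can be illustrated with a brief sketch. The discount rate and all dollar figures below are hypothetical, chosen only to show the arithmetic; they are not drawn from any DOD project, and the sketch does not represent DOD's actual calculation method.

```python
# Illustrative sketch of the return-on-investment definition used for DOD's
# corrosion demonstration projects: ROI = PV(benefits) / PV(total cost).
# The discount rate and dollar figures are hypothetical examples only.

def present_value(cash_flows, rate):
    """Discount annual amounts (year 1, year 2, ...) to present value."""
    return sum(amount / (1 + rate) ** year
               for year, amount in enumerate(cash_flows, start=1))

def return_on_investment(benefits, costs, rate=0.03):
    """Ratio of the present value of benefits to the present value of costs."""
    return present_value(benefits, rate) / present_value(costs, rate)

# Hypothetical project: $100,000 spent in year 1; $40,000 in avoided
# corrosion maintenance in each of years 2 through 6.
roi = return_on_investment(benefits=[0] + [40_000] * 5, costs=[100_000])
print(f"{roi:.2f}")  # prints 1.83; a ratio above 1.0 means benefits exceed costs
```

A reassessed estimate in a follow-on report would simply recompute this ratio with observed rather than projected benefit and cost streams, which is why the reassessed figure can differ from the estimate in the original project plan.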
Of the 25 follow-on reports, 23 contained return-on-investment estimates. We found that although follow-on reports were completed and submitted for the remaining 2 projects, return-on-investment estimates were not calculated for those projects because the respective Army and Navy reports noted that such a calculation was not required. For the 23 projects that have completed and submitted the required follow-on reports, table 5 provides the military departments’ return-on-investment estimates included in the original project plans and the resulting change, if any, included in the follow-on reports. We did not review the military departments’ calculations or their methods for estimating the costs and benefits of the estimated returns on investment. In addition to the contact name above, the following staff members made key contributions to this report: Mark J. Wielgoszynski, Assistant Director; Rebekah Boone; Randolfo DeLeon; Jacqueline McColl; Charles Perdue; Carol Petersen; Richard Powelson; Terry Richardson; Amie Steele; and Michael Willems. Defense Management: Additional Information Needed to Improve Military Departments’ Corrosion Prevention Strategies. GAO-13-379. Washington, D.C.: May 16, 2013. Defense Management: The Department of Defense’s Annual Corrosion Budget Report Does Not Include Some Required Information. GAO-12-823R. Washington, D.C.: September 10, 2012. Defense Management: The Department of Defense’s Fiscal Year 2012 Corrosion Prevention and Control Budget Request. GAO-11-490R. Washington, D.C.: April 13, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010.
Defense Management: DOD Has a Rigorous Process to Select Corrosion Prevention Projects, but Would Benefit from Clearer Guidance and Validation of Returns on Investment. GAO-11-84. Washington, D.C.: December 8, 2010. Defense Management: Observations on Department of Defense and Military Service Fiscal Year 2011 Requirements for Corrosion Prevention and Control. GAO-10-608R. Washington, D.C.: April 15, 2010. Defense Management: Observations on the Department of Defense’s Fiscal Year 2011 Budget Request for Corrosion Prevention and Control. GAO-10-607R. Washington, D.C.: April 15, 2010. Defense Management: Observations on DOD’s Fiscal Year 2010 Budget Request for Corrosion Prevention and Control. GAO-09-732R. Washington, D.C.: June 1, 2009. Defense Management: Observations on DOD’s Analysis of Options for Improving Corrosion Prevention and Control through Earlier Planning in the Requirements and Acquisition Processes. GAO-09-694R. Washington, D.C.: May 29, 2009. Defense Management: Observations on DOD’s FY 2009 Budget Request for Corrosion Prevention and Control. GAO-08-663R. Washington, D.C.: April 15, 2008. Defense Management: High-Level Leadership Commitment and Actions Are Needed to Address Corrosion Issues. GAO-07-618. Washington, D.C.: April 30, 2007. Defense Management: Additional Measures to Reduce Corrosion of Prepositioned Military Assets Could Achieve Cost Savings. GAO-06-709. Washington, D.C.: June 14, 2006. Defense Management: Opportunities Exist to Improve Implementation of DOD’s Long-Term Corrosion Strategy. GAO-04-640. Washington, D.C.: June 23, 2004. Defense Management: Opportunities to Reduce Corrosion Costs and Increase Readiness. GAO-03-753. Washington, D.C.: July 7, 2003. Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003. 
According to DOD, corrosion can significantly affect the cost of facility maintenance and the expected service life of DOD facilities. While corrosion is not always highly visible, it can lead to structural failure, loss of capital investment, and environmental damage. In response to a congressional request, GAO reviewed DOD’s corrosion prevention and control program for facilities and infrastructure. In this report, GAO assessed the extent that DOD (1) met reporting requirements, (2) maintained accurate return-on-investment data in its records, and (3) fully informed relevant officials of its corrosion-control efforts. GAO reviewed DOD policies and plans, met with corrosion-control officials, and visited and interviewed officials at 32 installations. The Department of Defense (DOD) has invested more than $68 million in 80 projects in fiscal years 2005 through 2010 to demonstrate new technology addressing infrastructure-related corrosion, but project managers have not submitted all required reports on the results of these efforts to the Corrosion Policy and Oversight Office (Corrosion Office). The DOD Corrosion Prevention and Mitigation Strategic Plan requires project managers to submit a final report when a project is complete, and to submit a follow-on report within two years after the military department implements the technology. As of November 2012, GAO found that project managers had not submitted final reports for 50 of the 80 projects (63 percent) funded in fiscal years 2005 through 2010. Also, project managers had not submitted follow-on reports for 15 of the 41 projects (37 percent) funded in fiscal years 2005 through 2007. GAO found that the Corrosion Office’s tracking system lacks key information to help ensure that project managers meet reporting requirements.
Furthermore, the Corrosion Office is not fully exercising its authority to identify and implement options or incentives to address funding and other reasons given for not meeting reporting milestones. Also, GAO found inconsistency among the military departments’ Corrosion Control and Prevention Executives (Corrosion Executives) in holding project managers accountable for submitting the required reports. Without effective actions to ensure timely submission of final and follow-on reports, decision makers may be unaware of potentially useful technologies to address corrosion. The Corrosion Office maintains records on its infrastructure-related corrosion projects, including initial and reassessed return-on-investment estimates, for internal and external reporting, such as in DOD’s annual budget report to Congress. GAO found that the Corrosion Office’s records showed updates to the initial estimates for the proposed projects, but the office has not consistently updated its records to show the reassessed estimates included in the follow-on reports. Specifically, GAO found that the Corrosion Office did not update data in its records for 5 of 25 projects (20 percent) with completed follow-on reports. Federal internal control standards require agencies to use internal controls to provide assurance that they have reliable financial and other reports for internal and external use. Without accurate and timely data, Congress and DOD managers may not have reliable information on the estimated return on investment as they oversee corrosion projects. DOD’s Corrosion Executives use mechanisms, such as briefings and site visits, to collect and disseminate information on corrosion-control activities in their departments; however, GAO found that slightly more than half of public works officials interviewed at 32 installations were unaware of DOD’s corrosion-related offices and resources.
Under federal statute, Corrosion Executives are tasked with coordinating corrosion activities in their departments. GAO found that many relevant service officials interviewed did not receive key corrosion-control information because their Corrosion Executives do not have targeted communication strategies and accompanying action plans. Without a strategy and action plan, managers of facilities and infrastructure may not have access to all available information on efficient methods for corrosion prevention and control. GAO recommends five actions to improve DOD’s project reporting and tracking, the accuracy of its return-on-investment data, and its communication with stakeholders on corrosion-control activities for facilities and other infrastructure. DOD partially concurred with three recommendations and did not agree with two. DOD plans to implement a web-based tracking tool to improve data timeliness and standardization, among other actions. GAO continues to believe that its recommendations to improve project reporting are warranted: the Corrosion Office should use its existing authorities to identify and implement other incentives for project managers to meet reporting milestones, and DOD should revise its guidance so that Corrosion Executives would assist with the oversight of project reporting.
The federal government spent more than $90 billion on domestic food and nutrition assistance programs in fiscal year 2010. This assistance is provided through a decentralized system of primarily 18 different federal programs that help ensure that millions of low-income individuals have consistent, dependable access to enough food for an active, healthy life. The Departments of Agriculture (USDA), Health and Human Services (HHS), and Homeland Security as well as multiple state and local government and nonprofit organizations work together to administer a complex network of programs and providers, ranging from agricultural commodities to prepared meals to vouchers or other targeted benefits used in commercial food retail locations. However, some of these programs provide comparable benefits to similar or overlapping populations. For example, individuals eligible for groceries through USDA’s Commodity Supplemental Food Program are also generally eligible for groceries through USDA’s Emergency Food Assistance Program and for targeted benefits that are redeemed in authorized stores through the largest program, the Supplemental Nutrition Assistance Program (formerly known as the Food Stamp Program), which is also administered by USDA. The availability of multiple programs with similar benefits helps ensure that those in need have access to nutritious food, but can also increase administrative costs, which account for approximately a tenth to more than a quarter of total costs among the largest of these programs. Administrative inefficiencies can also result from program rules related to determining eligibility, which often require the collection of similar information by multiple entities. 
For example, six USDA programs—the National School Lunch Program, the School Breakfast Program, the Fresh Fruit and Vegetable Program, the Summer Food Service Program, the Special Milk Program, and the Child and Adult Care Food Program—all provide food to eligible children in settings outside the home, such as at school, day care, or summer day camps. Most of the 18 programs have specific and often complex legal requirements and administrative procedures that federal, state, and local organizations follow to help manage each program's resources. According to previous GAO work and state and local officials, rules that govern these and other nutrition assistance programs often require applicants who seek assistance from multiple programs to submit separate applications for each program and provide similar information verifying, for example, household income. This can create unnecessary work for both providers and applicants and may result in the use of more administrative resources than needed. One possible method for reducing program overlap and inefficiency would be for USDA to broaden its efforts to simplify, streamline, or better align eligibility procedures and criteria across programs, to the extent permitted by law. USDA recently stated that on an ongoing basis, the agency will continue efforts to promote policy and operational changes that streamline the application and certification process; enforce rules that prevent simultaneous participation in programs with similar benefits or target audiences; and review and monitor program operations to minimize waste and error. While options such as consolidating or eliminating overlapping programs also have the potential to reduce administrative costs, they may not reduce spending on benefits unless fewer individuals are served as a result. In addition to challenges resulting from overlap, not enough is known about the effectiveness of many of the domestic food assistance programs. 
USDA tracks performance measures related to its food assistance programs such as the number of people served by a program. However, these performance measures are insufficient for determining a program’s effectiveness. Additional research that GAO consulted suggests that participation in 7 USDA programs—including the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), the National School Lunch Program, the School Breakfast Program, and the Supplemental Nutrition Assistance Program—is associated with positive health and nutrition outcomes consistent with programs’ goals, such as raising the level of nutrition among low-income households, safeguarding the health and well-being of the nation’s children, and strengthening the agricultural economy. Yet little is known about the effectiveness of the remaining 11 programs because they have not been well studied. GAO has suggested that USDA consider which of the lesser-studied programs need further research, and USDA agreed to consider the value of examining potential inefficiencies and overlap among smaller programs. Federally funded employment and training programs play an important role in helping job seekers obtain employment. In fiscal year 2009, 47 programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and HHS. However, 44 of the 47 federal employment and training programs GAO identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. Some of these overlapping programs serve multiple population groups. Others target specific populations, most commonly Native Americans, veterans, and youth. 
In some cases, these programs may have meaningful differences in their eligibility criteria or objectives, or they may provide similar types of services in different ways. GAO examined potential duplication among three selected large programs that provide employment and training services—the Temporary Assistance for Needy Families, Employment Service, and Workforce Investment Act Adult programs. These programs maintain parallel administrative structures to provide some of the same services, such as job search assistance to low-income individuals (see fig. 1). At the state level, the state human services or welfare agency typically administers Temporary Assistance for Needy Families, while the state workforce agency administers Employment Service and Workforce Investment Act Adult programs through one-stop centers. In one-stop centers, Employment Service staff provide job search and other services to Employment Service customers, while Workforce Investment Act staff provide job search and other services to Workforce Investment Act Adult customers. Agency officials acknowledged that greater efficiencies could be achieved in delivering services through these programs, but said various factors could warrant having multiple entities provide the same services, including the number of clients that any one-stop center can serve and one-stop centers’ proximity to clients, particularly in rural areas. Colocating services and consolidating administrative structures may increase efficiencies and reduce costs, but implementation can be challenging. Some states have colocated Temporary Assistance for Needy Families employment and training services in one-stop centers where Employment Service and Workforce Investment Act Adult services are provided. 
Three states—Florida, Texas, and Utah—have gone a step further by consolidating the agencies that administer these programs, and state officials said this has reduced costs and improved services, but they could not provide a dollar figure for cost savings. States and localities may face challenges to colocating services, such as limited office space. In addition, consolidating administrative structures may be time consuming and any cost savings may not be immediately realized. An obstacle to further progress in achieving greater administrative efficiencies across federal employment and training programs is that limited information is available about the strategies and results of such initiatives. In addition, little is known about the incentives that states and localities have to undertake such initiatives and whether additional incentives are needed. To facilitate further progress by states and localities in increasing administrative efficiencies in employment and training programs, GAO recommended in 2011 that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts. This should include information about state initiatives to consolidate program administrative structures and state and local efforts to colocate new partners, such as Temporary Assistance for Needy Families, at one-stop centers. Information on these topics could address challenges faced, strategies employed, results achieved, and remaining issues. As part of this effort, Labor and HHS should examine the incentives for states and localities to undertake such initiatives, and, as warranted, identify options for increasing such incentives. Labor and HHS agreed they should develop and disseminate this information. HHS noted that it lacks legal authority to mandate increased Temporary Assistance for Needy Families – Workforce Investment Act coordination or create incentives for such efforts. 
In terms of achieving efficiencies through program consolidation, the Administration's budget request for fiscal year 2012 proposes consolidating nine programs into three as part of its proposed changes to the Workforce Investment Act. The Administration also proposed consolidating Education's Career and Technical Education – Basic Grants to States and Tech Prep Education programs, at the same time reducing program funding. In addition, to improve coordination among similar programs, the budget proposal would transfer the Senior Community Service Employment Program from Labor to HHS. Consolidating or colocating employment and training programs is further complicated by the lack of comprehensive information on the results of these programs. For example, nearly all 47 programs GAO identified track multiple outcome measures, but based on our survey of agency officials, only 5 of the 47 programs have completed an impact study since 2004 assessing whether improved employment outcomes resulted from the program and not some other cause. These five impact studies found that the effects of participation were not consistent across programs; where impacts were positive, they tended to be small, inconclusive, or limited to the short term. Officials from the remaining 42 programs cited other types of studies or no studies at all. Among the three programs GAO reviewed for potential duplication—the Temporary Assistance for Needy Families, Employment Service, and Workforce Investment Act Adult programs—the extent to which individuals receive the same services from these programs is unknown due to limited data. 
Several federal agencies provide a range of programs that offer not only housing assistance but also supportive services to those experiencing homelessness and to those at risk of becoming homeless, yet coordination of these programs varies by program and agency. We previously reported that in 2009, federal agencies spent about $2.9 billion on over 20 programs targeted to address the various needs of persons experiencing homelessness. A number of federal programs are specifically targeted to address issues related to homelessness while other mainstream programs that are generally designed to help low-income individuals by providing housing assistance and services such as health care, job training, and food assistance may also serve those experiencing homelessness or at risk of becoming homeless. We found the potential for overlap because in some cases, different agencies may be offering similar types of services to similar populations. For example, we reported in July 2010 that at least seven federal agencies administered programs that provide some type of shelter or housing assistance to persons experiencing homelessness. Similarly, five agencies administered programs that deliver food and nutrition services, and four agencies administered programs that provide health services including mental health services and substance abuse treatment. In addition to similar services, this range of programs has resulted in a fragmented service system. Overlap and fragmentation in some of these programs may be due in part to their legislative creation as separate programs under the jurisdiction of several agencies. Moreover, additional programs have since developed incrementally over time to address the specific needs of certain segments of the population. 
Nevertheless, this fragmentation can create difficulties for people in accessing services as well as administrative burdens for providers who must navigate various application requirements, selection criteria, and reporting requirements. For example, as we reported in July 2010, providers in rural areas told us they have limited resources and therefore must apply to and assemble multiple funding sources from both state and federal programs. As a result, the time consumed in grant writing and meeting the various compliance and review requirements set by statute represented an administrative and workload burden, according to these providers. Coordination of targeted homelessness programs with other mainstream programs that support individuals or families experiencing homelessness includes agencies working together on program guidance and prevention strategies. In July 2010, GAO reported that agencies had taken some steps toward improved coordination. For instance, the U.S. Interagency Council on Homelessness (USICH) has provided a renewed focus on such coordination and has developed a strategic plan for federal agencies to end homelessness. However, the lack of federal coordination was still viewed by some local service providers as an important barrier to the effective delivery of services to those experiencing homelessness. Without more formal coordination of federal programs to specifically include the linking of supportive services and housing, federal efforts to address homelessness may remain fragmented and not be as effective as they could be. In June 2010, GAO recommended that the Departments of Education, HHS, and Housing and Urban Development develop a common vocabulary to facilitate federal efforts to determine the extent and nature of homelessness and develop effective programs to address homelessness. We also recommended in July 2010 that HHS and Housing and Urban Development consider more formally linking their housing and supportive services programs. 
Fragmentation of programs across federal agencies has also resulted in differing methods for collecting data on those experiencing homelessness. In part because of the lack of comprehensive data collection requirements, the data have limited usefulness. Complete and accurate data are essential for understanding and meeting the needs of those who are experiencing homelessness and preventing homelessness from occurring. USICH has made the development of a common data standard for federal homelessness programs a priority. USICH recognizes that collection, analysis, and reporting of quality, timely data on homelessness are essential for targeting interventions, tracking results, strategic planning, and resource allocation. Currently, each of the federal programs noted above generally has its own distinct data requirements. USICH acknowledges that a common data standard and uniform performance measures across all federal programs that are targeted at homelessness would facilitate greater understanding and simplify local data management. USICH representatives noted that agencies are taking steps to improve and coordinate data collection and reporting, specifically citing the December 2010 announcement by the Department of Veterans Affairs of its plan to use the Homeless Management Information System over the next 12 months. Federal agencies fund transportation services to millions of Americans who are unable to provide their own transportation—frequently because they are elderly, have disabilities, or have low incomes—through programs that provide similar services to similar client groups. The variety of federal programs providing funding for transportation services to the transportation disadvantaged has resulted in fragmented services that can be difficult for clients to navigate and narrowly focused programs that may result in service gaps. 
GAO previously identified 80 existing federal programs across eight departments that provided funding for transportation services for the transportation disadvantaged in fiscal year 2010 (see app. III). These programs may provide funding to service providers for bus tokens, transit passes, taxi vouchers, or mileage reimbursement, for example, so that transportation-disadvantaged persons can make trips to government services (such as job-training programs), the grocery store, or medical appointments, or travel for other purposes. For example, the Departments of Agriculture and Labor both provide funding for programs that could provide bus fare for low-income youths seeking employment or job training. Further, these services can be costly because of inconsistent, duplicative, and often restrictive program rules and regulations. For example, GAO has previously reported that a transportation provider in one state explained that complicated fee structures or paperwork requirements for services funded under different programs may result in overlapping service, such as two vehicles running the same route at the same time. The Interagency Transportation Coordinating Council on Access and Mobility, a federal entity charged with promoting interagency coordination, has taken steps to encourage and facilitate coordination across agencies, but action by federal departments will be necessary to better coordinate and eliminate duplication and fragmentation. The Coordinating Council’s “United We Ride” initiative and the Federal Transit Administration (FTA) have also encouraged state and local coordination. However, there has been limited interagency coordination and direction at the federal level. Additionally, while certain FTA transit programs require that projects selected for grant funding be derived from locally developed, coordinated public transit-human services transportation plans, participation by non-FTA grantees—which is optional—has varied, limiting these efforts. 
As GAO and others have reported, improved coordination could not only help to reduce duplication and fragmentation at the federal level, but could also lead to economic benefits, such as funding flexibility, reduced costs or greater efficiency, and increased productivity, as well as improved customer service and enhanced mobility. A 2009 report by the National Resource Center for Human Service Transportation Coordination found that three federal departments providing transportation services—the Departments of Health and Human Services, Labor, and Education—had yet to coordinate their planning with the Department of Transportation (DOT). To reduce fragmentation and to realize these benefits, federal agencies on the Coordinating Council should identify and assess their transportation programs and related expenditures and work with other departments to identify potential opportunities for additional coordination. For example, neither the Coordinating Council nor most federal departments have an inventory of existing programs providing transportation services or their expenditures and they lack the information to identify opportunities to improve the efficiency and service of their programs through coordination. The Coordinating Council should develop the means for collecting and sharing this information. In 2003, GAO discussed three potential options to overcome obstacles to the coordination of transportation for the transportation disadvantaged, two of which would require substantial statutory or regulatory changes and include potential costs: making federal program standards more uniform or creating some type of requirement or financial incentive for coordination. We recommended expanding the Coordinating Council and better disseminating guidance. Subsequently, the Coordinating Council was expanded and several coordination initiatives were launched, and progress has been made in coordination efforts, particularly at the state and local levels. 
Furthermore, we reported in March 2011 that, to assure that coordination benefits are realized, Congress may want to consider requiring key programs to participate in coordinated planning. The Administration, DOT, transportation interest groups, and legislators have issued proposals to revise DOT programs in the next surface transportation reauthorization. For example, the President’s Budget Request for Fiscal Year 2012 proposes combining three FTA programs that provide services to transportation-disadvantaged populations—the Job Access and Reverse Commute program, the New Freedom program, and the Elderly Individuals and Individuals with Disabilities Program. In conclusion, as I have outlined in my testimony, opportunities exist to streamline and more efficiently carry out programs in the areas of domestic food assistance, employment and training, homelessness, and transportation for disadvantaged populations. Specifically, addressing duplication, overlap, and fragmentation in these areas could help to minimize the administrative burdens faced by those entities—including states and localities as well as nonprofits—that are delivering these programs’ services. Such administrative burdens range from eligibility requirements and the application process to costs associated with carrying out the program and reporting requirements. Improving consistency among these various requirements and processes as well as considering how multiple agencies could better coordinate their delivery of programs could result in benefits both for those providing and those receiving the services. We have previously reported on the challenges federal grantees face in navigating differences among programs across agencies. Additionally, reducing duplication might also help improve agencies’ ability to track and monitor their programs which, as described earlier, is needed to better assess coordination as well as performance. 
As we complete our governmentwide examination of this topic, we will continue to look closely at these administrative burden and assessment issues. As the nation rises to meet the current fiscal challenges, we will continue to assist Congress and federal agencies in identifying actions needed to reduce duplication, overlap, and fragmentation; achieve cost savings; and enhance revenues. As part of current planning for our future annual reports, we are continuing to look at additional federal programs and activities to identify further instances of duplication, overlap, and fragmentation as well as other opportunities to reduce the cost of government operations and increase revenues to the government. We are using an approach designed to ensure governmentwide coverage by the time we issue our third report in fiscal year 2013. We plan to expand our work to more comprehensively examine areas where a mix of federal approaches is used, such as tax expenditures, direct spending, and federal loan programs. Likewise, we will continue to monitor developments in the areas we have already identified. Issues of duplication, overlap, and fragmentation will also be addressed in our routine audit work during the year as appropriate and summarized in our annual reports. Careful, thoughtful actions will be needed to address many of the issues discussed in our March report, particularly those involving potential duplication, overlap, and fragmentation among federal programs and activities. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. Continued oversight by the Office of Management and Budget and Congress will be critical to ensuring that unnecessary duplication, overlap, and fragmentation are addressed. 
Thank you, Mr. Chairman, Ranking Member Kucinich, and Members of the Subcommittee. This concludes my prepared statement. I would be pleased to answer any questions you may have. For further information on this testimony or our March report, please contact Janet St. Laurent, Managing Director, Defense Capabilities and Management, who may be reached at (202) 512-4300, or [email protected]; and Katherine Siggerud, Managing Director, Physical Infrastructure, who may be reached at (202) 512-2834, or [email protected]. Specific questions about domestic food assistance as well as employment and training issues may be directed to Barbara Bovbjerg, Managing Director, Education, Workforce, and Income Security, who may be reached at (202) 512-7215, or [email protected]. Specific questions about homelessness issues may be directed to Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-5837, or [email protected]. Specific questions about transportation-disadvantaged issues may be directed to Katherine Siggerud. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. DOD and the Department of Veterans Affairs (VA); two bureaus within the Department of State (State); the Department of the Treasury’s (Treasury) Internal Revenue Service (IRS); the Department of Health and Human Services’ Centers for Medicare & Medicaid Services (CMS); the Department of Homeland Security’s (DHS) Transportation Security Administration (TSA); and DHS’s Customs and Border Protection (CBP). The federal government spent more than $62.5 billion on the following 18 domestic food and nutrition assistance programs in fiscal year 2008. Table 2 lists selected federal programs that provide shelter or housing assistance. 
Forty-four of the 47 federal employment and training programs GAO identified (see table 3), including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. However, our review of 3 of the largest programs showed that the extent to which individuals receive the same services from these programs is unknown due to program data limitations. This list contains programs that GAO identified as providing transportation services to transportation-disadvantaged persons, with limited information available on funding. Transportation is not the primary purpose of many of these programs, but rather access to services, such as medical appointments. In many cases, funding data were not available as funds are embedded in broader program spending. However, GAO obtained fiscal year 2009 funding information for 23 programs (see table 4), which spent an estimated total of $1.7 billion on transportation services that year. | This testimony discusses our first annual report to Congress responding to the statutory requirement that GAO identify federal programs, agencies, offices, and initiatives--either within departments or governmentwide--that have duplicative goals or activities. This work can help inform government policymakers as they address the rapidly building fiscal pressures facing our national government. Our simulations of the federal government's fiscal outlook show continually increasing levels of debt that are unsustainable over time, absent changes in the federal government's current fiscal policies. Since the end of the recent recession, the gross domestic product has grown slowly, and unemployment has remained at a high level. 
While the economy is still recovering and in need of careful attention, widespread agreement exists on the need to look not only at the near term but also at steps that begin to change the long-term fiscal path as soon as possible without slowing the recovery. With the passage of time, the window to address the fiscal challenge narrows and the magnitude of the required changes grows. This testimony today is based on our March 2011 report, which provided an overview of federal programs or functional areas where unnecessary duplication, overlap, or fragmentation exists and where there are other opportunities for potential cost savings or enhanced revenues. In that report, we identified 81 areas for consideration--34 areas of potential duplication, overlap, or fragmentation and 47 additional areas describing other opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. The 81 areas we identified span a range of federal government missions such as agriculture, defense, economic development, energy, general government, health, homeland security, international affairs, and social services. Within and across these missions, the report touches on hundreds of federal programs, affecting virtually all major federal departments and agencies. My testimony today highlights some key examples of overlap and duplication from our March report on the federal government's management of programs providing services in the areas of (1) domestic food assistance, (2) employment and training, (3) homelessness, and (4) transportation for disadvantaged populations. For each area, this statement will discuss some of the challenges related to overlap and duplication, as well as examples of how better information about each program could help policymakers in determining how to address this overlap and duplication. 
The federal government spent more than $90 billion on domestic food and nutrition assistance programs in fiscal year 2010. This assistance is provided through a decentralized system of primarily 18 different federal programs that help ensure that millions of low-income individuals have consistent, dependable access to enough food for an active, healthy life. The Departments of Agriculture (USDA), Health and Human Services (HHS), and Homeland Security as well as multiple state and local government and nonprofit organizations work together to administer a complex network of programs and providers, ranging from agricultural commodities to prepared meals to vouchers or other targeted benefits used in commercial food retail locations. However, some of these programs provide comparable benefits to similar or overlapping populations. For example, individuals eligible for groceries through USDA's Commodity Supplemental Food Program are also generally eligible for groceries through USDA's Emergency Food Assistance Program and for targeted benefits that are redeemed in authorized stores through the largest program, USDA's Supplemental Nutrition Assistance Program. Federally funded employment and training programs play an important role in helping job seekers obtain employment. In fiscal year 2009, 47 programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and HHS. However, 44 of the 47 federal employment and training programs GAO identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. In some cases, these programs may have meaningful differences in their eligibility criteria or objectives, or they may provide similar types of services in different ways. 
Several federal agencies provide a range of programs that offer not only housing assistance but also supportive services to those experiencing homelessness and to those at risk of becoming homeless, yet coordination of these programs varies by program and agency. We previously reported that in 2009, federal agencies spent about $2.9 billion on over 20 programs targeted to address the various needs of persons experiencing homelessness. A number of federal programs are specifically targeted to address issues related to homelessness, while other mainstream programs that are generally designed to help low-income individuals by providing housing assistance and services such as health care, job training, and food assistance may also serve those experiencing homelessness or at risk of becoming homeless. We found the potential for overlap because in some cases, different agencies may be offering similar types of services to similar populations. Federal agencies fund transportation services to millions of Americans who are unable to provide their own transportation--frequently because they are elderly, have disabilities, or have low incomes--through programs that provide similar services to similar client groups. The variety of federal programs providing funding for transportation services to the transportation disadvantaged has resulted in fragmented services that can be difficult for clients to navigate and narrowly focused programs that may result in service gaps. GAO previously identified 80 existing federal programs across eight departments that provided funding for transportation services for the transportation disadvantaged in fiscal year 2010. These programs may provide funding to service providers for bus tokens, transit passes, taxi vouchers, or mileage reimbursement, for example, to transportation-disadvantaged persons for trips to access government services (such as job-training programs), the grocery store, medical appointments, or for other purposes.
During the Cold War, the Soviet Union established several hundred research institutes that were dedicated to the research, development, and production of weapons of mass destruction. Although precise figures are not available, science center officials estimate that at the time of the Soviet Union’s collapse, from 30,000 to 75,000 highly trained senior weapons scientists worked at these institutes. These figures do not include the thousands of less experienced junior scientists and technicians who also worked in these institutes. After the collapse of the Soviet Union in 1991, many of these scientists suffered significant cuts in pay and lost their government-supported work. By early 1992, the United States and other countries were concerned that senior weapons scientists struggling to support their families could be tempted to sell their expertise to terrorists or countries of concern such as Iraq, Iran, and North Korea. To address this threat, the United States, the European Union, Japan, and Russia signed an agreement in 1992 establishing the International Science and Technology Center in Moscow. A year later, the United States, Sweden, Canada, and Ukraine signed an agreement establishing the Science and Technology Center in Ukraine, located in the city of Kiev. The science centers in Russia and Ukraine began funding research projects in 1994 and 1995, respectively. In addition, the science centers have recently begun supporting the weapons scientists’ long-term transition to peaceful research by helping them identify and develop the commercial potential of their research, providing some business training, and helping fund patent applications. While the science centers operate independently of each other, they are very similar in structure and procedures (see fig. 1). Each science center has a governing board that meets two or three times a year to make administrative decisions, which includes formally approving project funding. 
Each science center also has an executive director and secretariat that carries out these decisions by conducting the center’s day-to-day operations and administering the funded projects. The science centers’ senior management consists mostly of representatives from the United States and the other funding parties (the European Union, Japan, and Canada). However, almost all of the secretariat’s staff who are responsible for project implementation and oversight are Russian and Ukrainian nationals hired by the funding parties and the host government of Russia or Ukraine. As of December 31, 2000, the United States had funded 590 projects conducted at 431 research institutes, mostly within Russia and Ukraine, but also in Armenia, Georgia, Kazakhstan, Uzbekistan, and the Kyrgyz Republic. The projects range in length from 6 months to more than 3 years and involve basic and applied research in such areas as developing anticancer drugs, devising techniques to enhance environmental cleanup, and ensuring nuclear reactor safety. The projects employ teams of senior weapons scientists, junior scientists, and technicians according to the detailed workplans included in the project agreements. The scientists receive cash payments for their work, sent directly from the science centers to their personal bank accounts. According to science center officials, the average grant payment for senior weapons scientists is $20 to $22 per day, tax free, compared to an average daily wage for all workers of about $4 in Russia or about $2 in Ukraine. While most of a project’s funds are spent for the scientists’ and technicians’ salaries, the United States also pays for other costs associated with the project, as specified in the project agreement. These costs usually include the purchase of computer equipment and some laboratory supplies and equipment, such as chemicals and glassware.
In addition, the United States pays for senior scientists’ travel to international conferences so that they can present their work and meet with their western counterparts. Also, the institutes receive payment for overhead costs, such as electricity and heat (not to exceed 10 percent of the project’s total cost). As table 1 shows, the United States has provided more funds for projects at both centers than any other source. Since 1994, $227 million has been appropriated specifically for the science center program, of which $133.9 million had been used to fund approved projects as of March 31, 2001. In addition, U.S. agencies such as the Departments of Defense, Agriculture, Energy, and Health and Human Services have used $25.4 million in funds from other appropriations to support projects through the science center program. Finally, private sector firms from the United States, the European Union, Japan, and Canada have funded projects of commercial interest to them that they helped develop with senior weapons scientists. As figures 2 and 3 show, the United States has provided about 45 percent of the funding for projects at the science center in Russia and about 72 percent of the funding for projects at the science center in Ukraine since 1994. In addition to the science center program, the Department of Energy (DOE) funds research by weapons scientists through two similar programs. As of December 2000, DOE had obligated about $110 million for the Initiatives for Proliferation Prevention program and about $16 million for the Nuclear Cities Initiative. Like the science center program, Initiatives for Proliferation Prevention pays scientists, particularly nuclear weapons scientists, directly for peaceful research in several countries of the former Soviet Union. However, the program is also designed to commercialize technologies that utilize the scientists’ expertise.
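As a rough cross-check on the dollar figures quoted above, the drawdown rate on the science center appropriation and the combined U.S. project funding can be computed directly. This is a back-of-the-envelope sketch; the variable names are ours, and the only inputs are the amounts stated in the text:

```python
# Funding figures quoted above, in millions of dollars (as of March 31, 2001).
appropriated = 227.0        # appropriated for the science center program since 1994
used_for_projects = 133.9   # used to fund approved projects
other_agency_funds = 25.4   # other U.S. agency appropriations (Defense, USDA, DOE, HHS)

# Share of the dedicated appropriation that had actually funded projects.
drawdown_share = used_for_projects / appropriated

# Total U.S. project funding, combining both sources.
total_us_project_funds = used_for_projects + other_agency_funds

print(f"{drawdown_share:.0%} of the appropriation had funded approved projects")
print(f"${total_us_project_funds:.1f} million in U.S. project funding overall")
```

By this arithmetic, roughly 59 percent of the dedicated appropriation had been drawn down by March 2001, with about $159 million in U.S. project funding overall.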
The objectives of the Nuclear Cities Initiative are to create nonmilitary job opportunities for weapons scientists in Russia’s closed nuclear cities and to help Russia accelerate the downsizing of its nuclear weapons complex. Unlike the science center program, the Nuclear Cities Initiative does not pay scientists directly. One mechanism the State Department uses to meet the program’s nonproliferation objectives is its leading role in selecting which projects will receive funding. The project selection process begins after the science centers send the proposals they receive from scientists to the State Department for review. An interagency process involving the Departments of State, Defense, and Energy reviews about 1,000 project proposals during the course of a year for scientific merit and potential policy and proliferation concerns. The State Department’s selection is limited to those projects approved by the national government where the scientists work and, in some instances, the State Department has not been granted access to scientists at critical biological research institutes. Since 1994, the State Department has selected for funding 590 projects that employed about 9,700 senior scientists. However, the State Department does not know how much of the total population of senior scientists it has reached because estimates of the total number of scientists vary widely. The State Department’s selection process begins when scientists submit project proposals through their research institutes to their government for approval and certification of the senior weapons scientists’ expertise. The State Department selects from those project proposals that have been approved by the national government where the scientists work. 
Although State Department and science center officials stated that most project proposals were approved by the national governments, not all research institutes in the former Soviet Union have submitted project proposals to one of the science centers. For example, four biological weapons institutes under the Russian Ministry of Defense have not submitted project proposals to the science center in Russia. This effectively denies the State Department access to the senior scientists at these institutes, an issue of potential concern, since Russia’s intentions regarding its inherited biological weapons capability remain unclear. Project proposals approved by their government are then sent to one of the science center secretariats to be forwarded to the United States for review. The other funding parties also receive project proposals from the science centers and conduct their own, independent selection process. After project proposals arrive from the science centers, the State Department distributes them to the various participants in the interagency review process, including the Departments of Defense and Energy, and U.S. scientists from private companies and universities. As shown in figure 4, projects undergo a variety of reviews to ensure that the State Department funds projects that meet nonproliferation objectives and program intent. The State Department chairs an interagency group, including the Departments of Defense and Energy, that conducts a policy review of all project proposals. According to State Department officials, this interagency policy review group assesses whether the proposed project contains elements that contradict U.S. policy, such as work being conducted with institutes in Belarus (where there are human rights concerns) or with institutes that are working with Iranian scientists in areas of proliferation concern. The policy group also coordinates the project proposals with other U.S.
government programs that may involve the same institute or scientists. This process relies on the reviewers’ knowledge and experience with specific institutes and scientists and their expertise on policy issues. According to State Department officials, weapons scientists submit few proposals that are contrary to U.S. policy. State Department officials and science advisers from the U.S. national laboratories and other scientists also review the proposals for scientific merit to ensure that projects employ mostly senior scientists carrying out meaningful work. The science advisers forward proposals to two or three other U.S. scientists who specialize in the proposed area of work to obtain their views on the scientific implications of the work, including what they know about the scientists who submitted the proposal. Based on this review and their own experience, the advisers develop a consensus opinion on the merits of the proposed work and whether the United States should fund it. The interagency group recommends rejecting projects where less than half of the scientists are former senior weapons scientists. According to State Department officials, the Department focuses its funding efforts on projects where the majority of participants are senior scientists whose expertise represents a more significant proliferation threat than junior scientists or technicians. However, the State Department cannot independently verify the weapons experience of the senior scientists it has employed. The State Department relies on the scientists’ national governments to certify that the senior weapons scientists listed as participants in a project proposal actually have sufficient expertise to pose a proliferation risk. According to State Department officials, the group also considers the commercialization potential of the proposals as part of the review process. 
According to State Department and science center officials, although commercialization is not a primary goal, their ability to promote the sustainability of the program through the commercial application of scientific research is limited by the inherent challenges of finding commercial applications for any scientific research. In addition, the political and economic situation in Russia, Ukraine, and the other countries participating in the science centers remains very uncertain and thus deters foreign investors. Every project proposal is also reviewed for potential proliferation concerns. The State Department chairs an interagency group, including representatives from the Departments of Defense and Energy and other national security agencies, that examines each proposal to ensure that the projects the United States funds have only peaceful applications. For example, according to State Department officials, a proposal to develop a rocket that could launch several satellites at once was rejected on the grounds that this same technology could also be used to launch multiple warheads. Careful examination of the proposed work is particularly critical in the biological area, where the division between offensive and defensive research is often difficult to determine. The proliferation review group also weighs the risks that financing certain projects could help sustain a weapons institute infrastructure in the former Soviet Union by keeping institutes in operation that might have curtailed their research functions for lack of funds. After proposals are reviewed for potential policy, science, and proliferation concerns, officials from the Departments of State, Defense, and Energy meet to develop the official U.S. position on which project proposals to fund. 
During final project selection, the interagency group considers the information and recommendations developed during the other reviews, supplemented by past experience with institutes and scientists, to reach consensus on each project. The group also weighs other considerations. For example, State Department and science center staff said that they try to provide funds for projects at as many institutes as possible. A project with relatively weak scientific merit might receive funding if it is at an institute of high interest to the United States due to proliferation concerns. When the group reaches consensus on which projects to fund, it passes these instructions on to the U.S. representatives on the centers’ governing boards. Representatives from the funding parties on each board then jointly decide which projects will receive funding. The next step is for a member of the science center’s staff to work with the project team to fine-tune the official project agreement. The staff member and the project team will revise the project’s workplan and make any modifications required by the funding party. For example, in some cases the State Department has required project teams to add a U.S.-based collaborator, agree to additional oversight, or change the project’s budget to allow scientists to travel to the West more frequently during the course of the project. The funding parties are not bound to make any payments related to a project until the final project agreement has their approval and has been signed by the science center’s executive director. Once the project agreement has been signed, the project can begin. According to State Department officials, they cannot fund all of the project proposals that meet the State Department’s selection criteria due to funding constraints. For example, in preparation for the March 2001 meeting of the governing board for the center in Russia, the Department reviewed 148 proposals and found that 92 met U.S. funding criteria. 
However, the State Department funded only the 31 proposals with the highest number of senior scientists, greatest scientific merit, and/or the involvement of institutes of particular proliferation concern. From 1994 through the end of 2000, the United States had funded 590 projects that employed about 9,700 senior scientists. Figure 5 shows the number of senior scientists who worked on one or more U.S.-funded projects during the course of each year. These figures increased steadily from 1994 through 1999 and decreased slightly during 2000. About 6,500 senior scientists worked on U.S.-funded projects during 2000. Since 1994, more than half of the total number of people employed by U.S.-funded projects have been senior scientists. Although the State Department knows how many scientists it has employed through the projects it has funded, it does not know what portion of the target population of senior weapons scientists it has reached. The estimated number of senior weapons scientists in the Soviet Union at the time of its collapse varies from 30,000 to 75,000 scientists. During the past decade, an unknown number of senior weapons scientists left their research institutes to pursue other forms of employment, retired, or died. At some of the research institutes we visited, the institute directors told us that about half of their staff left within 2 years of the collapse, although they stated that most who left were junior scientists, technicians, and support staff. Given these uncertainties, the State Department can only estimate how much of the total population of senior scientists it has reached. For example, the 9,700 senior scientists employed by U.S.-funded projects to date could represent anywhere from 12 percent to 32 percent of the target population. According to the science centers, funding from all sources, including the United States, has employed about 21,000 senior scientists to date.
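Because the size of the target population is only bounded, not known, the share reached can itself only be bounded. Dividing the 9,700 scientists employed by each end of the 30,000-to-75,000 estimate yields roughly 13 and 32 percent, consistent with the 12-to-32-percent range cited above. A quick arithmetic sketch (variable names are ours):

```python
# Bounding the share of senior weapons scientists reached by U.S.-funded projects.
employed = 9_700                               # senior scientists on U.S.-funded projects, 1994-2000
low_estimate, high_estimate = 30_000, 75_000   # range of population estimates at the Soviet collapse

min_share = employed / high_estimate  # if the population was large (~13 percent)
max_share = employed / low_estimate   # if the population was small (~32 percent)

print(f"between {min_share:.0%} and {max_share:.0%} of the target population")
```

The width of this range, a factor of about 2.5, is driven entirely by the uncertainty in the population estimate, which is why the report treats the figure as only an estimate.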
The State Department does not directly monitor the activities or results of the work of scientists who are participating in U.S.-funded research projects. Instead, the Department relies on the mostly Russian and Ukrainian technical specialists and accountants at the science centers, overseen by managers from the United States, the European Union, Japan, and Canada, to monitor scientists’ progress in completing their research. The State Department also uses Department of Defense and outside auditors to conduct reviews of a sample of U.S.-funded projects. For the 35 projects we reviewed at nine institutes in Russia and Ukraine, the science centers were following their monitoring procedure. However, several factors limit the ability of the State Department to monitor the activities of scientists working on U.S.-funded projects. The State Department first relies on the mostly Russian and Ukrainian staff at the science centers to ensure that scientists are working on the research they are paid to produce. The science center staff do not observe the scientists on a day-to-day basis but rather (1) conduct on-site technical and financial monitoring at least once during each project, (2) review financial and technical reports submitted by the scientists, and (3) have frequent contacts with project scientists and receive input from U.S. and other western scientists who collaborate on the projects. For the 35 projects we reviewed, the science centers were following this monitoring procedure. Under the terms of the science center project agreements, science center staff have access to the locations where the research is conducted and to the personnel, equipment, and documentation associated with the projects. 
At least once during the course of a project, science center technical specialists and accountants spend a day at the institute to confirm that the research is progressing according to the project agreement by, among other things, conducting confidential interviews with individual scientists to discuss their involvement in the project; verifying that the amount of time scientists claim on their timesheets matches the financial reports submitted to the science centers; and discussing and observing project accomplishments such as results of experiments, prototypes of new technology, and computer simulations and databases. For the 35 projects we examined in detail, we found that the science center staff had generally followed their on-site monitoring procedures. The science centers had reports in their project files that documented the on-site monitoring. In addition, scientists we met with at the institutes described the on-site monitoring, including the questions asked during the confidential interviews. At one institute in Ukraine, we observed the science center staff conducting confidential interviews as part of on-site monitoring. The project agreements require the research institutes to submit quarterly financial reports and quarterly, annual, and final technical reports to the science centers. Only after performing routine checks of the financial reports do the science centers deposit the payments into the scientists’ individual bank accounts. The science centers also examine the technical reports to ensure that the project is achieving the technical results specified in the project agreement and determine whether the project is on schedule. For the 35 projects we selected, we verified that the science centers had received and analyzed the financial and technical reports required under the project agreements. 
In addition, scientists we spoke with at the research institutes also confirmed that they prepare and submit the reports according to the terms of the project agreements. In addition to the monitoring procedures provided under the project agreements, the science center staff have informal contact with scientists on the project team about once a week, which allows them to check on the status of projects on an ongoing basis. These frequent contacts occur when scientists purchase equipment through the science centers, make travel arrangements to participate in international conferences, or come to the science centers to use computers or submit reports in person. Each U.S.-funded project also has a U.S. or western collaborator, either a government agency or private company, that works with the scientists on the research. For example, collaborators attend international conferences with the scientists, visit the institutes to observe the project results, host visits by scientists to the United States, and sometimes conduct part of the research. The science centers seek feedback on the projects’ technical progress from the collaborators, who often have a high degree of expertise in the project area. When possible, the science centers also participate in meetings between the scientists and collaborators. Scientists at the research institutes we visited confirmed that they have frequent contact with the science center staff and collaborators. The State Department annually selects a number of U.S.-funded projects to be audited by the Defense Contract Audit Agency of the Department of Defense. During 1999 and 2000, the agency conducted 84 audits on behalf of the State Department. The auditors review financial reports submitted to the science centers and visit the institutes to interview selected scientists, examine timesheet completion procedures and individual scientists’ timesheets, and check the inventory of equipment purchased under the project. 
Based on these procedures, they determine, among other things, whether the scientists’ time records are reliable and maintained according to the terms of the project agreement and whether the weapons scientists working on the project are the same as those identified in the workplan. Technical auditors from U.S. industry or other government agencies accompanied the Defense Contract Audit Agency on 44 of the 84 audits conducted in 1999 and 2000. The technical auditors provided the scientific expertise necessary to evaluate the scientists’ technical performance and determine whether the amount of time the scientists claim they were working was commensurate with their technical performance, as documented in their scientific logbooks and research results. Because the technical auditors have the expertise to evaluate projects’ technical progress, the State Department wants technical auditors to accompany the Defense Contract Audit Agency on all future audits of science center projects. The science centers also undergo an annual external audit of their financial statements and project monitoring procedures. These external audits, conducted by international accounting firms hired by the science centers, include visits to research institutes to evaluate the science centers’ monitoring procedures and make recommendations regarding the ability of the science centers to monitor the amount of time that scientists spend on the science center projects. According to State Department and science center officials, the science centers take action to address deficiencies uncovered through monitoring. Science center officials stated that the problems they have uncovered through monitoring have been generally minor, for example, errors in conforming to science centers’ accounting requirements. At the science center in Ukraine, officials stated that the most serious violation they had uncovered was a scientist who was charging time to a project while he was in the hospital. 
They calculated how much he had been overpaid, and he paid the money back. External audits have found deficiencies in the timekeeping practices for a number of projects. For example, one audit found that some scientists had claimed more than the maximum amount of time they are allowed per year (220 days) and recommended additional procedures to prevent such occurrences in the future. The Defense Contract Audit Agency initially found that some scientists were charging the science centers the amount of time that had been budgeted in the project workplan rather than the actual amount of time they had worked. Usually, the scientists told the auditors that they had worked more than the amount of time they had claimed on their timesheets. For many projects, the technical auditors confirmed that the scientists were probably underreporting their time spent on the projects. However, the technical auditors for two projects at an institute in Russia found that some scientists could not provide sufficient evidence that they had worked on the projects for the time they had charged. The State Department temporarily ceased funding additional projects at this institute until the problem was resolved. Overall, according to the Defense Contract Audit Agency, the science centers have implemented procedures to reinforce correct timekeeping practices among project scientists, and the problems have lessened. The scope of the State Department’s monitoring of scientists is limited to the implementation of science center projects. Under the terms of the project agreements, the science centers and external auditors only monitor scientists while they are working on science center projects; they cannot track what the scientists are doing while they are not working on the projects or after the projects end. Furthermore, the project agreements do not prohibit the scientists from continuing to work on research for their institutes, including, in Russia, research related to nuclear weapons.
Although scientists may volunteer information about their other research activities, the State Department has no formal way to monitor what other research these scientists are performing or for whom they are performing it. This limitation is particularly relevant for scientists who work only part-time on science center projects. As shown in figure 6, during 2000 very few senior scientists worked full-time (defined by both science centers as 220 working days per calendar year). Seventy-five percent worked 4 ½ months (100 days) or less on a science center project during 2000, and some worked just a few days during the year. In addition, the project agreements provide the science centers and external auditors access only to institutes’ records related to projects funded by the science centers. The lack of access to records related to what the scientists are doing while they are not working on science center projects limits the ability of the science centers and external auditors to independently confirm the information that the scientists do provide about their activities. For example, monitoring cannot confirm whether scientists are receiving pay from other sources for the time they claim they are working on science center projects. Finally, the project agreements require that auditors and science center staff provide the institutes with 20 to 35 days’ advance notice before making visits to conduct on-site monitoring. According to State Department and Defense Contract Audit Agency officials, the advance notice limits the element of surprise and gives project scientists the opportunity to cover up deficiencies in their adherence to the project agreements. In written comments provided on a draft of this report, the Department of State concurred with the report’s major findings. However, the Department provided additional information to clarify specific sections of the draft report.
Specifically, the Department agreed with our finding that it relied on Russian and Ukrainian specialists to monitor the science center projects. However, the Department stated that it is confident that the specialists’ monitoring efforts comply with western standards and that the majority of these individuals are former Soviet weapons scientists who are now committed to the mission and nonproliferation objectives of the science centers. The Department also agreed with our finding that there are no reliable estimates of the total population of senior weapons scientists. However, the Department stated that anecdotal evidence suggests that the United States and other funding parties have engaged about half of the population of senior weapons scientists. Finally, while the Department stated that it would be impractical for the United States to keep track of the activities of the weapons scientists when they are not working for the science centers, it cited examples of how it maintains contact with current and past participants to varying degrees. The Department’s comments are presented in appendix I. To review the State Department’s project selection process, we met with officials from the Departments of State and Defense and the Department of Energy’s national laboratories who participate in the process. We also attended one meeting of the science advisers. We discussed the program’s scope and limitations with officials from the Departments of State and Defense and the U.S. national laboratories, as well as with U.S. representatives on the governing boards of both science centers. We also discussed these issues with the senior management at both centers. In addition, we reviewed the science centers’ agreements, statutes, and annual reports. The statistical data were compiled from reports obtained from the Chief Financial Officers at both centers.
To examine the monitoring procedures used to check whether scientists are working on the peaceful research they are paid to produce, we first met with State Department officials to discuss what monitoring procedures were in place. We then examined each component of the monitoring process in detail, as follows: We met with auditors from the Defense Contract Audit Agency and science advisers from the national laboratories to learn how they conduct their monitoring activities. We then reviewed the Defense Contract Audit Agency’s reports on its audits of U.S.-funded science center projects conducted during 1999 and 2000. We reviewed the reports prepared by the external auditors for both science centers and met with representatives from the firm that conducted the most recent audit of the center in Russia. We visited the science centers in Russia and Ukraine and met with officials at all levels of these organizations including the Executive Directors, Deputy Executive Directors, Chief Financial Officers, technical specialists, and members of the financial staff to discuss how they conduct technical and financial monitoring of projects. We compared these discussions with the centers’ written guidance. We also reviewed in detail the project documentation, including financial, technical, and monitoring reports, for 35 projects that had received U.S. funds. 
To verify that the monitoring process detailed in science center documents was actually taking place, we visited the following nine institutes in Russia and Ukraine where the 35 projects had recently been completed or were currently underway:

Paton Electric Welding Institute, Kiev, Ukraine (nuclear, chemical, and missile)
Institute of Semiconductor Physics, Kiev, Ukraine (nuclear and missile)
Frantsevich Institute for Problems of Materials Science, Kiev, Ukraine (nuclear and missile)
Moscow Engineering Physics Institute, Moscow, Russia (nuclear)
All-Russia Research Institute of Automatics, Moscow, Russia (nuclear)
State Scientific Research Institute of Organic Chemistry and Technology, Moscow, Russia (chemical)
State Scientific Institute of Immunological Engineering, Lyubuchany, Russia (biological)
State Research Center for Applied Microbiology, Obolensk, Russia (biological)
Central Aerohydrodynamic Institute, Zhukovsky, Russia (aeronautics/missile)

In selecting the 35 projects, we chose institutes that collectively did work in the four areas of proliferation concern. During our visits, we met with the institutes’ directors and members of each project team. In many cases, we also toured the facilities where they conducted their work. Although we only selected projects to review that had received U.S. funds, in some cases other donors had also provided financial support. We performed our work from December 2000 through April 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Honorable Colin Powell, Secretary of State. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix II. In addition to the person named above, Joe Cook, Dave Maurer, and Valérie Nowak made key contributions to this report.
Since 1994, the United States has appropriated $227 million to support two multilateral science centers in Russia and Ukraine. The science centers pay scientists who once developed nuclear, chemical, and biological weapons and missile systems for the Soviet Union to conduct peaceful research. By employing scientists at the science centers, the United States seeks to reduce the risks that these scientists could be tempted to sell their expertise to terrorists. This report examines the (1) selection procedures the State Department uses to fund projects that meet program objectives and (2) monitoring procedures the State Department uses to verify that scientists are working on the peaceful research they are paid to produce. GAO found that State lacks complete information on the total number and locations of senior scientists and has not been granted access to senior scientists at critical research institutes under the Russian Ministry of Defense. GAO also found that State has designed an interagency review process to select and fund research proposals submitted by weapons scientists to the science centers in Russia and Ukraine. The overall goal is to select projects that reduce proliferation risks to the United States and employ as many senior scientists as possible. The science centers were following their monitoring processes and were taking steps to address audit deficiencies.
As with servicemembers and federal workers, industry personnel must obtain a security clearance to gain access to classified information, which is categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense or foreign relations. For top secret information, the damage that unauthorized disclosure could reasonably be expected to cause is “exceptionally grave damage”; for secret information, it is “serious damage”; and for confidential information, it is “damage.” To ensure the trustworthiness, reliability, and character of personnel in positions with access to classified information, DOD relies on a multiphased personnel security clearance process. Figure 1 shows six phases that could be involved in determining whether to grant an actual or a potential job incumbent a clearance. The three phases shown in gray are those that are least transparent to individuals requesting an initial clearance. Such individuals may not have been aware that they are allowed to apply for a clearance only if a contractor determines that access is essential in the performance of tasks or services related to the fulfillment of a classified contract (Phase 1), have certain appeal rights if their clearance request is denied or their clearance is subsequently revoked (Phase 5), and may need to renew their clearance in the future if they occupy their position for an extended period (Phase 6). In the application-submission phase, if a position requires a clearance (as has been determined in Phase 1), then the facility security officer must request an investigation of the individual.
The request could be the result of needing to fill a new position for a recent contract, replacing an employee in an existing position, renewing the clearance of an individual who is due for clearance updating (Phase 6), or processing a request for a future employee in advance of the hiring date. Once the requirement for a security clearance is established, the industry employee completes a personnel security questionnaire using OPM’s Electronic Questionnaires for Investigations Processing (e-QIP) system, or a paper copy of the standard form 86. After a review, the facility security officer submits the questionnaire and other information such as fingerprints to OPM. In the investigation stage, OPM or one of its contractors conducts the actual investigation of the industry employee by using standards that were established governmentwide in 1997. As table 1 shows, the type of information gathered in an investigation depends on the level of clearance needed and whether an investigation for an initial clearance or a reinvestigation for a clearance update is being conducted. For either an initial investigation or a reinvestigation for a confidential or secret clearance, investigators gather much of the information electronically. For a top secret clearance, investigators gather additional information that requires much more time-consuming efforts, such as traveling, obtaining police and court records, and arranging and conducting interviews. In August 2006, OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for the investigation to support a secret or confidential clearance. After the investigation is complete, OPM forwards a paper copy of the investigative report to DISCO for adjudication. 
In the adjudication stage, DISCO or some other adjudication facility uses the information from the investigative report to determine whether an individual is eligible for a security clearance. For our May 2004 report, an OUSD(I) official estimated that it took three times longer to adjudicate a top secret clearance than it did to adjudicate a secret or confidential clearance. If the report is determined to be a “clean” case—a case that contains no or minimal potential security issues—the DISCO adjudicators determine eligibility for a clearance. However, if the case is determined to be an “issue” case—a case containing information that might disqualify an individual for a clearance (e.g., serious foreign connections or drug- or alcohol-related problems)—then DISCO forwards the case to DOHA adjudicators for the clearance-eligibility decisions. Regardless of which office renders the adjudication to approve, deny, or revoke eligibility for a security clearance, DISCO issues the clearance-eligibility decision and forwards the determination to the industrial contractor. All adjudications are based on 13 federal adjudicative guidelines established governmentwide in 1997 and implemented by DOD in 1998 (see app. II). The President approved an update of the adjudication guidelines on December 29, 2005. According to OMB, DOD should be using these updated guidelines. Industry personnel contracted to work for the federal government waited more than one year on average to receive top secret security clearances, and government statistics did not portray the full length of time it takes many applicants to obtain a clearance. Industry personnel in the population from which our sample was randomly selected waited on average over one year for initial clearances and almost a year and a half for clearance updates. 
The phase of the process between the time an applicant submits his or her application and the time the investigation actually begins averaged over 3 months, and government statistics did not fully account for the time required to complete this phase. In addition, the investigative phase for industry personnel was not timely, and government statistics did not account for the full extent of the delay. Delays in the clearance process may cost money and pose threats to national security. Industry personnel granted eligibility for top secret clearances from DISCO in January and February 2006 waited an average of 446 days for their initial clearance or 545 days for their clearance update. DISCO may, however, have issued an interim clearance to some of these industry personnel, which might have allowed them to begin classified work for many contracts. Beginning in December 2006, IRTPA will require that 80 percent of all clearances—regardless of clearance level—be completed in an average of 120 days. The government plan for improving the personnel security clearance process provides quarterly goals for various types of initial clearances. Since the completion of initial clearances is given priority over completion of clearance updates, much of our discussion in this section focuses on the timeliness of initial clearances. The application-submission phase of the clearance process took on average 111 days for the initial clearances that DISCO adjudicated in January and February 2006 (see table 2). The starting point for our measurement of this phase was the date when the application was submitted by the facility security officer. Our end point for this phase was the date that OPM scheduled the investigation into its Personnel Investigations Processing System (PIPS). 
We used this starting date because the government can begin to incur an economic cost if an industry employee cannot begin work on a classified contract because of delays in obtaining a security clearance, and this end date because OPM currently uses this date as its start point for the next phase in the clearance process. The governmentwide plan on improving the clearance process specified that “investigation submission” (i.e., application-submission) be completed in 14 calendar days or less. Therefore, at 111 days, the application-submission phase took nearly 100 more days on average than allocated. Several factors contribute to the amount of time we observed in the application-submission phase, including rejecting applications multiple times, multiple completeness reviews, and manually entering data from paper applications. For example, an April 2006 DOD Office of Inspector General report cited instances where OPM rejected applications multiple times due to inaccurate information. The security managers interviewed for that report said it appeared that OPM did not review the entire document for all errors before returning it. Security managers at two DOD locations noted that in some cases OPM had rejected the same application submission three or four times for inaccurate information. The cited inaccuracies included outdated references, telephone numbers, and signatures, as well as incorrect zip codes. Another source of delay is the multiple levels of review that are performed before the application is accepted. Reviews of the clearance application might include the corporate facility security officer, DISCO adjudicators, and OPM staff. A third source of the delay in the application-submission phase is the time that it takes OPM to key-enter data from paper applications.
In April 2006, OPM’s Associate Director in charge of the investigations unit stated that applications submitted on paper took an average of 14 days longer than submissions through OPM’s Electronic Questionnaires for Investigations Processing (e-QIP). She also noted e-QIP developments that could portend future timeliness improvements governmentwide: by May 2006, over 221,000 investigations had been requested through e-QIP by 50 agencies (up from 17,000 submissions by 27 agencies in June 2005), and OPM had set a goal of reducing the rejection rate for e-QIP applications from the current 9 percent to 5 percent. The gray portion of table 2’s application-submission phase identifies some tasks that are not currently included in the investigation phase of the clearance process but which could be included in the investigation phase, depending on the interpretation of what constitutes “receipt of the application for a security clearance by an authorized investigative agency”—IRTPA’s start date for the investigations phase. Investigations for the initial top secret clearances of industry personnel took an average of 286 days for DISCO cases adjudicated in January and February 2006 (see table 2). During the same period, investigations for top secret clearance updates took an average of 419 days, almost 1½ times as long as the initial investigations. Compared to our findings, OPM reported that the time required to complete initial investigations for top secret clearances was much shorter when it analyzed governmentwide data for April 2006. The newer data indicate that OPM completed the initial investigations in 171 days. While some of that difference in investigation times reported by GAO and OPM may be related to better productivity, a later section of this report identifies other factors that could have contributed to the difference. The shorter period of 171 days is less than the 180 days provided as a goal in the governmentwide plan.
But the methods for computing the 171 days may not have included the total average time required to complete an initial clearance. Many factors impede the speed with which OPM can deliver investigative reports to DISCO and other adjudication facilities. As we have previously identified, DOD’s inability to accurately project the number of requests for security clearances is a major impediment to investigative workload planning and clearance timeliness. As we noted in 2004 when both OPM and DOD were struggling to improve investigation timeliness, backlogged investigations contributed to delays because most new requests for investigations remain largely dormant until earlier requests are completed. The governmentwide plan for improving the personnel security clearance process also asserted that while the total number of OPM federal and contract investigators was sufficient to meet the timeliness requirements of the IRTPA, many of the investigative staff are relatively inexperienced and do not perform at a full-performance level. In May 2006, we noted that OPM reported progress in developing a presence to investigate leads overseas, but acknowledged that it will take time to meet the full demand for overseas investigative coverage. In May 2006, the Associate Director in charge of OPM’s investigations unit indicated that her unit continues to have difficulty obtaining national, state, and local records from third-party providers. Similarly, representatives for contractors and their associations are concerned that new investigative requirements like those in Homeland Security Presidential Directive-12 could further slow responses to OPM’s requests for information from national, state, and local agencies.
Finally, more requests for top secret clearances could slow OPM’s ability to meet the IRTPA timeliness requirements, since investigations for that level of clearance are estimated to take 10 times as many staff hours as investigations for secret and confidential clearances. DISCO adjudicators took an average of 39 days to grant initial clearance eligibility to the industry personnel whose cases were decided in January and February 2006 (see table 2). The measurement of this phase for our analysis used the same start and stop dates that OPM uses in its reports, starting on the date that OPM closed the report and continuing through the date that DISCO adjudicators decided clearance eligibility. In December 2006, IRTPA will require that at least 80 percent of the adjudications be completed within 30 days. As of June 2006, DISCO reported that it had adjudicated 82 percent of its initial top secret clearances within 30 days. In its report, DISCO excluded the time required to print and transfer investigative reports from OPM to DISCO. Two data reliability concerns make it difficult to interpret statistics for the adjudication phase of the clearance process. First, the activities in the gray section in the adjudication phase of table 2 show that the government’s current procedures for measuring the time required for the adjudication phase include tasks that occur before adjudicators actually receive the investigative reports from OPM. Although the information that we analyzed could not be used to determine how much time had elapsed before DISCO received the investigative reports, DOD adjudication officials recently estimated that these printing and transfer tasks had taken 2 to 3 weeks. OUSD(I) and adjudication officials said that inclusion of this time in the adjudication phase holds adjudicators accountable for time that is not currently in their control.
They acknowledge that OPM has offered faster electronic delivery of the investigative reports, but they countered that they would need to then print the reports since the files are not offered in an electronic format that would allow the adjudicators to easily use the electronic information. The second data reliability problem is DOD’s nonreporting of final dates of adjudication decisions to OPM. While we had the dates that the clearance eligibility was determined for our data, OPM officials have noted that DOD departmentwide reported about 10 percent of its adjudication decisions back to OPM for August 2006. Although OPM reports this information as specified by the government plan for improving the security clearance process, OPM officials acknowledged that they have not enforced the need to report this information. When asked about this issue, DOD officials indicated that OPM would not accept a download of adjudication dates from JPAS that DOD had offered to provide on compact discs. Since DOD represents about 80 percent of the security clearances adjudicated by the federal government, not including these data could make it appear as if adjudication timeliness is different than it actually is. In 2004, we outlined unnecessary costs and threats to national security that result from delays in determining clearance eligibility. Those same negative consequences apply today. Delays in completing initial security clearances may have an economic impact on the costs of performing classified work within or for the U.S. government. In a 1981 report, we estimated that DOD’s investigative backlog of overdue clearances cost nearly $1 billion per year in lost productivity. More than a decade later, a Joint Security Commission report noted that the costs directly attributable to investigative delays in fiscal year 1994 could have been as high as several billion dollars because workers were unable to perform their jobs while awaiting a clearance. 
While newer overall cost estimates are not available, the underlying reasons—the delays in determining clearance eligibility that we documented in this report—still exist today. The impact of delays in completing initial clearances affects industry, and therefore affects the U.S. government, which is funding the work that requires the clearances. In a May 2006 congressional hearing, a representative for a technology association testified that retaining qualified personnel is resulting in salary premiums as high as 25 percent for current clearance holders. The association representative went on to note that such premiums raise costs to industry, which in turn passes on the costs to the government and taxpayers. In 2004, representatives of a company with $1 billion per year in sales stated that their company offered $10,000 bonuses to its employees for each person recruited who already had a security clearance. In cases where recruits left for the company in question, their former companies faced the possibility of having to backfill a position, as well as possibly settling for a lower level of contract performance while a new employee was found, obtained a clearance, and learned the former employee’s job. Also, industry representatives discussed instances where their companies gave hiring preferences to cleared personnel who could do the job but were less qualified than others who did not possess a clearance. The chair of the interagency Personnel Security Working Group at the time of our 2004 report noted that a company might hire an employee and begin paying that individual, but not assign any work to the individual until a clearance is obtained. Also, the head of the interagency group noted that commands, agencies, and industry might incur lost-opportunity costs if the individual chooses to work somewhere else rather than wait to get the clearance before beginning work.
The negative effects of the failure to deliver timely determinations of initial clearance eligibility extend beyond industry personnel to servicemembers and federal employees. An April 2006 DOD Office of Inspector General report provided examples to illustrate how delays in the clearance process can result in negative consequences such as nonproductive time waiting for a clearance. That report noted that delays have caused students at military training facilities to remain in a holdover status while waiting for a final clearance to complete training courses, graduate, or deploy. In addition, students without a final clearance may have their duty stations changed, which impacts their ability to fully support DOD missions for which they were trained. Delays in completing clearance updates can have serious but different negative consequences than those stemming from delays in completing initial clearance-eligibility determinations. Delays in completing clearance updates may lead to a heightened risk of national security breaches. Such breaches involve the unauthorized disclosure of classified information, which can have effects that range from exceptionally grave damage to national security in the case of top secret information to damage in the case of confidential information. In 1999, the Joint Security Commission reported that delays in initiating investigations for clearance updates create risks to national security because the longer individuals hold clearances the more likely they are to be working with critical information systems. The timeliness statistics that OPM provided in the recent congressional hearings do not convey the full magnitude of the investigations-related delays facing the government. 
In her May 17, 2006, congressional testimony statement, the Associate Director in charge of OPM’s investigations unit said that OPM continued to make “significant progress” in reducing the amount of time needed to complete initial security clearance investigations. She supported her statement with statistics that showed OPM’s initial investigations for top secret clearances governmentwide averaged 284 days in June 2005 and decreased to 171 days in April 2006 (see table 3). When we converted these two timeliness statistics to a percentage, we found that the average time to complete an investigation for an initial top secret clearance in April 2006 was about 60 percent of what it had been in June 2005. We also calculated the percentage change for the numbers of investigations completed in the same 2 months and found that OPM had completed about 68 percent (5,751 versus 8,430) as many initial investigations for top secret clearances in April 2006 as it did in June 2005. Her statement went on to mention that another problem was developing—the inventory of pending investigations was increasing because of difficulty obtaining information from third-party providers. The testimony statement did not provide timeliness statistics for the investigations that are conducted for clearance updates, but that type of investigation probably had longer completion times than did the initial investigations. Our previously reviewed statistics on industry personnel (see table 2) indicated that clearance update investigations took about 1½ times as long as the initial investigations. The absence of information on clearance-update investigations from the OPM’s Associate Director’s testimony statement may be partially explained by the higher priority that OMB and OPM have placed on completing initial clearances so that individuals who have not previously had clearances can begin classified work sooner. 
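As a cross-check on the two percentage comparisons above, the figures can be recomputed directly from the reported OPM statistics. The short calculation below is illustrative only; every input is a number cited in this section.

```python
# Figures reported by OPM for initial top secret clearance investigations,
# as cited in the May 17, 2006, testimony statement.
avg_days_jun_2005 = 284      # average days to complete, June 2005
avg_days_apr_2006 = 171      # average days to complete, April 2006
completed_jun_2005 = 8430    # investigations completed, June 2005
completed_apr_2006 = 5751    # investigations completed, April 2006

# April 2006 average completion time as a share of the June 2005 average
# (about 60 percent, matching the comparison in the text).
time_share = avg_days_apr_2006 / avg_days_jun_2005

# April 2006 completions as a share of June 2005 completions
# (about 68 percent, matching the comparison in the text).
volume_share = completed_apr_2006 / completed_jun_2005

print(f"Time share: {time_share:.0%}")
print(f"Volume share: {volume_share:.0%}")
```

The two ratios confirm that the faster average completion time in April 2006 coincided with a smaller number of completed investigations.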
At the same time, the absence of information on clearance-update investigations does not provide all stakeholders—Congress, agencies, contractors attempting to fulfill their contracts, and employees awaiting their clearances—with a complete picture of clearance delays. We have noted in the past that focusing on completing initial clearance investigations could negatively affect the completion of clearance-update investigations and thereby increase the risk of unauthorized disclosure of classified information. The testimony statement did not indicate whether or not the statistics on complete investigations included a type of incomplete investigation that OPM sometimes treats as being complete. In our February 2004 report, we noted that OPM’s issuance of “closed pending” investigations—investigative reports sent to adjudication facilities without one or more types of source data required by the federal investigative standards—causes ambiguity in defining and accurately estimating the backlog of overdue investigations. We also noted in that report that cases that are closed pending the provision of additional information should continue to be tracked separately in the investigations phase of the clearance process. According to recently released OPM data, between February 20, 2005, and July 1, 2006, the number of initial top secret clearance investigative reports that were closed pending the provision of additional information increased from 14,841 to 18,849, a 27 percent increase. DISCO officials and representatives from some other DOD adjudication facilities have indicated that they will not adjudicate closed pending cases since critical information is missing. OPM, however, has stated that other federal agencies review the investigative reports from closed pending cases and may determine that they have enough information for adjudication.
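The 27 percent growth in closed pending reports follows directly from the two counts reported by OPM; the percentage-change arithmetic is shown below purely for illustration.

```python
# Closed pending initial top secret investigative reports, per OPM data.
reports_feb_2005 = 14841   # count as of February 20, 2005
reports_jul_2006 = 18849   # count as of July 1, 2006

# Absolute and relative growth over the period (about 27 percent).
increase = reports_jul_2006 - reports_feb_2005
percent_increase = increase / reports_feb_2005

print(f"Increase: {increase:,} reports ({percent_increase:.0%})")
```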
Combining partially completed investigations with fully completed investigations overstates how quickly OPM is supplying adjudication facilities with the information they request to make their clearance-eligibility determinations. OPM told us that it does not continue counting the time when agencies return investigative reports for rework because they were in some way deficient. Instead, OPM begins the count of days in the investigative phase anew. OPM says that approximately 1 to 2 percent of its investigations are reopened for such work. OPM has indicated that system problems prevent it from continuing to monitor these returned investigations as a continuation of the prior investigations. By not fully capturing all investigative time—including the review time which occurred at the adjudication facility and resulted in returning a report—OPM is undercounting the number of days that it takes to conduct an investigation. Finally, our analysis of OPM’s quarterly reports, which are provided to OMB and Congress, revealed computational errors. For example, using information from such reports, we found that the number of adjudications completed in the second quarter of 2006 was off by about 12,000 cases. One reason for the errors was mistakes in the ad hoc programs used to extract the data from OPM’s database, rather than the use of a documented and verified computer program that could be rerun as data are updated. Without complete and accurate data and analyses, Congress, OMB, and others do not have full visibility over the timeliness of the clearance process. OPM provided incomplete investigative reports to DOD adjudicators, which they used to determine top secret clearance eligibility. Almost all (47 of 50) of the sampled investigative reports we reviewed were incomplete based on requirements in the federal investigative standards.
In addition, DISCO adjudicators granted clearance eligibility without requesting additional information for any of the incomplete investigative reports and did not document that they considered some adjudicative guidelines when adverse information was present in some reports. Granting clearances based on incomplete investigative reports increases risks to national security. In addition, use of incomplete investigative reports and not fully documenting adjudicative considerations may undermine the government’s efforts to increase the acceptance of security clearances granted by other federal agencies. In our review of 50 initial investigations randomly sampled from the population used in our timeliness analyses, we found that almost all (47 of 50) of the investigative reports were missing documentation required by the federal investigative standards. The missing data were of two general types: (1) the absence of documentation showing that an investigator gathered the prescribed information in each of the applicable 13 investigative areas and included requisite forms in the investigative report, and (2) information to help resolve issues (such as conflicting information on indebtedness) that were raised in other parts of the investigative report. The requirements for gathering these types of information were identified in federal investigative standards published about a decade ago. We categorized an investigative area as incomplete if the investigative report did not contain all of the required documentation for that area or issue resolution. For example, we categorized the employment area as incomplete if investigators did not document a check of the subject’s employee personnel file or the required number of interviews of employment references such as supervisors and coworkers. At least half of the 50 reports that we examined did not contain the required documentation in three investigative areas: residence, employment, and education (see fig. 2). 
In addition, many investigative reports contained multiple deficiencies within each of these areas. For example, multiple deficiencies might be present in the residence area because investigators did not document a rental record check and an interview with a neighborhood reference. Looking at the data for figure 2 in a different way shows that three of every five reports that we reviewed had at least three investigative areas that did not have all of the prescribed documentation. Thirty-eight of the 50 investigative reports had two to four investigative areas with at least one piece of missing documentation (see fig. 3).

The following examples illustrate some of the types of documentation missing from the investigative reports that we reviewed. When we discussed our findings for these investigative reports with OPM Quality Management officials, they agreed that the OPM investigators should have included documentation in the identified investigative areas.

Residence, social, and employment documentation were missing. One investigative report did not contain documentation of all of the required residence interviews or of rental record checks at two of the subject’s residences. In addition, it contained no information from required investigator-developed social references, although information from interviews with two subject-identified social references was in the report. Federal investigative standards require investigators to interview at least two of the subject-identified social references and two additional social references that the investigator develops during the course of the investigation. Finally, investigators documented performing only 3 of the 10 employment interviews that would be required for the subject’s five jobs covered by the investigative scope.

Residence, social, and employment documentation were missing.
An investigative report on a DOD industry employee did not contain documentation of interviews with any neighborhood references at the residences where the subject had lived for 10 years. Similarly, the report contained interview documentation from one subject-identified but no investigator-developed social reference. Of the eight employment reference interviews required by federal standards for this investigative report, only three were documented as performed.

Spouse national record documentation was missing. In another investigative report, required documentation for four national agency record checks of the subject’s cohabitant of 35 years was missing. The four missing checks were the Federal Bureau of Investigation name and fingerprint checks, OPM’s Security/Suitability Investigations Index, and DOD’s Spouse Defense Clearance and Investigations Index.

Although federal standards indicate that investigations may be expanded as necessary to resolve issues, according to OPM, (1) issue resolution is a standard part of all initial investigations and periodic reinvestigations for top secret clearances and (2) all issues developed during the course of an investigation should be fully resolved in the final investigative report provided to DOD. We found a total of 36 unresolved issues in 27 of the investigative reports. The three investigative areas with the most unresolved issues were financial considerations, foreign influence, and personal conduct (see fig. 4).

The following examples highlight investigative areas that lacked the documentation needed to resolve an issue. When we reviewed these investigative reports with OPM Quality Management officials, they agreed that the investigators should have included documentation to resolve the issues.

Personal conduct and financial issues were unresolved. One investigative report did not contain documentation of the resolution of possible extramarital affairs and financial delinquency.
During the course of the investigation, the subject reported having extramarital affairs; however, there was no documentation to show that these affairs had been investigated further. Also, the subject’s clearance application indicated cohabitation with an individual with whom the subject had previously had a romantic relationship, but there was no documentation that record checks were performed on the cohabitant. Moreover, information in the investigative report indicated that the subject defaulted on a loan with a balance of several thousand dollars; however, no other documentation suggested that this issue was explored further. Foreign influence issues were unresolved. The clearance application showed that the subject had traveled to an Asian country to visit family. However, in the subject interview, the subject reported not knowing the names of the family members or the city in which one relative lived. There was no documentation in other parts of the investigative report of a follow-up discussion with the subject about this issue. Financial issues were unresolved. An industry employee indicated “no” in his clearance application when asked if during the last 7 years he had a lien placed against his property for failing to pay taxes or other debt, but information in another part of the investigative report indicated that a tax lien in the tens of thousands of dollars had been placed against his property. The investigative report did not have additional information to indicate whether or not investigators asked the subject about the omission on the application or the tax lien itself. Although we found that the interview narratives in some of the 50 OPM investigative reports were limited in content, we did not identify them as being deficient for the purposes of our statistical analysis because such an evaluation would have required a subjective assessment that we were not willing to make. 
For example, in our assessment of the presence or absence of documentation, we found a 35-word narrative for a subject interview of a naturalized citizen from an Asian country. It stated only that the subject did not have any foreign contacts in his birth country and that he spent his time with family and participated in sports. Nevertheless, others with more adjudicative expertise voiced concern about the issue of documentation adequacy. At their monthly meeting in April 2006, top officials representing DOD’s adjudication facilities were in agreement that OPM-provided investigative summaries had been inadequate. The OPM Investigator’s Handbook provides guidance that directs investigators to be brief in the interview narratives but not to sacrifice content. Narrative documentation is required for subject interviews and all interviews with references contacted in the investigation, including neighbors, character references, and coworkers. The Associate Director of OPM’s investigations unit and her Quality Management officials cited the inexperience of the investigative workforce as one of the possible causes for the incomplete investigative reports we reviewed. This inexperience is due to the fact that OPM has rapidly increased the size of the investigative workforce. In December 2003, GAO estimated that OPM and DOD had around 4,200 full-time equivalent investigative personnel. In May 2006, the Associate Director said that OPM had over 8,600 employees. The Associate Director also indicated that variations in the training provided to federal and contractor investigative staff could be another reason for the incompleteness. These variations can occur since each contract investigative company is responsible for developing the training course for its employees. 
She, however, added that OPM (1) publishes the Investigator’s Handbook that provides guidance on how to conduct an investigation and forms the basis for the training, (2) approves the training curriculum for each contractor, and (3) occasionally monitors actual training sessions. The Associate Director also noted that she had little indication from her customers—adjudicators—that the investigative reports had problems since adjudicative facilities were returning 1 to 2 percent of the reports for rework. In our November 2005 testimony evaluating the government plan for improving the personnel security clearance process, we noted that the number of investigations returned for rework is not by itself a valid indicator of the quality of investigative work because adjudication officials said they were reluctant to return incomplete investigations in anticipation of further delays. We went on to say in November 2005 that regardless of whether that metric remains a part of the government plan, developers of the plan may want to consider adding other indicators of the quality of investigations. When we asked if OMB and OPM had made changes to the government plan to address quality-measurement and other shortcomings that we had identified in our November 2005 testimony, the Associate Director said the plan had not been modified to address our concerns but implementation of the plan was continuing. OPM’s Associate Director outlined new quality control procedures that were put in place after the investigations that we reviewed were completed. Among other things, OPM has a new contractor responsible for reviewing the quality of its investigative reports, a new organizational structure for its quality control group, and new quality control processes. After describing these changes, the Associate Director acknowledged that it will take time before the positive effects from the changes will be fully realized. 
DISCO adjudicators granted top secret clearance eligibility for the 27 industry personnel whose investigative reports contained unresolved issues without requesting additional information or documenting in the adjudicative report that the information was missing. Furthermore, in 17 cases, adjudicators did not document consideration of guidelines. In making clearance-eligibility determinations, the federal guidelines require adjudicators to consider (1) guidelines covering 13 specific areas such as foreign influence and financial considerations, (2) adverse conditions or conduct that could raise a security concern and factors that might mitigate (alleviate) the condition for each guideline, and (3) general factors related to the whole person. (See app. II for additional details on these three types of adjudicative considerations.) According to a DISCO official, DISCO and other DOD adjudicators are to record information relevant to each of their eligibility determinations in JPAS. They do this by selecting applicable guidelines and mitigating factors from prelisted responses and may type up to 3,000 characters of additional information. DISCO adjudicators granted clearance eligibility for 27 industry personnel whose investigative reports did not contain the required documentation to resolve issues raised in other parts of the investigative reports (see fig. 4). The corresponding adjudicative reports for the 27 industry personnel did not contain documentation showing that adjudicators had identified the information as missing or that they attempted to return the investigative reports to obtain the information required by the federal adjudicative guidelines. The following are examples of unresolved issues that we found in adjudicative and investigative reports and later discussed with DISCO officials, including administrators and adjudicators. 
For both examples, the DISCO officials agreed that additional information should have been obtained to resolve the issues before the industry personnel were granted top secret clearances.

Information to resolve a foreign influence issue was missing. A state-level record check on an industry employee indicated that the subject was part owner of a foreign-owned corporation. Although the DISCO adjudicator applied the foreign influence guideline for the subject’s foreign travel and mitigated that foreign influence issue, there was no documentation in the adjudicative report to acknowledge or mitigate the foreign-owned business.

Information to resolve a foreign influence issue was missing. An industry employee reported overseas employment on the clearance application, but the subject’s adjudicative and investigative reports did not contain other documentation of the 6 years (all within the scope of the investigation) that the subject spent working for a DOD contractor in two European countries. For example, the subject interview documentation did not indicate whether the subject’s relationships with foreign nationals had been addressed. The adjudicative and investigative reports did not document verification of the subject’s residence and interviews with overseas social references. Furthermore, the adjudicative report did not indicate that the foreign influence guideline was considered as part of the clearance determination.

When asked why the adjudicators did not provide the required documentation in JPAS, the DISCO officials said that DISCO adjudicators review the investigative reports for sufficient documentation to resolve issues and will ask OPM to reopen a case if they do not have enough information to reach an eligibility determination. The DISCO officials and Defense Security Service Academy personnel who teach adjudicator training courses cited risk management as a reason that clearance determinations are made without full documentation.
They said that adjudicators make judgment calls about the amount of risk associated with each case by weighing a variety of past and present, favorable and unfavorable information about the person to reach an eligibility determination. The trainers also said that adjudicators understand that investigators may not be able to obtain all of the information needed to resolve all issues. Notably, DISCO and DOHA officials told us that DISCO adjudicators determine eligibility for cases with few or no issues and that DOHA adjudicates cases with potentially more serious issues. Seventeen of the 50 adjudicative reports were missing documentation on a total of 22 guidelines for which issues were present in the investigative reports. The guideline documentation missing most often was for foreign influence, financial considerations, alcohol consumption, and personal conduct issues (see fig. 5). We, like DISCO adjudicators, used the Adjudicative Desk Reference and DOD’s Decision Logic Table to help determine whether or not documentation of a guideline was needed. An example of the lack of documentation shown in figure 5 was when DISCO adjudicators did not record consideration of the personal conduct guideline despite a subject’s involvement in an automobile accident while driving with a suspended driver’s license, no auto insurance, and an expired car license. DISCO officials stated that procedural changes associated with JPAS implementation contributed to the missing documentation on guidelines. DISCO began using JPAS in February 2003, and it became the official system for all DOD adjudications in February 2005. Before February 2005, DISCO adjudicators were not required to document the consideration of a guideline issue unless adverse information could disqualify an individual from being granted clearance eligibility. 
After JPAS implementation, DISCO adjudicators were trained to document in JPAS their rationale for the clearance determination and the adverse information from the investigative report, regardless of whether or not an adjudicative guideline issue could disqualify an individual from obtaining a clearance. The administrators also attributed the missing guideline documentation to a few adjudicators attempting to produce more adjudication determinations. Decisions to grant clearances based on incomplete investigations increase risks to national security because individuals can gain access to classified information without being vetted against the full federal standards and guidelines. Although there is no guarantee that individuals granted clearances based on complete investigations will not engage in espionage activities, complete investigations are a critical first step in ensuring that those granted access to classified information can be trusted to safeguard it. Adjudicators’ reviews of incomplete investigative reports can have negative economic consequences for adjudication facilities, regardless of whether the incomplete report is (1) an inadvertent failure by OPM to detect the missing information during its quality control procedures or (2) a conscious decision to forward a closed pending case that OPM knows is not complete. Specifically, adjudication facilities must use adjudicator time to review cases more than once and then use additional time to document problems with the incomplete investigative reports. Conversely, an adjudicative review of incomplete cases could have the benefit of alerting adjudicators to negative information on a person who has been granted an interim initial clearance so that the adjudication facility could determine whether that interim clearance should be revoked pending a full investigative report. Incomplete investigations and adjudications undermine the government’s efforts to move toward greater clearance and access reciprocity. 
An interagency working group, the Security Clearance Oversight Steering Committee, has noted that agencies are reluctant to be accountable for poor quality investigations and/or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies need to have confidence in the quality of the clearance process. Without full documentation of investigative actions, information obtained, and adjudicative decisions, agencies could continue to require duplicative investigations and adjudications.

Incomplete timeliness data limit the visibility of stakeholders and decision makers in their efforts to address long-standing delays in the personnel security clearance process. For example, not accounting for all of the time that is required when industry personnel submit an application multiple times before it is accepted limits the government’s ability to accurately monitor the time required for each step in the application-submission phase and identify positive steps that facility security officers, DISCO adjudicators, OPM investigative staff, and other stakeholders can take to speed the process. Similarly, OPM’s procedure of restarting the measurement of investigation time for the 1 to 2 percent of investigative reports that are sent back for quality control reasons does not hold OPM fully accountable for total investigative time when deficient products are delivered to its customers. In fact, restarting the time measurement for reworked investigations could positively affect OPM’s statistics if the reworked sections of the investigation take less time than did the earlier effort to complete the larger portion of the investigative report.

Information technology problems are another source of needless delay. Failure to fully utilize e-QIP adds about 2 weeks to the application-submission time, and the government must pay to have information key-entered into OPM’s investigative database.
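The multiple-submission gap noted above can be seen in a minimal sketch that contrasts timing only the final, accepted application with timing the whole submission phase; the submission records here are hypothetical:

```python
from datetime import date

# Hypothetical record of one applicant's submission attempts:
# (date submitted, date rejected or accepted, accepted?)
attempts = [
    (date(2006, 3, 1), date(2006, 3, 8), False),    # rejected: missing signature
    (date(2006, 3, 15), date(2006, 3, 20), False),  # rejected: incomplete form
    (date(2006, 4, 3), date(2006, 4, 10), True),    # accepted
]

# Timing only the accepted attempt understates the phase: 7 days.
last_attempt_only = (attempts[-1][1] - attempts[-1][0]).days

# Full accounting runs from the first submission to final
# acceptance, including the two rejected attempts: 40 days.
full_phase = (attempts[-1][1] - attempts[0][0]).days

print(last_attempt_only, full_phase)
```

Either figure can be computed from the same records; only the full measure attributes the weeks consumed by rejected submissions to the application-submission phase where they occurred.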
Likewise, an estimated 2 to 3 weeks are added to the adjudication phase because of the need to print and ship investigative reports to DISCO and other adjudication facilities. These and other reasons for delays show the fragmented approach that the government has taken to addressing the clearance problems. In November 2005, we were optimistic that the government plan for improving the clearance process prepared under the direction of OMB’s Deputy Director for Management would be a living document that would provide the strategic vision for correcting long-standing problems in the personnel security clearance process. However, OPM recently told us that the plan has not been modified in the 9 months since our November 2005 testimony, in which we labeled it an important step forward but identified numerous shortcomings that should be addressed to make it a more powerful vehicle for change. While eliminating delays in the clearance process is an important goal, the government cannot afford to achieve that goal by providing investigative and adjudicative reports that are incomplete in key areas required by federal investigative standards and adjudicative guidelines. The lack of full reciprocity of clearances is an outgrowth of agencies’ concerns that other agencies may have granted clearances based on inadequate investigations and adjudications. OMB’s Deputy Director for Management has convened an interagency committee to address this problem and has taken steps to move agencies toward greater reciprocity. The findings in this report may suggest to some security managers that there is at least some evidence to support agencies’ concerns about the risks that may come from accepting the clearances issued by other federal agencies. Readers are reminded, however, that our review and the analyses presented here looked at only one aspect of quality—completeness of reports.
We could not address whether the information contained in the investigative reports we reviewed was adequate for determining clearance eligibility and whether all 50 of the industry personnel should have been granted clearances. Such judgments are best left to fully trained, experienced adjudicators. Still, our findings do raise questions about (1) the adequacy of the procedures that OPM used to ensure quality before sending its investigative reports to its customers and (2) DISCO’s procedures for reviewing the quality of the clearance determinations made by its adjudicators when information was missing from the investigative reports or decisions were not fully documented in JPAS. Furthermore, as we pointed out in November 2005, the almost total absence of quality metrics in the governmentwide plan for improving the clearance process hinders Congress’s oversight of these important issues. Finally, the missing documentation could have longer-term negative effects, such as requiring future investigators and adjudicators to obtain the documentation missing from current reviews when it is time to update the clearances currently being issued.

To improve the timeliness of the processes used to determine whether or not industry personnel are eligible for a top secret clearance, we recommend that the Director of the Office of Management and Budget direct the Deputy Director for Management, in his oversight role over the governmentwide clearance process, to take the following actions: Direct OPM and DOD to fully measure and report all of the time that transpires from when the application is initially received by the federal government to when the clearance-eligibility determination has been provided to the customer.
Inherent in this recommendation to increase transparency is the need to provide all stakeholders (including facility security officers, federal and contract investigators, and adjudicators) information about each of their steps within the clearance phases so that each can develop goals and implement actions to minimize delays. Establish an interagency working group to identify and implement solutions for investigative and adjudicative information technology problems (such as some parts of DOD continuing to submit paper copies of the clearance application) and inefficiencies (such as the continued distribution of paper investigative reports) that have resulted in clearance delays.

To improve the completeness of the documentation for the processes used to determine whether or not industry personnel are eligible for a top secret clearance and to decrease future concerns about the reciprocal acceptance of clearances issued by other agencies, we are recommending that the Director of the Office of Management and Budget direct the Deputy Director for Management, in his oversight role of the governmentwide clearance process, to take the following actions: Require OPM and DOD to (1) submit to the Deputy Director their procedures for eliminating the deficiencies that we identified in their investigative and adjudicative documentation and (2) develop and report metrics on completeness and other measures of quality that will address the effectiveness of the new procedures. Update the government strategic plan for improving the clearance process to address, among other things, the weaknesses that we identified in the November 2005 version of the plan as well as the timeliness and incompleteness issues identified in this report, and widely distribute it so that all stakeholders can work toward the goals that they can positively impact.
Issue guidance that clarifies when, if ever, adjudicators may use incomplete investigative reports—closed pending and inadvertently incomplete cases—as the basis for granting clearance eligibility.

We received agency comments from OMB, DOD, and OPM (see apps. III, IV, and V, respectively). In addition, OMB and OPM provided separate technical comments, which we incorporated in the final report as appropriate. In his comments on our report, OMB’s Deputy Director for Management did not take exception to any of our recommendations. Among other things, he noted his agreement with our report’s conclusion that agencies must identify and implement new investigative and adjudicative solutions to improve the quality and timeliness of background investigations. The Deputy Director stated that the National Security Council’s Security Clearance Working Group had begun to explore ways to identify and implement such improvements. He also said that the quality of the investigations and adjudications is of paramount concern and that he would ask the National Security Council’s Personnel Security Working Group to determine when, if ever, an adjudicator may use incomplete investigative reports to determine whether to grant a security clearance.

Although our recommendations were not directed to DOD, the Deputy Under Secretary of Defense (Counterintelligence and Security) noted his concurrence with our recommendations. The Deputy Under Secretary also discussed the use of incomplete cases as the basis for adjudication. He maintained that when the unresolved issues appear to be of minor importance, a risk management adjudication may be prudent. After noting that patchwork fixes will not solve the fundamental problem—“the current process takes too long, costs too much, and leaves us with a product of uncertain quality”—the Deputy Under Secretary reported that DOD is working on a new process.
In her written comments, OPM’s Director stated that she fully supported the intent of our report but expressed concern that we had based our findings upon a number of inaccurate facts. We disagree. To address the Director’s concerns, we grouped them into four general categories, as discussed below.

The Director stated that a fair comparison cannot be made between PIPS (OPM’s investigative database) and JPAS (DOD’s clearance database that also includes investigative timeliness data). As our scope and methodology section makes clear, we did report information from OPM and DOD databases, but the focus of our report was not a comparison of databases. While we did present timeliness findings based on the two databases, we did not perform comparisons—a condition that would have required us to report statistics on the same population for the same time period. Instead, our draft report clearly noted when we were supplementing our findings from the DOD database with more recent statistics from OPM. We also noted that the OPM findings were governmentwide. Therefore, we are puzzled by the Director’s comment, since we supplied the additional OPM-provided statistics in our efforts to present a balanced view and reflect OPM’s statements that investigation timeliness had improved. The Director’s later statement that a fair comparison cannot be made between the data in the two systems is troubling because reliable data are a prerequisite for effective oversight. Regardless of whose data are used, the two databases should produce timeliness statistics that agree and cover the full periods that IRTPA requires to be monitored: total clearance process, investigations, and adjudications.

The Director took exception to our report’s assertion that stakeholders and decision makers are limited in their ability to address delays in the security clearance process because of incomplete timeliness data.
She stated that OPM feels stakeholders and decision makers have the most comprehensive data possible to understand and address the delays in the security clearance process. At the same time, other parts of her comments noted deficiencies in OPM’s timeliness data. For example, she noted that OPM does “not account for the timeliness of multiple submissions” [of applications], and that OPM only measures “timeliness from beginning to the point where OPM has completed all items under our direct control via the Closed Pending process.” We stand behind our assertion that OPM has incomplete timeliness data, and we believe the Director’s admissions about the limitations of the OPM data reinforce the empirical basis of our assertion. The evidence supplied in our draft report further contradicts the Director’s assertion that stakeholders and decision makers have the most comprehensive data possible. Our approach for investigating timeliness and completeness is fully described in our scope and methodology, including the specific steps that we took. For example, we sent written questions to OPM staff and inquired verbally about whether OPM tracked timeliness for certain situations, and the staff’s written and verbal answers to those questions indicated that the agency does not measure the timeliness of situations such as multiple submissions and the full period required to conduct an investigation when the investigative report is returned because of quality problems. IRTPA did not identify situations that could be excluded from mandated timeliness assessments. Therefore, we stand by our conclusion that without fully accounting for the total time needed to complete the clearance process, OMB and Congress will not be able to accurately determine whether agencies have met future IRTPA requirements.
Concerning our findings that initial clearances took 446 days and clearance updates took 545 days, the Director noted that a sample of current cases would likely show a marked improvement in consistency and would reflect the many process improvements that have been put in place since the time of transfer. Also, she indicated that some of the problems that we reported were the result of transferred staff and cases. We agree that different findings might be obtained if a more recent population were examined today. However, the population that we examined represented the most up-to-date information available when we began our timeliness analyses. With regard to the Director’s statement that some of the problems were caused by the transfer of investigative functions and personnel from DOD, OPM had 2 years to prepare for the transfer between the announced transfer agreement in February 2003 and its occurrence in February 2005. In addition, 47 of the 50 investigative reports that we reviewed were missing documentation even though OPM has quality control procedures for reviewing the reports before they are sent to DOD.

Lastly, the Director indicated that our report discounts the government’s efforts to correct clearance problems, such as the impact of IRTPA and the government’s Plan for Improving the Personnel Security Clearance Process. In addition, the Director wrote that the draft report did not address the effects of the backlog and agencies’ inaccurate projections of investigations workload. To the contrary, our draft report discussed each of these issues, and we believe the report presents a balanced assessment of programs—identifying problems, discussing ongoing efforts to correct situations, and helping the reader understand the context within which a program functions.
For example, the introduction discussed IRTPA, the development of the plan, additional actions that were coordinated through OMB’s Deputy Director for Management, and the transfer of DOD’s investigative function to OPM. Similarly, we noted in the investigation-completeness section that OPM has increased its investigative workforce in recent years. Our draft report also identified both concerns as factors that impede the speed with which OPM can deliver investigative reports. After careful consideration of the OPM Director’s concerns, we continue to believe our findings and conclusions have merit.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to interested congressional members; the Director of the Office of Management and Budget; the Secretary of Defense; and the Director of the Office of Personnel Management. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or other members have any additional questions about DOD’s personnel security program, please contact me at (202) 512-5559 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this correspondence. GAO staff who made major contributions to the correspondence are listed in appendix VI.

The scope of our work emphasized the analysis of information on top secret clearances for industry personnel. Earlier in this report, table 1 showed that all of the investigative information needed to determine eligibility for a secret or confidential clearance is also required as part of the investigative report considered when determining eligibility for a top secret clearance.
In addition, examining the timeliness and completeness of documentation for top secret clearances focused our efforts on a level of clearance where greater damage could occur through the unauthorized disclosure of classified information. Our examination of clearance information for industry personnel continued a line of research discussed in our report issued in May 2004. With about 34 percent of its 2.5 million clearances held by industry personnel who are performing contract work for the Department of Defense (DOD) and 23 other agencies, this segment of the workforce is playing an increasingly large role in national security.

To examine the timeliness of the processes used to determine whether industry personnel are eligible for a top secret clearance, we reviewed various documents, including laws and executive orders, DOD security clearance policies, Office of Personnel Management (OPM) policies, and the government plan for improving the security clearance process. These sources provided the criteria that we used in our analyses, as well as insights into possible causes for and effects of the delays in obtaining timely clearances. We also reviewed clearance-related reports issued by organizations such as GAO, DOD’s Office of Inspector General, and DOD’s Personnel Security Research Center. We interviewed headquarters policy and program officials from DOD’s Office of the Under Secretary of Defense for Intelligence and OPM and obtained and evaluated additional documentation from those officials. In addition, representatives from the organizations shown in table 4 provided additional interview and documentary evidence that we also evaluated. A major focus of our timeliness examination included our analysis of computerized data abstracted from the Joint Personnel Adjudications System (JPAS) and statistical reports on timeliness that OPM produced for DOD.
We calculated the number of days required for each case for three phases of the process and for the total process. Missing start or completion dates for a phase prevented the calculation for some cases. Also, we excluded a phase from the calculation when its recorded start date fell after its end date. As a result, the number of applicable cases varies for each calculation. The abstract was for the population of 1,685 industry personnel granted initial top secret clearances and 574 industry personnel granted top secret clearance updates by the Defense Industrial Security Clearance Office (DISCO) during January and February 2006. The application-submission and investigation phases of the clearance process for those 2,259 industry personnel were started at various times prior to the final adjudication determinations. We assessed the reliability of the JPAS data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. While we found problems with the accuracy of some of the JPAS data, we determined they were sufficiently reliable for selecting a sample of cases for our review and for calculating average days for the clearance process. DOD and OPM also provided timeliness statistics for other time periods, levels of clearances, types of personnel, and other federal agencies to provide us with a broader context to interpret the timeliness statistics that we extracted from the DISCO database abstract.

To examine the completeness of the documentation of the processes used to determine whether industry personnel are eligible for a top secret clearance, we used the sources identified above for the timeliness question: laws, executive orders, policies, reports, and the documentary and testimonial evidence provided by the organizations listed in table 4.
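As an illustration only, the phase-length rules just described (skip a case when a phase's start or completion date is missing, skip a phase whose recorded start date falls after its end date, and average over only the applicable cases) can be sketched in a few lines. The field names and sample records below are hypothetical, not actual JPAS data.

```python
from datetime import date

# Hypothetical case records; field names are illustrative, not actual JPAS fields.
cases = [
    {"submitted": date(2005, 1, 10), "investigated": date(2005, 9, 20), "adjudicated": date(2005, 11, 1)},
    {"submitted": None,              "investigated": date(2005, 8, 1),  "adjudicated": date(2005, 9, 15)},
    {"submitted": date(2005, 6, 1),  "investigated": date(2005, 5, 1),  "adjudicated": date(2005, 12, 1)},
]

PHASES = [("application", "submitted", "investigated"),
          ("investigation", "investigated", "adjudicated")]

def phase_days(case, start_field, end_field):
    """Return the phase length in days, or None if either date is missing
    or the recorded start date falls after the end date."""
    start, end = case[start_field], case[end_field]
    if start is None or end is None or start > end:
        return None
    return (end - start).days

# Average each phase over only the applicable cases, so the denominator
# varies by phase, as in the methodology described above.
for name, s, e in PHASES:
    durations = [d for c in cases if (d := phase_days(c, s, e)) is not None]
    if durations:
        print(name, round(sum(durations) / len(durations), 1))
```

Averaging only over applicable cases is why the report's phase averages rest on different case counts.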
The sources and materials provided us with an understanding of the criteria for evaluating whether prescribed information was present in or absent from investigative and adjudicative reports used in the clearance process. Members of the GAO team attended OPM’s basic special agent training course for 3 weeks to gain a greater understanding of investigative procedures and requirements and participated in the Defense Security Service Academy’s online basic adjudicator training to learn more about adjudicative procedures and requirements. Following the training, we began a multiple-step process to review and analyze the investigative and adjudicative documentation associated with DISCO determinations of clearance eligibility for industry security clearance cases. We started by randomly selecting 50 cases from the population of 1,685 initial clearance applications adjudicated by DISCO during January and February 2006. Once our sample was selected, we obtained paper copies of the completely adjudicated case files. We developed a data collection instrument that incorporated information from sources such as the federal investigative standards and adjudicative guidelines, OPM’s Investigator’s Handbook (Draft Version 5), and DOD’s Personnel Security Research Center’s Quality Rating Form—an analysis tool to help DOD adjudicators assess the quality of investigative reports used to make adjudication decisions. The staff members who developed the instrument then trained the rest of the team in its use to ensure the accuracy and consistency of data entry. We refined our instrument using feedback from DOD’s Personnel Security Research Center staff and our pretest of the instrument on cases not included in our sample of 50 cases. To ensure the accuracy of our work, a second team member independently verified information that another team member had initially coded.
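The case-selection step above amounts to a simple random sample without replacement, which can be sketched as follows. The case identifiers and fixed seed are hypothetical, chosen only so the illustrative draw is reproducible.

```python
import random

# Hypothetical population: one identifier per adjudicated initial-clearance case.
population = [f"case-{i:04d}" for i in range(1, 1686)]  # 1,685 cases

rng = random.Random(2006)           # illustrative fixed seed for reproducibility
sample = rng.sample(population, 50)  # simple random sample without replacement

assert len(sample) == 50
assert len(set(sample)) == 50  # sampling without replacement: no case drawn twice
```

Because each of the 1,685 cases has an equal chance of selection, statistics computed on the 50 sampled files can be generalized to the population with a quantifiable margin of error.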
As part of each review, we examined each report of investigation to ensure that all of the investigative requirements had been met (e.g., neighborhood reference checks) and to determine if issues that were raised as part of the investigation had been resolved by OPM investigators. After a thorough review of the investigative report and associated materials, we reviewed the JPAS adjudicative report. The JPAS report showed the final adjudicative decision, including any guidelines that were applied and any mitigating information. Our assessment of each case was entered into an electronic database and analyzed to determine the completeness of the files and to identify areas of deficiency. In addition to obtaining statistical findings, we identified 8 cases that best illustrated several types of deficiencies found in our reviews and statistical analyses. We then met with investigations and adjudications experts from the Defense Security Service Academy to discuss several cases. We also discussed our findings for each of the 8 cases with investigative experts from OPM’s Quality Management group and adjudication experts from DISCO. By discussing the issues contained in each case with OPM and DOD experts, we were able to learn more about the causes of the incomplete documentation and confirm the accuracy of our observations on 16 percent of our sampled cases. We performed our work from September 2005 through August 2006 in accordance with generally accepted government auditing standards.

In making determinations of eligibility for security clearances, the federal guidelines require adjudicators to consider (1) guidelines covering 13 specific areas, (2) adverse conditions or conduct that could raise a security concern and factors that might mitigate (alleviate) the condition for each guideline, and (3) general factors related to the whole person.
First, the guidelines state that clearance decisions require a common-sense determination of eligibility for access to classified information based upon careful consideration of the following 13 areas: allegiance to the United States; foreign influence, such as having a family member who is a citizen of a foreign country; foreign preference, such as performing military service for a foreign country; sexual behavior; personal conduct, such as deliberately concealing or falsifying relevant facts when completing a security questionnaire; financial considerations; alcohol consumption; drug involvement; emotional, mental, and personality disorders; criminal conduct; security violations; outside activities, such as providing service to or being employed by a foreign country; and misuse of information technology systems.

Second, for each of these 13 areas, the guidelines specify (1) numerous significant adverse conditions or conduct that could raise a security concern that may disqualify an individual from obtaining a security clearance and (2) mitigating factors that could allay those security concerns, even when serious, and permit granting a clearance. For example, the financial considerations guideline states that individuals could be denied security clearances based on having a history of not meeting financial obligations. However, this adverse condition could be set aside (referred to as mitigated) if one or more of the following factors were present: the financial condition was not recent, resulted from factors largely beyond the person’s control (e.g., loss of employment), or was addressed through counseling.
Third, the adjudicator should evaluate the relevance of an individual’s overall conduct by considering the following general factors: the nature, extent, and seriousness of the conduct; the circumstances surrounding the conduct, to include knowledgeable participation; the frequency and recency of the conduct; the individual’s age and maturity at the time of the conduct; the voluntariness of participation; the presence or absence of rehabilitation and other pertinent behavioral changes; the motivation for the conduct; the potential for pressure, coercion, exploitation, or duress; and the likelihood of continuation or recurrence. When the personnel security investigation uncovers no adverse security conditions, the adjudicator’s task is fairly straightforward because there is no security condition to mitigate.

In addition to the contact above, Jack E. Edwards, Assistant Director; Jim D. Ashley; Jerome A. Brown; Kurt A. Burgeson; Susan C. Ditto; David S. Epstein; Cindy K. Gilbert; Cynthia L. Grant; Sara G. Hackley; James P. Klein; Ron La Due Lake; Kenneth E. Patton; and Jennifer L. Young made key contributions to this report.

DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. Managing Sensitive Information: DOE and DOD Could Improve Their Policies and Oversight. GAO-06-531T. Washington, D.C.: March 14, 2006. GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Managing Sensitive Information: Departments of Energy and Defense Policies and Oversight Could Be Improved. GAO-06-369. Washington, D.C.: March 7, 2006.
Questions for the Record Related to DOD’s Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. Questions for the Record Related to DOD’s Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. GAO’s 2005 High-Risk Update. GAO-05-350T. Washington, D.C.: February 17, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. Security Clearances: FBI Has Enhanced Its Process for State and Local Law Enforcement Officials. GAO-04-596. Washington, D.C.: April 30, 2004. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Defense Acquisitions: Steps Needed to Ensure Interoperability of Systems That Process Intelligence Data. GAO-03-329. Washington, D.C.: March 31, 2003. Managing for Results: Agency Progress in Linking Performance Plans With Budgets and Financial Statements. GAO-02-236. Washington, D.C.: January 4, 2002. Central Intelligence Agency: Observations on GAO Access to Information on CIA Programs and Activities. GAO-01-975T. Washington, D.C.: July 18, 2001. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Security Clearances. GAO-01-465. Washington, D.C.: April 18, 2001. DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigations Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000. DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215.
Washington, D.C.: August 24, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/T-GGD/OSI-00-177. Washington, D.C.: July 27, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/GGD/OSI-00-139. Washington, D.C.: July 11, 2000. Computer Security: FAA Is Addressing Personnel Weaknesses, But Further Action Is Required. GAO/AIMD-00-169. Washington, D.C.: May 31, 2000. DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999. Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999. Department of Energy: Key Factors Underlying Security Problems at DOE Facilities. GAO/T-RCED-99-159. Washington, D.C.: April 20, 1999. Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets. GAO/AIMD/GGD-99-67. Washington, D.C.: April 12, 1999. Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999. Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998. Inspectors General: Joint Investigation of Personnel Actions Regarding a Former Defense Employee. GAO/AIMD/OSI-97-81R. Washington, D.C.: July 10, 1997. Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996. Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996. 
Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995. Privatizing OPM Investigations: Implementation Issues. GAO/T-GGD-95-186. Washington, D.C.: June 15, 1995. Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995. Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995. Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995. Managing DOE: Further Review Needed of Suspensions of Security Clearances for Minority Employees. GAO/RCED-95-15. Washington, D.C.: December 8, 1994. Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994. Classified Information: Costs of Protection Are Integrated With Other Security Costs. GAO/NSIAD-94-55. Washington, D.C.: October 20, 1993. Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993. Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993. Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993. DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993. Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992. Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990. Weaknesses in NRC’s Security Clearance Program. GAO/T-RCED-89-14.
Washington, D.C.: March 15, 1989. Nuclear Regulation: NRC’s Security Clearance Program Can Be Strengthened. GAO/RCED-89-41. Washington, D.C.: December 20, 1988. Nuclear Security: DOE Actions to Improve the Personnel Clearance Program. GAO/RCED-89-34. Washington, D.C.: November 9, 1988. Nuclear Security: DOE Needs a More Accurate and Efficient Security Clearance Program. GAO/RCED-88-28. Washington, D.C.: December 29, 1987. National Security: DOD Clearance Reduction and Related Issues. GAO/NSIAD-87-170BR. Washington, D.C.: September 18, 1987. Oil Reserves: Proposed DOE Legislation for Firearm and Arrest Authority Has Merit. GAO/RCED-87-178. Washington, D.C.: August 11, 1987. Embassy Blueprints: Controlling Blueprints and Selecting Contractors for Construction Abroad. GAO/NSIAD-87-83. Washington, D.C.: April 14, 1987. Security Clearance Reinvestigations of Employees Has Not Been Timely at the Department of Energy. GAO/T-RCED-87-14. Washington, D.C.: April 9, 1987. Improvements Needed in the Government’s Personnel Security Clearance Program. Washington, D.C.: April 16, 1985. Need for Central Adjudication Facility for Security Clearances for Navy Personnel. GAO/GGD-83-66. Washington, D.C.: May 18, 1983. Effect of National Security Decision Directive 84, Safeguarding National Security Information. GAO/NSIAD-84-26. Washington, D.C.: October 18, 1983. Faster Processing of DOD Personnel Security Clearances Could Avoid Millions in Losses. GAO/GGD-81-105. Washington, D.C.: September 15, 1981. Lack of Action on Proposals To Resolve Longstanding Problems in Investigations of Federal Employees. FPCD-79-92. Washington, D.C.: September 25, 1979. Costs of Federal Personnel Security Investigations Could and Should Be Cut. FPCD-79-79. Washington, D.C.: August 31, 1979. Proposals to Resolve Longstanding Problems in Investigations of Federal Employees. FPCD-77-64. Washington, D.C.: December 16, 1977. Personnel Security Investigations: Inconsistent Standards and Procedures. B-132376. 
Washington, D.C.: December 2, 1974.

The damage that unauthorized disclosure of classified information can cause to national security necessitates the prompt and careful consideration of who is granted a security clearance. However, long-standing delays and other problems with DOD's clearance program led GAO to designate it a high-risk area in January 2005. DOD transferred its investigations functions to the Office of Personnel Management (OPM) in February 2005. The Office of Management and Budget's (OMB) Deputy Director for Management is coordinating governmentwide efforts to improve the clearance process. Congress asked GAO to examine the clearance process for industry personnel. This report addresses the timeliness of the process and completeness of documentation used to determine the eligibility of industry personnel for top secret clearances. To assess timeliness, GAO examined 2,259 cases of personnel granted top secret eligibility in January and February 2006. For the completeness review, GAO compared documentation in 50 randomly sampled initial clearances against federal standards.

GAO's analysis of timeliness data showed that industry personnel contracted to work for the federal government waited more than one year on average to receive top secret clearances, longer than OPM-produced statistics would suggest. GAO's analysis of 2,259 cases in its population showed the process took an average of 446 days for initial clearances and 545 days for clearance updates. While OMB has a goal for the application-submission phase of the process to take 14 days or less, it took an average of 111 days. In addition, GAO's analyses showed that OPM used an average of 286 days to complete initial investigations for top secret clearances, well in excess of the 180-day goal specified in the plan that OMB and others developed for improving the clearance process.
Finally, the average time for adjudication (determination of clearance eligibility) was 39 days, compared to the 30-day requirement that starts in December 2006. An inexperienced investigative workforce, not fully using technology, and other causes underlie these delays. Delays may increase costs for contracts and risks to national security. In addition, statistics from OPM, the agency with day-to-day responsibility for tracking investigations and adjudications, underrepresent the time used in the process. For example, the measurement of time does not start immediately upon the applicant's submission of a request for clearance. Not fully accounting for all the time used in the process hinders congressional oversight of the efforts to address the delays.

OPM provided incomplete investigative reports to DOD, and DOD personnel who review the reports to determine a person's eligibility to hold a clearance (adjudicators) granted eligibility for industry personnel whose investigative reports contained unresolved issues, such as unexplained affluence and potential foreign influence. In its review of 50 investigative reports for initial clearances, GAO found that almost all (47 of 50) cases were missing documentation required by federal investigative standards. At least half of the reports did not contain the required documentation in three investigative areas: residence, employment, or education. Moreover, federal standards indicate expansion of investigations may be necessary to resolve issues, but GAO found at least one unresolved issue in 27 of the reports. GAO also found that the DOD adjudicators granted top secret clearance eligibility for all 27 industry personnel whose investigative reports contained unresolved issues without requesting additional information or documenting that the information was missing in the adjudicative report.
In its November 2005 assessment of the government plan for improving the clearance process, GAO raised concerns about the limited attention devoted to assessing quality in the clearance process, but the plan has not been revised to address the shortcomings GAO identified. The use of incomplete investigations and adjudications in granting top secret clearance eligibility increases the risk of unauthorized disclosure of classified information. Also, it could negatively affect efforts to promote reciprocity (an agency's acceptance of a clearance issued by another agency) being developed by an interagency working group headed by OMB's Deputy Director.
According to our Standards for Internal Control in the Federal Government, transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. Although review of transactions by persons in authority is the principal means of assuring that transactions are valid, we found that the review and approval process for purchase card purchases was inadequate in all the agencies reviewed. At the Department of Education, we found that 10 of its 14 offices did not require cardholders to obtain authorization prior to making some or all purchases, although Education’s policy required that all requests to purchase items over $1,000 be made in writing to the applicable department executive officer. We also found that approving officials did not use monitoring reports that were available from Bank of America to identify unusual or unauthorized purchases. Additionally, Education’s 1990 purchase card policy, which was in effect during the time of our review (May 1998 through September 2000), stated that an approving official was to ensure that all purchase card transactions were for authorized Education purchases and in accordance with departmental and other federal regulations. The approving official signified that a cardholder’s purchases were appropriate by reviewing and signing monthly statements. To test the effectiveness of Education’s approving officials’ review, we analyzed 5 months of cardholder statements and found that 37 percent of the 903 monthly cardholder statements we reviewed were not approved by the appropriate official. The unapproved statements totaled about $1.8 million. Further, we found that Education employees purchased computers using their purchase cards, which was a violation of Education’s policy prohibiting the use of purchase cards for this purpose. 
As I will discuss later, several of the computers that were purchased with purchase cards were not entered in property records, and we could not locate them. If approving officials had been conducting a proper review of monthly statements, the computer purchases could have been identified and the practice halted, perhaps eliminating this computer accountability problem. Education implemented a new approval process during our review. We assessed this new process and found that while approving officials were generally reviewing cardholder statements, those officials were not ensuring that adequate supporting documentation existed for all purchases. Weaknesses in the approval process also existed at the two Navy units we reviewed. During our initial review, approving officials in these two units told us that they did not review support for transactions before certifying monthly statements for payment because (1) they did not have time and (2) Navy policy did not specifically require that approving officials review support. At one of the Navy units, one approving official was responsible for certifying summary billing statements covering an average of over 700 monthly statements for 1,153 cardholders. Further, Navy’s policy allows the approving official to presume that all transactions are proper unless notified to the contrary by the cardholder. The policy appears to improperly assign certifying officer accountability to cardholders and is inconsistent with Department of Defense regulations, which state that certifying officers are responsible for assuring that payments are proper. During our follow-up review, we found that throughout fiscal year 2001, approving officials in the two units still did not properly review and certify the monthly purchase card statements for payment. 
Although the Department of Defense Purchase Card Program Management Office issued new guidance in July 2001 that would reduce the number of cardholders for which each approving official was responsible, neither of the two units met the suggested ratio of five to seven cardholders to one approving official until well after the start of fiscal year 2002. Further, the Department of Defense agreed with our recommendation that Navy revise its policy to assure that approving officials review the monthly statements and the supporting documentation prior to certifying the statements for payment. However, for the last quarter of fiscal year 2001, one of the Navy units continued to inappropriately certify purchase card statements for payment. The other unit issued local guidance that partially implements our recommendation. IGs at the Departments of Agriculture, the Interior, and Transportation also identified weaknesses in the review and approval processes at these agencies. For example, Agriculture’s IG reported that the department has not effectively implemented an oversight tool in its Purchase Card Management System (PCMS), the system that processes purchase card transactions. This tool is an alert system that monitors the database for pre-established conditions that may indicate potential abuse by cardholders. Responsible officials are to periodically access their alert messages and review the details for questionable transactions. These reviewing officials should contact cardholders, if necessary, so that cardholders can verify any discrepancies or provide any additional information in order to resolve individual alert messages. In order to close out alert messages, reviewers must change the message status to “read” and explain any necessary details to resolve the alerts. 
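An alert feature of this kind works by testing each transaction against pre-established conditions and flagging any that trip a rule for reviewer follow-up. The following is a generic sketch of such a rule-based check; the rules, thresholds, and field names are invented for illustration and are not PCMS's actual conditions.

```python
# Illustrative pre-established conditions; each maps an alert name to a test.
# These rules and fields are hypothetical, not PCMS's actual alert criteria.
ALERT_RULES = {
    "near_limit_purchase": lambda t: t["amount"] > 2400,     # close to a purchase ceiling
    "weekend_purchase":    lambda t: t["weekday"] in (5, 6),  # Saturday or Sunday
}

def raise_alerts(transaction):
    """Return the names of all rules the transaction trips. In a system like
    the one described, each alert would then await the reviewer's 'read'
    status and an explanatory response before being closed out."""
    return [name for name, rule in ALERT_RULES.items() if rule(transaction)]

print(raise_alerts({"amount": 2450, "weekday": 6}))  # trips both rules
print(raise_alerts({"amount": 100, "weekday": 2}))   # trips neither
```

The control only works end to end if reviewers actually read and resolve the flagged items, which is the gap the IG identified.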
According to Agriculture’s IG, only about 29,600 out of 50,500 alerts in the database during fiscal years 1999 and 2000 had been read as of January 9, 2001, and only about 6,100 of the alerts that were read contained responses. The inconsistent use of this oversight tool means that Agriculture management has reduced assurance that errors and abuse are promptly detected and that cardholders are complying with purchase card and procurement regulations. Interior’s IG reported that it reviewed the work of 53 reviewing officials and found that 42 of them performed inadequate reviews. The IG defined an adequate review as one in which the reviewing official, on a monthly basis, reconciled invoices and receipts to the purchase card statements to ensure that all transactions were legitimate and necessary. The IG found that several reviewing officials signed off on monthly statements indicating completed reviews where supporting documentation was not available. Another common internal control weakness we identified was lack of or inadequate training related to the use of purchase cards. Our Standards for Internal Control in the Federal Government emphasize that effective management of an organization’s workforce—its human capital—is essential to achieving results and is an important part of internal control. Training is key to ensuring that the workforce has the skills necessary to achieve organizational goals. Lack of or inadequate training contributed to the weak control environments at several agencies. Navy’s policies required that all cardholders and approving officials must receive initial purchase card training and refresher training every 2 years. We determined that the two Navy units lacked documentation to demonstrate that all cardholders and approving officials had received the required training. 
We tested $68 million of fiscal year 2000 purchase card transactions at the two Navy units and estimated that at least $17.7 million of transactions were made by cardholders for whom there was no documented evidence they had received either the required initial training or refresher training on purchase card policies and procedures. Although we found during our follow-up work that the two Navy units had taken steps to ensure cardholders receive training and to document the training, many cardholders at one of the units still had not completed the initial training or the required refresher training. Similarly, at Education, we found that although the policy required each cardholder and approving official to receive training on their respective responsibilities, several cardholders and at least one approving official were not trained. Interior’s IG also reported a lack of training related to the purchase card program. Specifically, the IG reported that although Interior provided training to individual cardholders, it did not design or provide training to reviewing officials. According to the IG, several reviewing officials said that they did not know how to conduct a review of purchase card transactions, nor did they understand how and why to review supporting documentation. As previously mentioned, the IG found that many reviewing officials were not performing adequate reviews. Our Standards for Internal Control in the Federal Government state that internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Internal control monitoring should assess the quality of performance over time and ensure that findings of audits and other reviews are promptly resolved. Program and operational managers should monitor the effectiveness of control activities as part of their regular duties.
At the two Navy units we reviewed, we found that management had not established an effective monitoring and internal audit function for the purchase card program. The policies and procedures did not require that the results of internal reviews be documented or that corrective actions be monitored to help ensure they are effectively implemented. The NAVSUP Instruction calls for semiannual reviews of purchase card programs, including adherence to internal operating procedures, applicable training requirements, micro-purchase procedures, receipt and acceptance procedures, and statement certification and prompt payment procedures. These reviews are to serve as a basis for initiating appropriate action to improve the program and correct problem areas. Our analysis of fiscal year 2000 agency program coordinator reviews at one of the Navy units showed that the reviews identified problems with about 42 percent of the monthly cardholder statements that were reviewed. The problems identified were consistent with the control weaknesses we found. Unit management considered the findings but directed that corrective actions not be implemented because of complaints about the administrative burden associated with the procedural changes that would be needed to address the review findings. These reviews generally resulted in the reviewer counseling the cardholders or in some instances, recommending that cardholders attend purchase card training. As a result, the agency program coordinator had not used the reviews to make systematic improvements in the program. During our follow-up work, we noted that this unit had recently made some efforts to implement new policies directed at improving internal review and oversight activities. However, these efforts are not yet complete. At the time of our review, Education did not have a monitoring system in place for purchase card activity. 
However, in December 2001, the department issued new policies and procedures that, among other things, establish a quarterly quality review of a sample of purchase card transactions to ensure compliance with key aspects of the department’s policy. Transportation’s IG reported that the Federal Aviation Administration (FAA) had not performed required internal follow-up reviews on purchase card usage since 1998. A follow-up review is to consist of an independent official (other than the approving official) reviewing a sample of purchase card transactions to determine whether purchases were authorized and that cardholders and approving officials followed policies and procedures. The types of weaknesses that I have just described create an environment where improper purchases could be made with little risk of detection. I will now provide a few examples of how employees used their purchase cards to make fraudulent, improper, abusive, and questionable purchases. We also found that property purchased with the purchase cards was not always recorded in agencies’ property records, which could have contributed to missing or stolen property. In a number of cases, the significant control weaknesses that we and the IGs identified resulted in or contributed to fraudulent, improper, abusive, and questionable purchases. We considered fraudulent purchases to be those that were unauthorized and intended for personal use. Improper purchases included those for government use that were not, or did not appear to be, for a purpose permitted by law or regulation. We defined abusive or questionable transactions as those that, while authorized, were for items purchased at an excessive cost, for a questionable government need, or both. Questionable purchases also include those for which there was insufficient documentation to determine whether they were valid. 
For example, at Education, we found an instance in which a cardholder made several fraudulent purchases from two Internet sites for pornographic services. The name of one of the sites—Slave Labor Productions.com—should have caused suspicion when it appeared on the employee’s monthly statement. We obtained the statements containing the charges and noted that they contained handwritten notes next to the pornography charges indicating that these were charges for transparencies and other nondescript items. According to the approving official, he was not aware of the cardholder’s day-to-day responsibilities, and therefore, could not properly review the statements. The approving official stated that the primary focus of his review was to ensure there was enough money available in that particular appropriation to pay the bill. As a result of investigations related to these pornography purchases, Education management issued a termination letter, prompting the employee to resign. We also identified questionable charges by an Education employee totaling $35,760 over several years for herself and a coworker to attend college. Some of the classes the employees took were apparently prerequisites to obtain a liberal arts degree, but were unrelated to Education’s mission. The classes included biology, music, and theology, and represented $11,700 of the $35,760. These classes, costing $11,700, were improper charges. The Government Employees Training Act, 5 U.S.C. 4103 and 4107, requires that training be related to an employee’s job and prohibits expenditures to obtain a college degree unless necessitated by retention or recruitment needs, which was not the case here. We also identified as questionable more than $152,000 in purchases for which Education could not provide any support and did not know specifically what was purchased, why it was purchased, or whether these purchases were appropriate.
The breakdown of controls at the two Navy units we reviewed made it difficult to detect and prevent fraudulent purchases made by cardholders. We identified over $11,000 of fraudulent purchases including gifts, gift certificates, and clothing from Macy’s West, Nordstrom, Mervins, Lees Men’s Wear, and Footlocker, and a computer and related equipment from Circuit City. During our follow-up work, we also identified a number of improper, questionable, and abusive purchases at the Navy units, including food for employees costing $8,500; rentals of luxury cars costing $7,028; designer and high-cost leather briefcases, totes, portfolios, day planners, palm pilot cases, wallets, and purses from Louis Vuitton and Franklin Covey costing $33,054; and questionable contractor payments totaling $164,143. The designer and high-cost leather goods from Franklin Covey included leather purses costing up to $195 each and portfolios costing up to $135 each. Many of these purchases were of a questionable government need and should have been paid for by the individual. To the extent the day planners and calendar refills were proper government purchases, they were at an excessive cost and should have been purchased from certified nonprofit agencies under a program that is intended to provide employment opportunities for thousands of people with disabilities. Circumventing the requirements to buy from these nonprofit agencies and purchasing these items from commercial vendors is not only an abuse and waste of taxpayer dollars, but shows particularly poor judgment and serious internal control weaknesses. The contractor payments in question were 75 purchase card transactions with a telecommunications contractor that appeared to be advance payments for electrical engineering services. Paying for goods and services before the government has received them (with limited exceptions) is prohibited by law and Navy purchase card procedures. 
Navy employees told us the purchase card was used to expedite the procurement of goods and services from the contractor because the preparation, approval, and issuance of a delivery order was too time-consuming in certain circumstances. For all 75 transactions, we found that the contractor’s estimated costs were almost always equal to or close to the $2,500 micro-purchase threshold. Because we found no documentation of independent receipt and acceptance of the services provided or any documentation that the work for these charges was performed, these charges are potentially fraudulent, and we have referred them to our Office of Special Investigations for further investigation. IGs also identified fraudulent purchases. The Transportation Department’s IG reported on two cases involving employees’ fraudulent use of their purchase cards. In one case, a cardholder used a government purchase card to buy computer software and other items costing over $80,000 for a personal business. In the other case, a cardholder made numerous unauthorized charges totaling more than $58,000, including a home stereo system and a new engine for his car. Additionally, Interior’s IG identified fraudulent purchases such as payments for monthly rent and phone bills, household furnishings, jewelry, and repairs to personal vehicles. One type of improper purchase we identified is the “split purchase,” which we defined as purchases made on the same day from the same vendor that appear to circumvent single purchase limits. The Federal Acquisition Regulation prohibits splitting a transaction into more than one segment to avoid the requirement to obtain competitive bids for purchases over the $2,500 micro-purchase threshold or to avoid other established credit limits. For example, one cardholder from Education purchased two computers from the same vendor at essentially the same time.
Because the total cost of these computers exceeded the cardholder’s $2,500 single purchase limit, the total of $4,184.90 was split into two purchases of $2,092.45 each. We found 27 additional purchases totaling almost $120,000 where Education employees made multiple purchases from a vendor on the same day. Similarly, our analysis of purchase card payments at the two Navy units identified a number of purchases from the same vendor on the same day. To determine whether these were, in fact, split purchases, we obtained and analyzed supporting documentation for 40 fiscal year 2000 purchases at the two Navy units. We found that in many instances, cardholders made multiple purchases from the same vendor within a few minutes or a few hours for items such as computers, computer-related equipment, and software that involved the same, sequential, or nearly sequential purchase order and vendor invoice numbers. Based on our analysis, we concluded that 32 of the 40 purchases were split into two or more transactions to avoid the micro-purchase threshold. During our follow-up work, we found that 23 of 50 fiscal year 2001 purchases by the two Navy units were split into two or more transactions to avoid the micro-purchase threshold. Split purchases were also identified by the IGs at the Departments of Agriculture and Transportation. For example, Agriculture’s IG reported that it investigated two employees who intentionally made multiple purchases of computer equipment with the same merchant in amounts exceeding their established single purchase limits. During 3 different months, these employees purchased computer systems totaling $121,123 by structuring their individual purchases of components in amounts less than the individual single purchase limit of $2,500. In September 1999, a computer procurement totaling $47,475 was made using 20 individual purchase card transactions during a 4-day period.
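The split-purchase pattern we looked for can be expressed as a simple grouping test: same cardholder, same vendor, same day, with a combined total over the micro-purchase threshold even though each individual charge stays under it. A minimal sketch of that test follows; the field names are assumptions, not the layout of any agency's actual payment data.

```python
from collections import defaultdict

MICRO_PURCHASE_THRESHOLD = 2500.00

def flag_potential_splits(transactions):
    """Group charges by (cardholder, vendor, date) and flag groups whose
    combined total exceeds the micro-purchase threshold while each
    individual charge stays under it -- the signature of a split purchase."""
    groups = defaultdict(list)
    for txn in transactions:
        key = (txn["cardholder"], txn["vendor"], txn["date"])
        groups[key].append(txn["amount"])
    flagged = []
    for key, amounts in groups.items():
        if (len(amounts) > 1
                and sum(amounts) > MICRO_PURCHASE_THRESHOLD
                and all(a <= MICRO_PURCHASE_THRESHOLD for a in amounts)):
            flagged.append((key, sum(amounts)))
    return flagged
```

The Education example above fits this pattern exactly: two same-day charges of $2,092.45 to the same vendor, totaling $4,184.90, each individually under the $2,500 limit.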
Other computer purchases were made in November 1999 involving 15 purchase card transactions over a 3-day period totaling $36,418 and in June 2000 involving 15 individual transactions over a 5-day period totaling $37,230. The IG reported that these procurements should have been made by a warranted contracting officer. Similarly, Transportation’s IG reported that it identified 13 transactions totaling about $106,000 that violated the department’s policies against splitting purchases. Another problem we and the IGs identified is that some property purchased with purchase cards was not entered in agency property records. According to our Standards for Internal Control in the Federal Government, an agency must establish physical control to secure and safeguard vulnerable assets. Such assets should be periodically counted and compared to control records. Recording the items purchased in property records is an important step to ensure accountability and financial control over these assets and, along with periodic inventory counts, to prevent theft or improper use of government property. At Education and the Navy units, we identified numerous purchases of computers and computer-related equipment, cameras, and palm pilots that were not recorded in property records and for which the agencies could not provide conclusive evidence that the items were in possession of the federal government. For example, the lack of controls at Education contributed to the loss of 179 pieces of computer equipment costing over $200,000. We compared serial numbers obtained from a vendor where the computers were purchased to those in the department’s asset management system and found that 384 pieces of computer equipment were not listed in the property records. We conducted an unannounced inventory to determine whether the equipment was actually missing or inadvertently omitted from the property records. We found 205 pieces of equipment. 
Education officials have been unable to locate the remaining 179 pieces of missing equipment. They surmised that some of these items may have been surplused; however, there is no documentation to determine whether this assertion is valid. At the Navy units, our initial analysis showed that the Navy did not record 46 of 65 sampled items in its property records. When we asked to inspect these items, the Navy units could not provide conclusive evidence that 31 of them—including laptop computers, palm pilots, and digital cameras—were in the possession of the government. For example, for 4 items, the serial numbers of the property we were shown did not match purchase or manufacturer documentation. In addition, we were told that 5 items were at other Navy locations throughout the world. Navy officials were unable to conclusively demonstrate the existence and location of these 5 items. We were unable to conclude whether any of these 31 pieces of government property were stolen, lost, or being misused. We and the IGs have made recommendations to the various agencies that, if fully implemented, will help improve internal controls over the purchase card programs so that fraudulent and improper payments can be prevented or detected in the future and vulnerable assets can be better protected. These recommendations include (1) emphasizing policies on appropriate use of the purchase card and cardholder and approving official responsibilities, (2) ensuring that approving officials are trained on how to perform their responsibilities, and (3) ensuring that approving officials review purchases and their supporting documentation before certifying the statements for payment. Agencies have taken actions to respond to the recommendations made. However, during our follow-up work at Education and the Navy units, we found that weaknesses remain that continue to leave them vulnerable to fraudulent and improper payments and lost assets.
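The serial-number comparison described earlier is, in essence, a set difference between what the vendor shipped and what the asset management system records, followed by a physical inventory of the discrepancies. A minimal sketch, using illustrative serial numbers rather than any actual Education or Navy records:

```python
def reconcile_property(vendor_serials, property_record_serials, inventory_found):
    """Compare vendor shipping records against the asset management system,
    then net out items located during an unannounced physical inventory.
    Returns (items never recorded, items still unaccounted for)."""
    unrecorded = set(vendor_serials) - set(property_record_serials)
    still_missing = unrecorded - set(inventory_found)
    return unrecorded, still_missing
```

In the Education case, this two-step process surfaced 384 pieces of equipment absent from the property records, of which 205 were located during the inventory, leaving 179 unaccounted for.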
Management’s ongoing commitment to improving internal controls is necessary to minimize this vulnerability.
As I mentioned earlier, as has been the case for the previous 6 fiscal years, the federal government continues to have a significant number of material weaknesses related to financial systems, fundamental recordkeeping and financial reporting, and incomplete documentation. Several of these material weaknesses (referred to hereafter as material deficiencies) resulted in conditions that continued to prevent us from forming and expressing an opinion on the U.S. government’s consolidated financial statements for the fiscal years ended September 30, 2003 and 2002. There may also be additional issues that could affect the consolidated financial statements that have not been identified. Major challenges include the federal government’s inability to properly account for and report property, plant, and equipment and inventories and related property, primarily at the Department of Defense (DOD); reasonably estimate or adequately support amounts reported for certain liabilities, such as environmental and disposal liabilities and related costs at DOD, and ensure complete and proper reporting for commitments and contingencies; support major portions of the total net cost of government operations, most notably related to DOD, and ensure that all disbursements are properly recorded; fully account for and reconcile intragovernmental activity and balances; demonstrate how net outlay amounts reported in the consolidated financial statements were related to net outlay amounts reported in the underlying federal agencies’ financial statements; and effectively prepare the federal government’s financial statements, including ensuring that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP. 
In addition to these material deficiencies, we identified four other material weaknesses in internal control related to loans receivable and loan guarantee liabilities, improper payments, information security, and tax collection activities. The material weaknesses identified by our work are discussed in more detail in appendix III. The ability to produce the data needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers and the Congress has been a long-standing challenge at most federal agencies. The results of the fiscal year 2003 assessments performed by agency inspectors general or their contract auditors under FFMIA show that these problems continue to plague the financial management systems used by most of the CFO Act agencies. While the problems are much more severe at some agencies than at others, their nature and severity indicate that overall, management at most CFO Act agencies lacks the full range of information needed for accountability, performance reporting, and decision making. These problems include nonintegrated financial systems, lack of accurate and timely recording of data, inadequate reconciliation procedures, and noncompliance with accounting standards and the U.S. Government Standard General Ledger (SGL). Agencies’ inability to meet the federal financial management systems requirements continues to be the major barrier to achieving compliance with FFMIA. Under FFMIA, CFO Act agency auditors are required to report, as part of the agencies’ financial statement audits, whether agencies’ financial management systems substantially comply with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. As shown in figure 2, auditors most frequently reported instances of noncompliance with federal financial management systems requirements. 
These instances of noncompliance involved not only core financial systems, but also administrative and programmatic systems. For fiscal year 2003, auditors for 17 of the 23 CFO Act agencies reported that the agencies’ financial management systems did not comply substantially with one or more of FFMIA’s three requirements. For the remaining 6 CFO Act agencies, auditors provided negative assurance, meaning that nothing came to their attention indicating that the agencies’ financial management systems did not substantially meet FFMIA requirements. The auditors for these 6 agencies did not definitively state whether the agencies’ systems substantially complied with FFMIA requirements, as is required under the statute. DHS is not subject to the requirements of the CFO Act and, consequently, is not required to comply with FFMIA. Accordingly, DHS’s auditors did not report on DHS’s compliance with FFMIA. However, the auditors identified and reported deficiencies that related to the aforementioned three requirements of FFMIA. Federal agencies have recognized the seriousness of their financial systems weaknesses and have efforts under way to implement or upgrade their financial systems to alleviate long-standing problems. We recognize that it will take time, investment, and sustained emphasis to improve agencies’ underlying financial management systems. As I mentioned earlier, for the past 7 fiscal years, the federal government has been required to prepare, and have audited, consolidated financial statements. Successfully meeting this requirement is tightly linked to the requirements for the CFO Act agencies to also have audited financial statements. This has stimulated extensive cooperative efforts and considerable attention by agency chief financial officers, inspectors general, Treasury and OMB officials, and GAO. 
With the benefit of the past 7 years’ experience by the federal government in having the required financial statements subjected to audit, more intensified attention will be needed on the most serious obstacles to achieving an opinion on the U.S. government’s consolidated financial statements. Three major impediments to an opinion on the consolidated financial statements are (1) serious financial management problems at DOD, (2) the federal government’s inability to fully account for and reconcile transactions between federal government entities, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. Essential to achieving an opinion on the consolidated financial statements is resolution of the serious financial management problems at DOD, which we have designated as high risk since 1995. In accordance with section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD reported that for fiscal year 2003, it was not able to provide adequate evidence supporting material amounts in its financial statements. DOD stated that it is unable to comply with applicable financial reporting requirements for (1) property, plant, and equipment (PP&E); (2) inventory and operating materials and supplies; (3) environmental liabilities; (4) intragovernmental eliminations and related accounting adjustments; (5) disbursement activity; and (6) cost accounting by responsibility segment. Although DOD represented that the military retirement health care liability data had improved for fiscal year 2003, the cost of direct health care provided by DOD-managed military treatment facilities was a significant amount of DOD’s total recorded health care liability and was based on estimates for which adequate support was not available. 
Overhauling DOD’s financial management operations represents a challenge that goes far beyond financial accounting to the very fiber of DOD’s range of business operations, management information systems, and culture. As I have reported in past years, DOD’s financial management problems are pervasive, complex, long-standing, and deeply rooted in virtually all business operations throughout the department. To date, none of the military services or major DOD components has passed the test of an independent financial audit because of pervasive weaknesses in financial management systems, operations, and controls. DOD has been up front about the seriousness of these problems and the need to transform the way it does business. To address these problems, DOD has taken several positive steps in many key areas. For example, the Secretary of Defense has included improving DOD’s financial management as one of his top 10 priorities, and the department has taken a number of actions under its Business Management Modernization Program, including development in April 2003 of an initial business enterprise architecture to guide operational and technological changes. DOD is currently working to refine and implement that architecture and expects to issue new versions of it during 2004. DOD reports that it is also developing detailed financial improvement plans intended to provide disciplined leadership, identify corrective actions, implement solutions, and result in a favorable audit opinion on the fiscal year 2007 DOD-wide financial statements. But DOD still has a long way to go, and top leadership must continue to stress the importance of achieving lasting improvement that truly transforms the department’s business systems and operations. Only through major transformation, which will take time and sustained leadership from top management, will DOD be able to meet the mandate of the CFO Act and achieve the President’s Management Agenda goal of improved financial performance. 
OMB and Treasury require the CFOs of 35 executive departments and agencies, including the 23 CFO Act agencies, to reconcile selected intragovernmental activity and balances with their “trading partners” and to report to Treasury, the agency’s inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts. A substantial number of the agencies continue to be unable to fully perform reconciliations of intragovernmental activity and balances with their trading partners, citing reasons such as (1) trading partners not providing needed data; (2) limitations and incompatibility of agency and trading partner information systems; and (3) lack of human resources. Amounts reported for federal agency trading partners for certain intragovernmental accounts were significantly out of balance in the aggregate for both fiscal years 2003 and 2002. We reported in previous years that the heart of the intragovernmental transactions issue was that the federal government lacked clearly articulated business rules for these transactions so that they would be handled consistently by agencies. In this regard, at the start of fiscal year 2003, OMB issued business rules to transform and standardize intragovernmental ordering and billing. To address long-standing problems with intragovernmental exchange transactions between federal agencies, Treasury provided federal agencies with quarterly detailed trading partner information during fiscal year 2003 to help them better perform their trading partner reconciliations. In addition, the federal government began a three-phase Intragovernmental Transactions e-gov project to define a governmentwide data architecture and provide a single source of detailed trading partner data. Resolving the intragovernmental transactions problem, though, still remains a difficult challenge and will require a commitment by the CFO Act agencies and continued strong leadership by OMB. 
The federal government did not have adequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP. In this regard, Treasury is developing a new system and procedures to prepare the consolidated financial statements beginning with the statements for fiscal year 2004. Treasury officials have stated that these actions are intended to, among other things, directly link information from federal agencies’ audited financial statements to amounts reported in the consolidated financial statements and resolve many of the issues we identified in the process for preparing the consolidated financial statements. Resolving issues surrounding preparing the consolidated financial statements will require continued strong leadership by Treasury management. Our nation’s large and growing long-term fiscal imbalance, which is driven largely by known demographic trends and rising health care costs—coupled with new homeland security and defense commitments—serves to sharpen the need to fundamentally review and re-examine basic federal entitlements, as well as other mandatory and discretionary spending, and tax policies. As we look ahead, our nation faces an unprecedented demographic challenge with significant implications, among them budgetary and economic. Between now and 2035, the number of people who are 65 years old or over will double, driving federal spending on the elderly to a larger and ultimately unsustainable share of the federal budget. As a result, tough choices will be required to address the resulting structural imbalance. GAO prepares long-term budget simulations that seek to illustrate the likely fiscal consequences of the coming demographics and rising health care costs. Our latest long-term budget simulations reinforce the need for change in the major cost drivers—Social Security and health care programs.
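The basic mechanics behind such long-term simulations can be illustrated with a toy debt-dynamics model: each year's deficit is added to the debt, and interest on that debt compounds the problem. This is a sketch under made-up parameters, not GAO's actual simulation model; the growth rates, interest rate, and starting values below are purely illustrative.

```python
def project_budget(years, revenue0, spending0, debt0,
                   revenue_growth, spending_growth, interest_rate):
    """Toy debt-dynamics projection (all parameters illustrative).
    Each year the deficit (spending plus interest minus revenue) is
    added to the debt; returns a list of (debt, interest/revenue)
    pairs showing interest absorbing an ever-larger share of revenue."""
    revenue, spending, debt = revenue0, spending0, debt0
    path = []
    for _ in range(years):
        interest = debt * interest_rate
        deficit = spending + interest - revenue
        debt += deficit
        path.append((round(debt, 1), round(interest / revenue, 3)))
        revenue *= 1 + revenue_growth
        spending *= 1 + spending_growth
    return path
```

When program spending grows faster than revenue, as the demographic trends imply, the interest-to-revenue ratio climbs steadily, which is the dynamic behind the observation that projected revenues may eventually cover little beyond interest on the debt.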
As shown in figure 3, by 2040, absent reform of these entitlement programs, projected federal revenues may be adequate to pay little beyond interest on the debt.

Current financial reporting does not clearly and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending, and it provides an unrealistic and even misleading picture of the federal government’s overall performance and financial condition. Few agencies adequately show the results they are getting with the taxpayer dollars they spend. In addition, too many significant federal government commitments and obligations, such as Social Security and Medicare, are not fully and consistently disclosed in the federal government’s consolidated financial statements and budget, and current federal financial reporting standards do not require such disclosure. Figure 4 shows some selected fiscal exposures. The spectrum of these exposures ranges from covering only the explicit liabilities that are shown on the consolidated financial statements to implicit promises embedded in current policy or public expectations. These liabilities, commitments, and promises have created a fiscal imbalance that will put unprecedented strains on the nation’s spending and tax policies. Although economic growth can help, the projected fiscal gap is now so large that the federal government will not be able to simply grow its way out of the problem. Tough choices are inevitable. Particularly troubling are the many big-ticket items that taxpayers will eventually have to deal with. The federal government has pledged its support to a long list of programs and activities, including pension and health care benefits for senior citizens, medical care for veterans, and contingencies associated with various government-sponsored entities, whose claims on future spending total trillions of dollars.
Despite their serious implications for future budgets, tax burdens, and spending flexibilities, these unfunded commitments get short shrift in the federal government’s current financial statements and in budgetary deliberations. The federal government’s gross debt as of September 2003 was about $7 trillion, or about $24,000 for every man, woman, and child in this country today. But that number excludes items such as the gap between promised and funded Social Security and Medicare commitments and veterans health care benefit commitments provided through the Department of Veterans Affairs. If these items are factored in, the burden for every American rises to well over $100,000. In addition, the new Medicare prescription drug benefit will add thousands more to that tab. The new drug benefit is one of the largest unfunded commitments ever undertaken by the federal government. The Trustees of the Social Security and Medicare trust funds will include an official estimate of the discounted present value cost of this new benefit over the next 75 years in their annual report, which is scheduled for issuance later this month. Preliminary estimates of its long-term cost range up to $7 trillion in discounted present value terms over a 75-year period. To put that number into perspective, it is as much as the total amount of the federal government’s gross debt outstanding as of September 30, 2003. Even before the prescription drug benefit was enacted, our long-term budget simulations showed that by 2040, the federal government may have to cut federal spending in half or double taxes to pay for the mounting cost of the government’s current unfunded commitments. Either would be devastating. Proper accounting and reporting practices are essential in the public sector. After all, the U.S. government is the largest, most diverse, most complex, and arguably the most important entity on earth today. 
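The per-person debt figure cited above is straightforward division. A minimal sketch (assuming a 2003 U.S. resident population of roughly 291 million, a figure not stated in the testimony) shows the arithmetic:

```python
# Back-of-the-envelope check of the per-capita gross debt figure.
# The population value is an assumption, not a number from the testimony.
gross_debt = 7.0e12   # gross federal debt, about $7 trillion (September 2003)
population = 291e6    # assumed 2003 U.S. resident population

per_capita = gross_debt / population
print(f"${per_capita:,.0f} per person")  # roughly $24,000
```

By the same arithmetic, a per-person burden of well over $100,000, as cited above once unfunded commitments are factored in, implies total exposures on the order of $30 trillion or more.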
Its services—homeland security, national defense, Social Security, mail delivery, and food inspection, to name a few—directly affect the well-being of almost every American. But sound decisions on the future direction of vital federal government programs and policies are made more difficult without timely, accurate, and useful financial and performance information. Fortunately, we are starting to see efforts to address the shortcomings in federal financial reporting. The President’s Management Agenda, which closely reflects GAO’s list of high-risk government programs, is bringing attention to troubled areas across the federal government and is taking steps to better assess the results that programs are getting with the resources they are given. The Federal Accounting Standards Advisory Board is also making progress on many key financial reporting issues. In addition to these efforts, we have published a framework for analyzing various Social Security reform proposals and will soon publish a framework for analyzing health care reform proposals. We have also helped to create a consortium of “good government” organizations to stimulate the development of a set of key national indicators to assess the United States’ overall position and progress over time and in comparison to those of other industrialized nations. Budget experts at the Congressional Budget Office (CBO) and GAO continue to encourage reforms to the federal budget process to better reflect the federal government’s commitments and signal emerging problems. Among other things, we have recommended that the federal government issue an annual report on major fiscal exposures. The President’s fiscal year 2005 budget also proposes that future President’s budgets report on any enacted legislation in the past year that worsens the unfunded obligations of programs with long-term actuarial projections, with CBO to make a similar report. Such reporting could be a good starting point.
Although these are positive initial steps, much more must be done given the magnitude of the federal government’s fiscal challenge. A top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority is long overdue. As I have spoken about in the past, the federal government needs a three-pronged approach to (1) restructure existing entitlement programs, (2) reexamine the base of discretionary and other spending, and (3) review and revise the federal government’s tax policy and enforcement programs. New accounting and reporting approaches, budget control mechanisms, and metrics are needed for considering and measuring the impact of spending and tax policies and decisions over the long term. Our report on the U.S. government’s consolidated financial statements for fiscal years 2003 and 2002 highlights the need to continue addressing the federal government’s serious financial management weaknesses. With the significantly accelerated financial reporting time frame for fiscal year 2004 and beyond, it is essential that the federal government move away from the extraordinary efforts many federal agencies continue to make to prepare financial statements and toward giving prominence to strengthening the federal government’s financial systems, reporting, and controls. This is the only way the federal government can meet the end goal of making timely, accurate, and useful financial and performance information routinely available to the Congress, other policymakers, and the American public. The requirement for timely, accurate, and useful financial and performance management information is greater than ever as our nation faces major long-term fiscal challenges that will require tough choices in setting priorities and linking resources to results.
The Congress and the President face the challenge of sorting out the many claims on the federal budget without the budget enforcement mechanisms or fiscal benchmarks that guided the federal government through the previous years of deficit reduction into the brief period of surplus. While a number of steps will be necessary to address this challenge, truth and transparency in federal government reporting are essential elements of any attempt to address the nation’s long-term fiscal challenges. The fiscal risks I mentioned earlier can be managed only if they are properly accounted for and publicly disclosed. A crucial first step will be to face facts and identify the significant commitments facing the federal government. If citizens and federal government officials come to understand various fiscal exposures and their potential claims on future budgets, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. In addition, new budget control mechanisms will be required, along with effective approaches to successfully engage in a fundamental review, reassessment, and reprioritization of the base of federal government programs and policies that I have recommended previously. Public officials will have more incentive to make difficult but necessary choices if the public has the facts and comes to support serious and sustained action to address the nation’s fiscal challenges. Without meaningful public debate, however, real and lasting change is unlikely. Clearly, the sooner action is taken, the easier it will be to turn things around. I believe a national education campaign and outreach effort is needed to help the public understand the nature and magnitude of the long-term financial challenge facing this nation. An informed electorate is essential for a sound democracy. 
Members of Generation X and Y especially need to become active in this discussion because they and their children will bear the heaviest burden if policymakers fail to act in a timely and responsible manner. We at GAO are committed to doing our part, but others also need to step up to the plate. By working together, I believe we can make a meaningful difference for our nation, fellow citizens, and future generations of Americans.

In closing, Mr. Chairman, I want to reiterate the value of sustained congressional interest in these issues, as demonstrated by this subcommittee’s hearings and those the former Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations held over the past several years to oversee financial management reform. It will also be key that the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving these problems and that they support improvement efforts.

For further information regarding this testimony, please contact Jeffrey C. Steinhoff, Managing Director, and Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-2600.

R. Navarro & Associates, Inc.

Primary Effects on the Fiscal Years 2003 and 2002 Consolidated Financial Statements and the Management of Government Operations

Without accurate asset information, the federal government does not fully know the assets it owns and their location and condition and cannot effectively (1) safeguard assets from physical deterioration, theft, or loss, (2) account for acquisitions and disposals of such assets, (3) ensure the assets are available for use when needed, (4) prevent unnecessary storage and maintenance costs or purchase of assets already on hand, and (5) determine the full costs of programs that use these assets.
Problems in accounting for liabilities affect the determination of the full cost of the federal government’s current operations and the extent of its liabilities. Also, improperly stated environmental and disposal liabilities and weak internal control supporting the process for their estimation affect the federal government’s ability to determine priorities for cleanup and disposal activities and to allow for appropriate consideration of future budgetary resources needed to carry out these activities. In addition, when disclosures of commitments and contingencies are incomplete or incorrect, reliable information is not available about the extent of the federal government’s obligations. Inaccurate cost information affects the federal government’s ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. Improperly recorded disbursements could result in misstatements in the financial statements and in certain data provided by federal agencies for inclusion in the President’s budget concerning obligations and outlays. Problems in accounting for and reconciling intragovernmental activity and balances impair the government’s ability to account for billions of dollars of transactions between governmental entities. Until the differences between the total net outlays reported in federal agencies’ Statements of Budgetary Resources and the records used by the Department of the Treasury to prepare the Statement of Changes in Cash Balance from Unified Budget and Other Activities are reconciled, the effect that these differences may have on the U.S. government’s consolidated financial statements will be unknown. 
Because the federal government did not have adequate systems, controls, and procedures to prepare its consolidated financial statements, the federal government’s ability to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with U.S. generally accepted accounting principles was impaired. Without a systematic measurement of the extent of improper payments, federal agency management cannot determine (1) if improper payment problems exist that require corrective action, (2) mitigation strategies and the appropriate amount of investments to reduce them, and (3) the success of efforts implemented to reduce improper payments. Weaknesses in the processes and procedures for estimating credit program costs affect the government’s ability to support annual budget requests for these programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities. Information security weaknesses over computerized operations are placing enormous amounts of federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. Weaknesses in controls over tax collection activities continue to affect the federal government’s ability to efficiently and effectively account for and collect revenue. Additionally, weaknesses in financial reporting affect the federal government’s ability to make informed decisions about collection efforts. As a result, the federal government is vulnerable to loss of tax revenue and exposed to potentially billions of dollars in losses due to inappropriate refund disbursements.
The federal government did not maintain adequate systems or have sufficient, reliable evidence to support information reported in the consolidated financial statements of the U.S. government, as described below. These material deficiencies contributed to our disclaimer of opinion on the consolidated financial statements and also constitute material weaknesses in internal control. The federal government could not satisfactorily determine that all property, plant, and equipment (PP&E) and inventories and related property were included in the consolidated financial statements, verify that certain reported assets actually exist, or substantiate the amounts at which they were valued. Most of the PP&E and inventories and related property are the responsibility of the Department of Defense (DOD). As in past years, DOD did not maintain adequate systems or have sufficient records to provide reliable information on these assets. Other agencies, most notably the National Aeronautics and Space Administration, reported continued weaknesses in internal control procedures and processes related to PP&E. The federal government could not reasonably estimate or adequately support amounts reported for certain liabilities. For example, DOD was not able to estimate with assurance key components of its environmental and disposal liabilities. In addition, DOD could not support a significant amount of its estimated military postretirement health benefits liabilities included in federal employee and veteran benefits payable. These unsupported amounts related to the cost of direct health care provided by DOD-managed military treatment facilities. Further, the federal government could not determine whether commitments and contingencies, including those related to treaties and other international agreements entered into to further the U.S. government’s interests, were complete and properly reported.
The material deficiencies in reporting assets and liabilities discussed above, the material deficiencies in financial statement preparation discussed below, and the lack of adequate disbursement reconciliations at certain federal agencies affect reported net costs. As a result, the federal government was unable to support significant portions of the total net cost of operations, most notably related to DOD. With respect to disbursements, DOD and certain other federal agencies did not adequately reconcile disbursement activity. For fiscal years 2003 and 2002, there were unsupported adjustments to federal agencies’ records and unreconciled disbursement activity, including unreconciled differences between federal agencies’ and Treasury’s records of disbursements, totaling billions of dollars, which could also affect the balance sheet. OMB and Treasury require the CFOs of 35 executive departments and agencies, including the 23 CFO Act agencies, to reconcile selected intragovernmental activity and balances with their “trading partners” and to report to Treasury, the agency’s inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts. A substantial number of the agencies did not fully perform the required reconciliations for fiscal years 2003 and 2002, citing reasons such as (1) trading partners not providing needed data, (2) limitations and incompatibility of agency and trading partner information systems, and (3) lack of human resources. For both of these years, amounts reported for federal agency trading partners for certain intragovernmental accounts were significantly out of balance. Treasury’s ability to eliminate certain intragovernmental activity and balances is impaired by these federal agencies’ problems in handling their intragovernmental transactions.
OMB Bulletin 01-09, Form and Content of Agency Financial Statements, states that outlays in federal agencies’ Statements of Budgetary Resources (SBR) should agree with the respective agency’s net outlays reported in the budget of the U.S. government. In addition, SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires explanation of any material differences between the information required to be disclosed (including net outlays) and the amounts described as “actual” in the budget of the U.S. government. We found material differences between the total net outlays reported in selected federal agencies’ audited SBRs and the records used to prepare the Statement of Changes in Cash Balance from Unified Budget and Other Activities (Statement of Changes in Cash Balance), totaling about $140 billion and $186 billion for fiscal years 2003 and 2002, respectively. Two agencies (Treasury and the Department of Health and Human Services (HHS)) accounted for about 83 percent and 75 percent of the differences identified in fiscal years 2003 and 2002, respectively. We found that the major cause of the differences for the two agencies was the treatment of offsetting receipts. Some offsetting receipts for these two agencies had not been included in the agencies’ SBRs, which would have reduced the agencies’ net outlays and made the amounts more consistent with the records used to prepare the Statement of Changes in Cash Balance. For example, we found that HHS reported net outlays for fiscal year 2003 as $596 billion on its audited SBR, while the records that Treasury uses to prepare the Statement of Changes in Cash Balance showed $505 billion for fiscal year 2003 for this agency. 
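The size of the HHS difference follows from simple subtraction. The sketch below uses only the two net outlay figures reported above and labels the gap as the amount the omitted offsetting receipts would need to explain; this is an illustration of the arithmetic, not a reported reconciling item:

```python
# Arithmetic behind the HHS example: the gap between net outlays on HHS's
# audited SBR and the records Treasury used for the Statement of Changes
# in Cash Balance. Both inputs are from the testimony; the gap is derived.
hhs_sbr_net_outlays = 596e9  # HHS audited SBR, fiscal year 2003
treasury_records = 505e9     # Treasury's records for HHS, fiscal year 2003

gap = hhs_sbr_net_outlays - treasury_records
print(f"${gap / 1e9:.0f} billion")  # $91 billion for HHS alone
```

Measured against the roughly $140 billion in total fiscal year 2003 differences, this single agency accounts for a large share, consistent with Treasury and HHS together explaining about 83 percent of the differences identified.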
Until these differences between the total net outlays reported in the federal agencies’ SBRs and the records used to prepare the Statement of Changes in Cash Balance are reconciled, the effect that these differences may have on the U.S. government’s consolidated financial statements will be unknown. OMB has stated that it plans to work with the agencies to address this issue. The federal government did not have adequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with generally accepted accounting principles (GAAP). During our fiscal year 2003 audit, we found the following: The process for compiling the consolidated financial statements does not directly link information from federal agencies’ audited financial statements to amounts reported in the consolidated financial statements, and therefore does not ensure that the information in the consolidated financial statements is consistent with the underlying information in federal agencies’ audited financial statements and other financial data. Internal control weaknesses exist in Treasury’s process for preparing the consolidated financial statements, such as a lack of (1) segregation of duties and (2) appropriate documentation of certain policies and procedures for preparing the consolidated financial statements. The net position reported in the consolidated financial statements is derived by subtracting liabilities from assets, rather than through balanced accounting entries. 
To make the fiscal years 2003 and 2002 consolidated financial statements balance, Treasury recorded a net $24.5 billion and a net $17.1 billion decrease, respectively, to net operating cost on the Statements of Operations and Changes in Net Position, which it labeled “Unreconciled Transactions Affecting the Change in Net Position.” An additional net $11.3 billion and $12.5 billion of unreconciled transactions were recorded in the Statements of Net Cost for fiscal years 2003 and 2002, respectively. Treasury does not identify and quantify all components of these unreconciled activities, nor does Treasury perform reconciliation procedures, which would aid in understanding and controlling the net position balance as well as eliminating the unreconciled transactions associated with compiling the consolidated financial statements. Significant differences in other intragovernmental accounts, primarily related to appropriations, still remain unresolved. Intragovernmental activity and balances are “dropped” or “offset” in the preparation of the consolidated financial statements rather than eliminated through balanced accounting entries. This contributes to the federal government’s inability to determine the impact of these differences on amounts reported in the consolidated financial statements. The federal government did not have an adequate process to identify and report items needed to reconcile the operating results, which for fiscal year 2003 showed a net operating cost of $665 billion, to the budget results, which for the same period showed a unified budget deficit of $374.8 billion. The consolidated financial statements include certain financial information for the executive, legislative, and judicial branches, to the extent that federal agencies within those branches have provided Treasury such information. 
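The gap between the accrual-basis operating result and the cash-basis budget result that lacked an adequate reconciliation process can be quantified from the two figures above. The sketch below derives the net amount that reconciling items would need to explain; the difference is computed here, not reported in the statements:

```python
# Accrual-to-budget gap for fiscal year 2003. The two endpoint figures are
# from the testimony; the difference is derived arithmetic.
net_operating_cost = 665.0e9      # accrual-basis net operating cost
unified_budget_deficit = 374.8e9  # cash-basis unified budget deficit

gap = net_operating_cost - unified_budget_deficit
print(f"${gap / 1e9:.1f} billion of reconciling items")  # $290.2 billion
```

Much of such a gap would ordinarily reflect accrual-versus-cash timing differences, such as increases in liabilities accrued but not yet paid; the deficiency noted above is that the federal government had no adequate process to identify and report those items.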
However, there are undetermined amounts of assets, liabilities, costs, and revenues that are not included, and the federal government did not provide evidence or disclose in the consolidated financial statements that such excluded financial information was immaterial. Treasury lacks an adequate process to ensure that the financial statements, related notes, Stewardship Information, and Supplemental Information are presented in conformity with GAAP. We found that certain financial information required by GAAP was not disclosed in the consolidated financial statements. Treasury did not provide us with documentation of its rationale for excluding this information. As a result of this and certain material deficiencies noted above, we were unable to determine if the missing information was material to the consolidated financial statements. In addition to the material deficiencies noted above, we found four other material weaknesses in internal control as of September 30, 2003: (1) several federal agencies continue to have deficiencies in the processes and procedures used to estimate the costs of their lending programs and value their related loans receivable; (2) most federal agencies have not reported the magnitude of improper payments in their programs and activities; (3) federal agencies have not yet fully institutionalized comprehensive security management programs; and (4) material internal control weaknesses and systems deficiencies continue to affect the federal government’s ability to effectively manage its tax collection activities. In general, federal agencies continue to make progress in reducing the number of material weaknesses and reportable conditions related to their lending activities. However, significant deficiencies in the processes and procedures used to estimate the costs of certain lending programs and value the related loans receivable still remain. 
The most notable deficiencies existed at the Small Business Administration (SBA), which, while improved from last year, continues to have a material weakness related to this area. For example, SBA did not adequately document its estimation methodologies, lacked the management controls necessary to ensure that appropriate estimates were prepared and reported based on complete and accurate data, and could not fully support the reasonableness of the costs of its lending programs and valuations of its loan portfolio. SBA’s material weakness plus deficiencies at other federal credit agencies relating to the processes and procedures for estimating credit program costs continue to adversely affect the government’s ability to support annual budget requests for these programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities. Across the federal government, improper payments occur in a variety of programs and activities, including those related to health care, contract management, federal financial assistance, and tax refunds. While complete information on the magnitude of improper payments is not yet available, based on available data, OMB has estimated that improper payments exceed $35 billion annually. Many improper payments occur in federal programs that are administered by entities other than the federal government, such as states. Improper payments often result from a lack of or an inadequate system of internal controls. Although the President’s Management Agenda includes an initiative to reduce improper payments, most federal agencies have not reported the magnitude of improper payments in their programs and activities. The Improper Payments Information Act of 2002 provides for federal agencies to estimate and report on their improper payments. 
It requires federal agencies to (1) annually review programs and activities that they administer to identify those that may be susceptible to significant improper payments, (2) estimate improper payments in susceptible programs and activities, and (3) provide reports to the Congress that discuss the causes of improper payments identified and the status of actions to reduce them. In accordance with the legislation, OMB issued guidance for federal agencies’ use in implementing the act. Among other things, the guidance requires federal agencies to report on their improper payment-related activities in the Management Discussion and Analysis section of their annual Performance and Accountability Reports (PAR). While the act does not require such reporting by all federal agencies until fiscal year 2004, OMB required 44 programs and 14 CFO Act agencies to report improper payment information in their fiscal year 2003 PARs. Our preliminary review of the PARs found that 12 of the 14 agencies reported improper payment amounts for 27 of the 44 programs identified in the guidance. We also found that, for the programs where improper payments were identified, the reports often contained information on the causes of the payments but little information that addressed the other reporting requirements cited in the legislation. Although progress has been made, serious and widespread information security weaknesses continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. GAO has reported information security as a high-risk area across government since February 1997. Such information security weaknesses could result in compromising the reliability and availability of data that are recorded in or transmitted by federal financial management systems. 
A primary reason for these weaknesses is that federal agencies have not yet fully institutionalized comprehensive security management programs, which are critical to identifying information security weaknesses, resolving information security problems, and managing information security risks on an ongoing basis. The Congress has shown continuing interest in addressing these risks, as evidenced by recent hearings on information security and enactment of the Federal Information Security Management Act of 2002 and the Cyber Security Research and Development Act. In addition, the administration has taken important actions to improve information security, such as integrating information security into the Executive Branch Management Scorecard. Material internal control weaknesses and systems deficiencies continue to affect the federal government’s ability to effectively manage its tax collection activities. Due to errors and delays in recording activity in taxpayer accounts, taxpayers were not always credited for payments made on their taxes owed, which could result in undue taxpayer burden. In addition, the federal government did not always follow up on potential unreported or underreported taxes and did not always pursue collection efforts against taxpayers owing taxes to the federal government.

GAO is required to audit the consolidated financial statements of the U.S. government. Proper accounting and reporting practices are essential in the public sector. The U.S. government is the largest, most diverse, most complex, and arguably the most important entity on earth today.
Its services—homeland security, national defense, Social Security, mail delivery, and food inspection, to name a few—directly affect the well-being of almost every American. But sound decisions on the future direction of vital federal government programs and policies are made more difficult without timely, accurate, and useful financial and performance information. Until the problems discussed in GAO's audit report on the U.S. government's consolidated financial statements are adequately addressed, they will continue to (1) hamper the federal government's ability to accurately report a significant portion of its assets, liabilities, and costs; (2) affect the federal government's ability to accurately measure the full cost as well as the financial and nonfinancial performance of certain programs while effectively managing related operations; and (3) significantly impair the federal government's ability to adequately safeguard certain significant assets and properly record various transactions. As in the 6 previous fiscal years, certain material weaknesses in internal control and in selected accounting and reporting practices resulted in conditions that continued to prevent GAO from being able to provide the Congress and American citizens an opinion as to whether the consolidated financial statements of the U.S. government are fairly stated in conformity with U.S. generally accepted accounting principles. Three major impediments to an opinion on the consolidated financial statements continue to be (1) serious financial management problems at DOD, (2) the federal government's inability to fully account for and reconcile transactions between federal government entities, and (3) the federal government's ineffective process for preparing the consolidated financial statements.
For fiscal year 2003, 20 of 23 Chief Financial Officers (CFO) Act agencies received unqualified opinions, the same number received by these agencies for fiscal year 2002, up from 6 for fiscal year 1996. However, only 3 of the CFO Act agencies had no material weakness in internal control, no issue involving compliance with applicable laws and regulations, and no instance of lack of substantial compliance with Federal Financial Management Improvement Act requirements. The requirement for timely, accurate, and useful financial and performance management information is greater than ever as the nation faces major long-term fiscal challenges that will require tough choices in setting priorities and linking resources to results. Given the nation's large and growing long-term fiscal imbalance, which is driven largely by known demographic trends and health care costs, coupled with new homeland security and defense commitments, the status quo is unsustainable. Current financial reporting does not clearly and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending, and it thus provides an unrealistic and even misleading picture of the federal government's overall performance and financial condition. In addition, too many significant federal government commitments and obligations, such as Social Security and Medicare, are not fully and consistently disclosed in the federal government's financial statements and budget, and current federal financial reporting standards do not require such disclosure. A top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority is long overdue.
The federal government needs a three-pronged approach to (1) restructure existing entitlement programs, (2) reexamine the base of discretionary and other spending, and (3) review and revise the federal government's tax policy and enforcement programs. New accounting and reporting approaches, budget control mechanisms, and metrics are needed for considering and measuring the impact of spending and tax policies and decisions over the long term.
The cannabis plant, commonly known as marijuana, is the most widely used illicit drug in the United States. According to recent national survey figures, over 75 percent of the 14 million illicit drug users 12 years or older are estimated to have used marijuana alone or with other drugs in the month prior to the survey. Marijuana can be consumed in food or drinks, but most commonly dried portions of the leaves and flowers are smoked. Marijuana is widely used and is the only major drug of abuse grown within U.S. borders, according to the Drug Enforcement Administration. Marijuana is a controlled substance under federal law and is classified in the most restrictive category of drugs by the federal government. The federal Controlled Substances Act of 1970 (CSA) places all federally controlled substances into one of five “schedules,” depending on the drug’s potential for abuse or dependence and whether the drug has an accepted medical use. Marijuana is classified under Schedule I, the classification reserved for drugs that the federal government has found to have a high abuse potential, a lack of accepted safety under medical supervision, and no currently accepted medical use. In contrast, the other schedules are for drugs of varying addictive properties that the federal government has found to have a currently accepted medical use. The CSA does not allow Schedule I drugs to be dispensed upon a prescription, unlike drugs in the other schedules. In particular, the CSA provides federal sanctions for possession, manufacture, distribution, or dispensing of Schedule I substances, including marijuana, except in the context of a government-approved research project. The potential medical value of marijuana has been the subject of continuing debate.
For example, beginning in 1978, the federal government allowed the first patient to use marijuana as medicine under the “Single Patient Investigational New Drug” procedure, which allows treatment for individual patients using drugs that have not been approved by the Food and Drug Administration. An additional 12 patients were approved under the procedure between 1978 and 1992. When the volume of applicants tripled, the Secretary of the Department of Health and Human Services (HHS) decided not to supply marijuana to any more patients. According to Kuromiya v. United States, HHS concluded that the use of the single patient Investigational New Drug procedure would not yield useful data to resolve the remaining safety and effectiveness issues. In 1999, an Institute of Medicine study commissioned by the White House Office of National Drug Control Policy recognized both a potential therapeutic value and potential harmful effects, particularly the harmful effects from smoked marijuana. The study called for more research on the physiological and psychological effects of marijuana and on better delivery systems. A 2001 report by the American Medical Association’s Council on Scientific Affairs also summarized the medical and scientific research in this area, similarly calling for more research. In May 1999, HHS released procedures allowing researchers not funded by the National Institutes of Health to obtain research-grade marijuana for approved clinical studies. Sixteen proposals had been submitted for research under these procedures, and seven had been approved as of May 2002. Some states have passed laws that create a medical use exception to otherwise applicable state marijuana sanctions.
California was the first state to pass such a law in 1996, when California voters passed a ballot initiative, Proposition 215 (The Compassionate Use Act of 1996), that removed certain state criminal penalties for the medical use of marijuana. Since then, voters in Oregon, Alaska, Colorado, Maine, Washington, and Nevada have passed medical marijuana initiatives, and Hawaii has enacted a medical marijuana measure through its legislature. While state criminal penalties do not apply to medical marijuana users as defined by the state’s statute, federal penalties remain, as determined by the Supreme Court in United States v. Oakland Cannabis Buyers’ Cooperative. (Appendix II provides more information on the Supreme Court’s decision.) In California, Alaska, and Oregon, where voters passed medical marijuana laws through ballot initiatives, each state provided an official ballot pamphlet, which included the text of the proposed law and arguments from proponents and opponents. Opponents of the initiatives referred to federal marijuana prohibitions, legal marijuana alternatives, and evidence of the dangers of smoked marijuana. Proponents referred to supportive studies and positive statements from medical personnel. In Hawaii, where the state legislature enacted the medical marijuana measure, law enforcement officials, advocacy groups, and medical professionals made similar arguments for or against the proposed law during the legislative process. Oregon, Alaska, Hawaii, and California laws allow medical use of marijuana under certain conditions. All four states require a patient to have a physician’s recommendation to be eligible for medical marijuana. Consistent with their laws, Oregon, Alaska, and Hawaii also have designated a state agency to administer patient registries—which document a patient’s eligibility to use medical marijuana based on the written certification of a licensed physician—and issue cards to identify certified registrants.
Also, laws in Oregon, Alaska, and Hawaii establish limits on the amounts of marijuana a patient is allowed to possess for medical purposes. California does not provide for state implementation of its law. In particular, California has not delegated authority to a state agency or established a statewide patient registry. In addition, California law does not prescribe a specific amount of marijuana that can be possessed for medical purposes. In the absence of specific statutory language, some local California jurisdictions have established their own registries, physician certification requirements, and guidelines for allowable marijuana amounts for medical purposes. Only Oregon has reviewed its medical marijuana program and, as a result of that review, has changed some of its procedures and practices, including verifying all doctor recommendations. To document their eligibility to engage in medical marijuana use, applicants in Oregon, Alaska, and Hawaii must register with state agencies charged with implementing provisions of the medical marijuana laws in those states (hereinafter referred to as registry states). In Oregon, the Department of Human Services is responsible, and in Alaska, the Department of Health and Social Services. In Hawaii, the Narcotics Enforcement Division within the Department of Public Safety is responsible for the state’s medical marijuana registry. Applicants meeting state requirements are entered into a registry maintained by each state. In California, a number of counties have established voluntary registries to certify eligibility under the state’s medical marijuana law. The three registry states, Oregon, Alaska, and Hawaii, have similar registry requirements. Potential registrants must supply written documentation from a physician licensed in that state certifying that the person suffers from a debilitating medical condition (as defined by the state statute) and, in the physician’s opinion, would benefit from the use of marijuana.
They also must provide information on the name, address, and birth date of the applicant (and of their caregiver, where one is specified), along with identification to verify the personal information. In each state, registry agencies must verify the information in the application based on procedures set in that state’s statutes or regulations before issuing the applicant a medical marijuana identification card. All three states allow law enforcement officers to rely upon registry applications in lieu of registry cards to determine whether a medical use exception applies. Figure 1 provides an example of the registry card issued by Oregon. (Appendix III provides examples of registry cards from Alaska and Hawaii.) Hawaii’s Department of Public Safety requires that doctors submit the completed registry application to the state agency; if it is approved, the medical use certification is returned to the doctor for issuance to the patient. By contrast, registry agencies in Oregon and Alaska require that the registry card applicant submit the physician statement as part of the application, and those agencies issue the card directly to the patient. Alaska allows registry cards to be revoked if the registrant commits an offense involving a controlled substance of any type, whereas Oregon and Hawaii allow registry cards to be revoked only for marijuana-related offenses, such as sale. Table 1 summarizes registry requirements and verification procedures of the responsible agencies in each registry state as of July 2002. California’s statute does not establish a state registry or require that a person or caregiver be registered to qualify for a medical use exception. California’s law requires that medical use has been recommended by a physician who has determined that the person’s health would benefit from the use of marijuana for certain symptoms or conditions.
The exception applies based “upon the written or oral recommendation or approval of a physician.” After the medical marijuana law was passed, the California Attorney General assembled a task force to discuss implementation issues in light of the “ambiguities and significant omissions in the language of the initiative.” The task force recommended, among other things, that a statewide registry be created and administered by the Department of Health Services to clarify California’s law. However, a bill incorporating many of the ideas agreed upon by the task force was not enacted by the California legislature. Some California communities have created voluntary local registries to provide medical marijuana users with registry cards to document that the cardholder has met certain medical use requirements. Figure 2 provides examples of patient and caregiver registry cards issued by San Francisco’s Department of Public Health. (See the following section for a discussion of caregivers.) According to a September 2000 letter by the California Attorney General, medical marijuana policies have been created in some counties. Local registries have been created in Humboldt, Mendocino, San Francisco, and Sonoma counties. A medical marijuana registry in the city of Arcata, located in Humboldt County, was discontinued; however, the Arcata police department accepts registry cards from Humboldt County. A more recent list of medical marijuana registries operated by a county or city was not available, an official with the Attorney General’s office said, because there is no requirement for counties or cities to report on provisions they adopt regarding medical use of marijuana. At least two counties have since approved development of county medical marijuana registries: San Diego in November 2001 and Del Norte in April 2002. Several cannabis buyers’ clubs, or cannabis cooperatives, may also have established voluntary registries of their members.
(Appendix III provides additional discussion on state registry procedures in Oregon, Alaska, and Hawaii, procedures in selected California county registries, and examples of registry cards.) Laws in Oregon, Alaska, Hawaii, and California allow medical marijuana users to designate a primary caregiver. To qualify as a caregiver in the registry states, persons must be part of the state registry and be issued medical marijuana cards. Registered caregivers may assist registrants in their medical use of marijuana without violating state criminal laws for possession or cultivation of marijuana, within the allowed medical use amounts. Alaska allows registrants to designate a primary and an alternate caregiver. Both must submit a sworn statement that they are at least 21 years old, have not been convicted of a felony drug offense, and are not currently on probation or parole. In Hawaii and Alaska, caregivers can serve only one patient at a time. Alaska, however, allows exceptions for patients related to the caregiver by blood or marriage or, with agency approval, in circumstances where a patient resides in a licensed hospice program. Oregon does not specify a limit on the number of patients one caregiver may serve. Table 2 provides information on definitions and caregiver provisions in Oregon, Alaska, and Hawaii. California’s statute also allows qualified medical marijuana users to designate a primary caregiver. The statute defines “primary caregiver” to mean “the individual designated by the person exempted under this section who has consistently assumed responsibility for the housing, health or safety of that person.” There is no requirement that the patient–caregiver relationship be registered or otherwise documented, nor is there a specified limit on the number of patients who can designate a particular caregiver.
In all four states, patients must obtain a physician’s diagnosis that the patient suffers from a medical condition eligible for marijuana use under that state’s statute, as well as a physician’s recommendation for the use of marijuana. Unlike the other three states, California does not require that the diagnosis or recommendation be documented. In the registry states, patients must supply written documentation of their physician’s medical determination and marijuana recommendation in their registry applications. This documentation must conform to program requirements, reflecting that the physician made his or her recommendation in the context of a bona fide physician-patient relationship. California’s law does not require patients to submit documentation of a physician’s determination or recommendation to any state entity, nor does it specify particular examination requirements. According to California’s law, marijuana may be used for medical purposes “where that medical use is deemed appropriate and has been recommended by a physician who has determined that the person’s health would benefit from the use of marijuana” in treating certain medical conditions; such recommendations may be oral or written. The physician certification form adopted by Hawaii’s Department of Public Safety calls for doctors recommending marijuana to a patient to certify that “I have primary responsibility for the care and treatment of the named patient and based on my professional opinion and having completed a medical examination and/or full assessment of my patient’s medical history and current medical condition in the course of a bona fide physician-patient relationship have issued this written certificate.” Similarly, in Alaska, the recommending physician signs a statement that he or she personally examined the patient on a specific date and that the examination took place in the context of a bona fide physician-patient relationship.
Under Oregon’s medical marijuana law, the patient’s attending physician must supply physician documentation. Oregon’s administrative rules defining “attending physician” were amended in March 2002 to more fully describe the conditions for meeting the definition. To qualify, the physician must have established a physician-patient relationship with the patient and must diagnose the patient with a debilitating condition in the context of that relationship. Agency officials stated that they changed the definition of an attending physician in light of information that one doctor responsible for many medical marijuana recommendations had not followed standard physician-patient practices, such as keeping written patient records. (See physician section.) Under its regulations, the Department of Human Services will contact each physician making a medical marijuana recommendation to ensure that the physician is an “attending physician” and, with patient approval, the department may review the physician’s patient file in connection with this inquiry. The laws in all four states we reviewed identify medical conditions for which marijuana may be used for medical purposes. Table 3 displays the allowed medical conditions for which marijuana may be used in each state. (See appendix IV for descriptions from general medical sources of the allowable conditions identified by the state laws.) Statutes in Oregon, Alaska, and Hawaii define the maximum amount of marijuana and the number of plants that an individual registrant and their caregiver may possess under medical marijuana laws, while California’s statute does not provide such definitions. Oregon and Hawaii regulations also provide definitions of marijuana plant maturity. Table 4 provides the definitions of quantity and maturity for each registry state. California’s statute does not specify an amount of marijuana allowable under medical use provisions; however, some local jurisdictions have established their own guidelines.
The statute’s criminal exemption is for “personal medical purposes” but does not define an amount appropriate for personal medical purposes. The California Attorney General’s medical marijuana task force debated establishing an allowable amount but could not come to a consensus on this issue, proposing that the Department of Health Services determine an appropriate amount. Participants did agree that the amount of marijuana a patient may possess might well depend on the type and severity of illness. They concluded that an appropriate amount of marijuana was ultimately a medical issue, better analyzed and decided by medical professionals. In the absence of state-specified amounts, a number of the state’s 58 counties and some cities have informally established maximum allowable amounts of marijuana for medical purposes. According to the September 2000 summary by the California Attorney General’s office, the amount of marijuana an individual patient and their caregiver were allowed to have varied, with a two-plant limit in one area and a 48-plant (indoors, with mature flowers) limit in another area. In May 2002, Del Norte County raised its limit from 6 plants to 99 plants per individual patient. California, Oregon, Alaska, and Hawaii prohibit medical marijuana use in specific situations relating to safety or public use. Patients or caregivers who violate these prohibitions are subject to state marijuana sanctions and, in the registry states, may also forfeit their registry cards. Table 5 reflects the various states’ safety or public use restrictions. Oregon was the only state of the four we reviewed to have conducted a management review of its medical marijuana program. The Oregon Department of Human Services conducted the review after concerns arose that a doctor’s signature for marijuana recommendations had been forged. The review team reported a number of program areas needing improvement, and proposed a corrective plan of action.
Most of the actions had been completed as of May 2002. Lack of verification of physician signatures was a key problem identified by the team. All physician signatures are now verified. A number of other team findings had to do with program management and staffing. The Program Manager was replaced, additional staff were added, and their roles were clarified, according to officials. Another area of recommendation was the processing of applications and database management, such as how to handle incomplete applications, handling of voided applications, edit checks for data entry, and reducing the application backlog. As of May 2002, some action items were still open, such as computer “flags” for problem patient numbers or database checks on patients and caregivers at the same address. A relatively small number of people are registered as medical marijuana users in Oregon, Hawaii, and Alaska. In those states, most registrants were over 40 years old. Severe pain and muscle spasms (spasticity) were the most common medical conditions for which marijuana was recommended in the states where data were gathered. Relatively few people are registered as medical marijuana users in Alaska, Hawaii, and Oregon. In these states, registry data showed that the number of participants registered was 0.05 percent or less of the total population of each respective state. Data do not exist to identify the total population of people with medical conditions that might qualify for marijuana use because not all the conditions specified in the states’ laws are diseases for which population data are available. For example, a debilitating condition of “severe pain” may be a symptom of a number of specific medical conditions, such as a back injury; however, not all patients with back injuries suffer severe pain. Table 6 shows the number of patients registered in Oregon, Hawaii, and Alaska at the time of our review, as compared to the total population from the U.S.
Census Bureau population projections for 2002. There are no statewide data on participants in California because the medical marijuana law does not provide for a state registry. We obtained information from four county registries in San Francisco, Humboldt, Mendocino, and Sonoma counties. In each of these registries, participation was 0.5 percent or less of the respective county’s population. However, because the local registries are voluntary, it is unknown how many people in those jurisdictions have received medical recommendations from their doctors for marijuana but have not registered. Table 7 shows the number of patients registered in four California counties since each registry was established, and that number as a percentage of each county’s population. Most medical marijuana registrants in Hawaii and Oregon—the states where both gender and age data were available—were males over 40 years old. Hawaii and Oregon were the only states that provided gender information; in both cases approximately 70 percent of registrants were men. In Alaska, Hawaii, and Oregon, state records showed that over 70 percent of all registrants in each state were 40 years of age or older. Only in one state was there a person under the age of 18 registered as a medical marijuana user. Table 8 shows the distribution of registrants by age in the registry states. In California, none of the local jurisdictions we met with kept information on participants’ gender, and only the Sonoma County Medical Association provided information on their registrants’ age. The ages of medical association registrants were similar to those of participants in the state registries, though slightly younger. Over 60 percent of participants who have had their records reviewed by medical associations were 40 years or older. Most medical marijuana recommendations in states where data are collected have been made for applicants with severe pain or muscle spasticity as their medical condition.
Conditions allowed by the states’ medical marijuana laws ranged from illnesses, such as cancer and AIDS, to symptoms, such as severe pain. Information is not collected on the conditions for which marijuana has been recommended in Alaska or California. However, data from Hawaii’s registry showed that the majority of recommendations have been made for the condition of severe pain or the condition of muscle spasticity. Likewise, data from Oregon’s registry showed that 84 percent of recommendations were for the condition of severe pain or for muscle spasticity. Table 9 shows the number and percentage of patients registered by types of conditions in Oregon and Hawaii. On the basis of records from the Oregon registry, we reviewed the information provided by doctors for additional insight into the conditions for which registrants use marijuana. The Oregon registry keeps track of secondary conditions in cases where the recommending doctor specified more than one condition. We examined the pool of secondary conditions associated with severe pain and muscle spasms, the two largest condition categories. About 40 percent of those with severe pain reported muscle spasms, migraines, arthritis, or nausea as a secondary medical condition. The most common secondary conditions reported by those with spasms were pain, multiple sclerosis, and fibromyalgia, accounting for 37 percent of the secondary conditions for spasms. A variety of other secondary conditions were identified in the Oregon data, such as acid reflux, asthma, chronic fatigue syndrome, hepatitis C, and lupus. In Hawaii and Oregon, the two states where data on physicians are maintained, few physicians have made medical marijuana recommendations. Of the pool of recommending physicians in Oregon, most made only one or two recommendations. Over half of the medical organizations we contacted provide written guidance for physicians considering recommending marijuana.
Only a small percentage of physicians in Hawaii and Oregon were identified by state registries as having made recommendations for their patients to use marijuana as medicine. These two states maintain information on recommending physicians in their registry records. No information was available on physician participation in California and Alaska. In Hawaii, at the time of our review, there were 5,673 physicians licensed by the state’s medical board. Of that number, 44 physicians (0.78 percent) had recommended marijuana to at least one of their patients since the legislation was passed in June 2000. In Oregon, at the time of our review, 435 (3 percent) of the 12,926 licensed physicians in the state had participated in the medical marijuana program since May 1999. Both Hawaii’s and Oregon’s medical marijuana registration programs are relatively new, which may account for the low level of participation by physicians in both states. Oregon’s program has operated for a year longer than Hawaii’s; however, physician participation overall is low in both states. A Hawaii medical association official told us that he believes physicians consider a number of factors when deciding whether to recommend marijuana as medicine, such as the legal implications of recommending marijuana, the lack of conclusive research results on the drug’s medical efficacy, and a doctor’s own philosophical stance on the use of marijuana as medicine. The lower federal courts are divided over whether doctors can make medical marijuana recommendations without facing federal enforcement action, including the revocation of doctors’ DEA registrations that allow them to write prescriptions for federally controlled substances.
In one case, the district court for the Northern District of California held that the federal government could not revoke doctors’ registrations, stating that the de-registration policy raised “grave constitutional doubts” concerning doctors’ exercise of free speech rights in making medical marijuana recommendations. In the other case considering this issue, the district court for the District of Columbia ruled that the federal government could revoke doctors’ registrations, stating that “[e]ven though state law may allow for the prescription or recommendation of medicinal marijuana within its borders, to do so is still a violation of federal law under the CSA,” and “there are no First Amendment protections for speech that is used ‘as an integral part of conduct in violation of a valid criminal statute.’” Oregon is the only state we reviewed that has registry records identifying recommendations by doctor. Few Oregon physicians recommended medical marijuana to more than two patients. According to registry data, 82 percent of the participating physicians made one or two recommendations, and 18 percent made three or more recommendations. Table 10 shows a breakdown of the frequency with which physicians made marijuana recommendations. State or law enforcement officials in Oregon, California, and Hawaii indicated that they were each aware of a particular physician in their state who had recommended marijuana to many patients. In Alaska, a state official knew of no physician who had made many recommendations. In Oregon and California, the state medical boards have had formal complaints filed against these physicians for alleged violations of the states’ Medical Practices Acts, which establish physician standards for medical care.
The complaints charge the physicians with unprofessional conduct violations such as failure to conduct a medical examination, failure to maintain adequate and accurate records, and failure to confer with other medical care providers. In Oregon, the physician who had recommended marijuana to over 800 patients was disciplined. The California case was still pending. At the time of our review, there was no medical practice complaint filed against the Hawaii doctor known to have made many marijuana recommendations. In all four states, professional medical associations provide some guidance for physicians on recommending marijuana to patients. State medical boards, in general, have limited involvement in providing this type of guidance. Table 11 indicates the type of guidance available from these medical organizations in each state. In Oregon, for example, the guidance to physicians considering recommending marijuana to a patient includes avoiding any discussion with the patient of how to obtain marijuana and avoiding providing the patient with any written documentation other than that in the patient’s medical records. The medical association also advises physicians to clearly document in a patient’s medical records any conversations that take place between the physician and patient about the use of marijuana as medicine. Oregon’s medical association notes that until the federal government advises whether it considers a physician’s medical marijuana recommendation in a patient chart to violate federal law, no physician is fully protected from federal enforcement action. Most of the state medical board officials we contacted stated that the medical boards do not provide guidance for physicians on recommending marijuana to patients. The medical boards do become involved with physicians making marijuana recommendations if a complaint for violating state medical practices is filed against them.
Once a complaint is filed, the boards investigate a physician’s practice. Any subsequent action occurs if the allegations against a doctor include violations of the statutes regulating physician conduct. The California medical board’s informal guidance states that physicians recommending marijuana to their patients should apply accepted standards of medical responsibility, such as physical examination of the patient, development of a treatment plan, and discussion of side effects. In addition, the board warns physicians that their best legal protection is documenting how they arrived at their decision to recommend marijuana as well as any actions taken for the patient. Data are not readily available to show whether the introduction of medical marijuana laws has affected marijuana-related law enforcement activities. Assessing such a relationship would require a statistical analysis over time that included measures of law enforcement activities, such as arrests, as well as other measures that may influence law enforcement activities. It may be difficult to identify the relevant measures because crime is a sociological phenomenon influenced by a variety of factors. Local law enforcement officials we spoke with about trends in marijuana law enforcement noted several factors, other than medical marijuana laws, that are important in assessing trends. These factors included changes in general perceptions about marijuana, shifts in funding for various law enforcement activities, shifts in local law enforcement priorities from one drug to another, and changes in emphasis from drugs to other areas, such as terrorism. Demographics might also be a factor. The limited availability of data on marijuana-related law enforcement activity illustrates some of the difficulties in doing a statistically valid trend analysis.
To fully assess the relationship between the passage of states’ medical marijuana laws and law enforcement, one would need data on marijuana-related arrests or prosecutions over some period of time, preferably an extended one. Although state-by-state data on marijuana-related arrests are available from the FBI Uniform Crime Reports (UCR), at the time of our review, only data through the year 2000 were available. The available yearly data would be insufficient for analytic purposes because the medical marijuana initiatives or laws in three of the states—Oregon (November 1998), Alaska (November 1998), and Hawaii (June 2000)—passed too recently to permit a rigorous appraisal of trends in arrests and changes in them. Furthermore, although California’s law took effect during 1996, providing a longer period of data, the FBI cautions against UCR data comparisons between time periods because of variations in year-to-year reporting by agencies. Similar data limitations would occur using marijuana prosecutions as a measure of trends in law enforcement activity. Data on marijuana prosecutions are not collected or aggregated at the federal level by state. At the state level, for the four states we reviewed, the format for collecting the data or the time period covered also had limitations. For example, in California, the state maintains “disposition” data that include prosecutions but reflect only the most serious offenses, so that marijuana possession classified as a misdemeanor would not be captured if the defendant was also charged with possession of other drugs or was involved with theft or other non-misdemeanor crimes. Further, the data are grouped by the year of final disposition, not by when the offense occurred. Hawaii does not have statewide prosecution data. At the time of our review, prosecution data from Oregon’s statewide Law Enforcement Data System were only available for 1999 and 2000.
We interviewed officials from 37 selected federal, state, and local law enforcement organizations in the four states to obtain their views on the effect, if any, state medical marijuana laws had on their law enforcement activities. Officials representing 21 of the organizations we contacted indicated that medical marijuana laws had had little impact on their law enforcement activities for a variety of reasons, including very few or no encounters involving medical marijuana registry cards or claims of a medical marijuana defense. For example: The police department on one Hawaiian island had never been presented a medical marijuana registry card, and only 15 registrants lived on the island. In Alaska, a top official for the State Troopers Drug Unit had never encountered a medical marijuana registry card in support of claimed medical use. In Oregon, one district attorney reported having fewer than 10 cases since the law was passed in which the defendant presented a medical marijuana defense. In Los Angeles County, an official in the District Attorney’s office stated that only three medical marijuana cases had been filed in the previous two years in the Central Branch office, two of them involving the same person. Some of the federal law enforcement officials we interviewed indicated that the introduction of medical marijuana laws has had little impact on their operations. Senior Department of Justice officials said that the Department’s overall policy is to enforce all laws regarding controlled substances; however, they have limited resources. Further, the federal process of using a case-by-case review of potential marijuana prosecutions has not changed as a consequence of the states’ medical marijuana laws. These officials said that U.S. Attorneys have their own criteria or guidelines for which cases to prosecute that are based on the Department’s overall strategies and objectives.
Law enforcement officials in the selected states also told us that, given the range of drug issues, other illicit drug concerns, such as rampant methamphetamine abuse or large-scale marijuana production, are higher priorities than concerns about abuse of medical marijuana. In at least one instance, this emphasis was said to reflect community concerns—in Hawaii, one prosecuting attorney estimated that one-third to one-half of the murders and most hostage situations in the county involved methamphetamines. He said businesses ask why law enforcement is bothering with marijuana when there are methamphetamines to deal with. Although many of the officials with other organizations we contacted did not clearly indicate whether medical marijuana laws had, or had not, had a major impact on their activities, officials with two organizations said that medical marijuana laws had become a problem from their perspective. Specifically, an official with the Oregon State Police Drug Enforcement Section said that during 2000 and 2001, there were 14 cases in which the suspects had substantial quantities of processed or growing marijuana and were arrested for distribution of marijuana for profit, yet were able to obtain medical marijuana registry cards after their arrests. Because the same two defense attorneys represented all the suspects, the police official expressed his view that the suspects might have been referred to the same doctor, causing the official to speculate about the validity of the recommendations. In Northern California—an area where substantial amounts of marijuana are grown—officials with the Humboldt County Drug Task Force told us that they have encountered growers claiming to be caregivers for multiple medical marijuana patients. With a limit of 10 plants per person established by the Humboldt County District Attorney, growers can have hundreds of plants, officials said, and no documentation to support their medical use claims is required.
Officials from over one-third of the 37 law enforcement organizations told us that they believe the introduction of medical marijuana laws has made, or could make, it more difficult to pursue or prosecute some marijuana cases. In California, some local law enforcement officials said that their state’s medical marijuana law makes them question whether it is worth pursuing some criminal marijuana cases because of concerns about whether they can effectively prosecute (e.g., with no statutory limit on the number of marijuana plants allowed for medical use, the amount consistent with a patient’s personal medical purposes is open to interpretation). In Oregon, Hawaii, and Alaska, where specific plant limits have been established, some law enforcement officials and district attorneys said that they were less likely to pursue marijuana cases that could be argued as falling under medical use provisions. For example, one Oregon district attorney stated that because of limited resources, district attorneys might not prosecute a case in which someone is sick, has an amount of marijuana within the medical use limit, and would probably be approved for a card if he or she applied. Officers in Hawaii reported a judge’s reluctance to issue a search warrant until detectives were certain that cultivated marijuana was not being grown for medical use, or that the growth exceeded the 25-plant limit qualifying for felony charges. Less concrete, but still of concern to law enforcement officials, were the more subtle consequences attributed to the passage of state medical marijuana laws. Officials in over one-fourth of the 37 law enforcement organizations we interviewed indicated that they believe there has been a general softening in public attitude toward marijuana, or a public perception that marijuana is no longer illegal.
For example, state troopers in Alaska said that they believe the law has desensitized the public to the issue of marijuana, reflected in fewer calls to report illegal marijuana activities than they once received. Hawaiian officers stated that, in their view, Hawaii’s law may send the wrong message because people may believe that the drug is safe or legal. Several law enforcement officials in California and Oregon cited the inconsistency between federal and state law as a significant problem, particularly regarding how seized marijuana is handled. According to a California Attorney General official, state and local law enforcement officials frequently face this issue when the court or prosecutor concludes that marijuana seized during an arrest was legally possessed under California law and law enforcement is ordered to return the marijuana. Returning it puts officials in violation of federal law for dispensing a Schedule I narcotic, according to the California State Sheriffs’ Association, and not returning it puts them in direct violation of the court order. The same issue arose in Portland, Oregon, officials said, when the Portland police seized 2.5 grams of marijuana from an individual. After the state dismissed charges, the court ordered the return of the marijuana to the individual, who was a registered medical marijuana user. The city of Portland appealed the court order on the grounds that its police officers could not return the seized marijuana without violating federal law, but the Oregon court of appeals rejected this argument in Oregon v. Kama. Oregon officials said that DEA then obtained a federal court order to seize the marijuana from the Portland police department. The Department of Justice stated in comments on a draft of this report that it believes conflicts between federal and non-federal law enforcement over the handling of seized marijuana have been and will continue to be a problem.
Law enforcement officials in all four states identified areas of their medical marijuana laws that can hamper their marijuana enforcement activities because the law could be clearer or provide better control. In California, the key issues were the lack of a definable amount of marijuana for medical use and the absence of a systematic way to identify who qualifies for the exemption. In Oregon, officers were concerned about individuals registering as medical marijuana users after they have been arrested and about timely law enforcement access to the registry information. Officials with about one-fourth of the law enforcement organizations in Hawaii, California, and Oregon shared a concern about the degree of latitude given to physicians in qualifying patients for medical use. We provided a copy of a draft of this report to the Department of Justice for review and comment. In a September 27, 2002, letter, DOJ’s Acting United States Assistant Attorney General for Administration commented on the draft. DOJ’s comments are summarized below and presented in their entirety in appendix V. In its comments, DOJ noted that the report fully described the current status of the programs in the states reviewed. However, DOJ stated that the report failed to adequately address some of the serious difficulties associated with such programs. Specifically, according to DOJ, the report does not adequately address, through any considered analysis, issues related to the (1) inherent conflict between state laws permitting the use of marijuana and federal laws that do not; (2) potential for facilitating illegal trafficking; (3) impact of such laws on cooperation among federal, state, and local law enforcement; and (4) lack of data on the medicinal value of marijuana. DOJ further stated that our use of the phrase “medical marijuana” implicitly accepts a premise that is contrary to existing federal law.
In regard to the first issue—state laws that permit the use of marijuana and federal laws that do not—DOJ pointed out that the most fundamental problem with the report is that it failed to emphasize that there is no federally recognized medicinal use of marijuana and thus possession or use of this substance is a federal crime. We disagree, and believe that we have clearly described federal law on the use of marijuana. On page 1 of our report, we specifically state that federal law does not recognize any accepted medical use for marijuana and individuals remain subject to federal prosecution for marijuana possession regardless of state medical marijuana laws. In other comments about state and federal laws, DOJ also pointed out that our report failed to mention that state medical marijuana laws undermine (1) the closed system of distribution for controlled substances under the Controlled Substances Act and (2) the federal government’s obligations under international drug control treaties which, according to DOJ, prohibit the cultivation of marijuana except by persons licensed by, and under the direct supervision of, the federal government. As discussed in our report, the legal framework for our work was the Supreme Court’s opinion in United States v. Oakland Cannabis Buyers’ Cooperative, 532 U.S. 483 (2001), which held that the federal government can enforce marijuana prohibitions without regard to a medical necessity defense, even in states with medical marijuana laws. During our review, we saw no reason to expand our analysis beyond that set forth in the Supreme Court’s decision. This is especially true since the scope of our work was to examine how the selected states were implementing their medical marijuana laws—not the issues raised in DOJ’s comments.
Regarding the second issue concerning the potential for illegal trafficking, DOJ commented that our report did not mention that state medical marijuana laws are routinely being abused to facilitate traditional illegal trafficking. DOJ also highlighted the lack of guidance provided by the California state government to implement its medical marijuana law as contributing to the problem in California. Our report discusses the views of law enforcement officials representing 37 organizations in the four states—including federal officials—regarding the impact of state medical marijuana laws on their law enforcement efforts. Our report presented the views they conveyed to us. Thus, in those instances where law enforcement officials, including representatives of DEA and U.S. Attorneys’ offices, described what they considered instances of abuse or potential abuse, we included those views in our report. During our review, none of the federal officials we spoke with provided information to support a statement that abuse of medical marijuana laws was routinely occurring in any of the states, including California. DOJ further asserted that we should include information on the “underlying criminal arena,” on homicides related to marijuana cultivation, and on illegal marijuana production and diversion. These issues were beyond the scope of our work. In regard to its third comment pertaining to cooperation among federal, state, and local law enforcement officials, DOJ stated that our report did not reflect DEA’s experience—a worsening of relations between federal, state, and local law enforcement. DOJ’s comments provided specific examples of incidents involving conflicts between DEA and non-federal law enforcement officials, but these examples were not provided to us during our fieldwork.
In comments on a summary of law enforcement opinions, some of the non-federal law enforcement officials we interviewed also stated that we should discuss the conflict between state medical marijuana laws and federal laws as it related to seized marijuana. We modified our draft to include a discussion of these concerns, and have likewise included DOJ’s comment. It is also important to note, however, that contrary to DOJ’s suggestion, our report included a discussion about the concerns of law enforcement officials regarding a “softening” of the public perception about marijuana. Finally, DOJ’s point that Oregon’s medical marijuana law negatively impacts federal seized-asset sharing was an issue outside the scope of our review. In regard to the fourth issue—lack of data on the medicinal value of marijuana—DOJ stated that our discussion of the debate over the medical value of marijuana is inadequate and does not present an accurate picture. We believe our report adequately discusses that a continuing debate exists. The overall objective of our review was to examine the implementation of state medical marijuana laws, and an analysis of the scientific aspects of the medical marijuana debate was beyond the scope of our work. We do, however, footnote various studies so that readers can access additional information on the studies if they desire. Finally, we disagree with DOJ’s comment that our use of the term medical marijuana accepts a premise contrary to federal law, given that we specifically defined the term in relation to state, not federal, law. As mentioned earlier, our report specifically states that federal law does not recognize any accepted medical use for marijuana and individuals remain subject to federal prosecution for marijuana possession regardless of state medical marijuana laws.
Furthermore, the introduction to the report clearly points out that, throughout the report, we use the phrase medical marijuana to describe marijuana use that qualifies for a medical use exception under state law. DOJ also provided technical comments, which we have included in this report, where appropriate. In addition, as mentioned earlier, some of the representatives of state law enforcement organizations provided comments on the section of the report dealing with their perceptions, and we have made changes to the report, where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Minority Member, Subcommittee on Criminal Justice, Drug Policy and Human Resources, and the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member of the House Judiciary Committee; the Chairman and Ranking Minority Member of the Senate Judiciary Committee; the Attorney General; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me or John Mortin at (202) 512-8777. Key contributors are acknowledged in appendix V. Our overall objectives were to provide fact-based information on how selected states implement laws that create a medical use exception to specified state marijuana prohibitions, and to document the impact of those laws on law enforcement efforts.
Specifically, for selected states, our objectives were to provide information on (1) their approach to implementing their medical marijuana laws and how they compare, and the results of any state audits or reviews, (2) the number of patients that have had doctors recommend marijuana for medical use in each state, for what medical conditions, and by age and gender characteristics, (3) how many doctors are known to have recommended marijuana in each, and what guidance is available for making these recommendations, and (4) perceptions of federal and state law enforcement officials, and whether data are available to show how law enforcement activities have been affected by the exceptions provided by these states’ medical marijuana laws. We conducted our review between September 2001 and June 2002 in accordance with generally accepted government auditing standards. Eight states have enacted medical marijuana statutes. We selected four of those states based on the length of time the laws had been in place, the availability of data, and congressional interest. Two of the eight states, Nevada and Colorado, were not selected because their laws had not been in place for at least 6 months when our review began. Another two states, Maine and Washington, were not selected because they do not have state registries from which to obtain information on program registrants. Alaska, Oregon, and Hawaii do have state registries and had laws in place for at least 6 months. California’s law was enacted in 1996; however, the state does not have a participant registry. We included it because some local registry information was available and the requester specifically asked for information on California and Oregon. Our sample consists of these four states: California, Oregon, Alaska, and Hawaii.
We conducted on-site data collection and interviews with senior officials at state registries in Oregon and Hawaii and county offices in selected California counties, and contacted the senior official in Alaska by phone and email. We examined applicable federal and state laws and regulations and obtained and analyzed available information on program implementation, program audits, and program participation by patients and doctors. State and California county officials voluntarily supplied data on medical marijuana program registrants, and some provided data on physician participation. To protect participants’ confidentiality, officials did not provide names. We reviewed the data for reasonableness and followed up with appropriate individuals about any questions concerning the data. Given the confidentiality of the information, we could not check the data back to source documents. We also interviewed knowledgeable state and county officials to learn how the data were collected and processed and to gain a full understanding of the data. We determined the data were reliable enough for the limited purposes of this report. However, the data reflect only those who have registered with state and county programs. No estimate is available on the number of medical marijuana users who have not registered with a program. Additionally, data from the three state registries are not representative of participation in other states for which we did not collect data. Similarly, data from selected California counties reflect only each county, not other counties where we did not conduct audit work. We used a nonprobability sample to select law enforcement representatives to provide examples of the policies, procedures, experiences, and opinions of law enforcement regarding state medical marijuana laws. Our selection of these law enforcement representatives was not designed to enable us to project their responses to others, in this case, other law enforcement officials.
We requested feedback from officials at the law enforcement organizations we visited and incorporated it where appropriate. We discussed state medical marijuana laws with federal, state, and local law enforcement officials in the states of California, Hawaii, Oregon, and Alaska. On-site interviews were conducted in all states but Alaska. Federal officials in each state included representatives from the office of the U.S. Attorney and the Drug Enforcement Administration (DEA). The specific U.S. Attorney and DEA offices and officials we met with were selected by the Department of Justice as the most knowledgeable on the subject. For a statewide perspective, we interviewed representatives from the Attorney General’s office and at least one statewide association in California and Oregon representing law enforcement officials. This included representatives from the following:
Oregon Attorney General
Oregon Association of Chiefs of Police
California Attorney General
California District Attorney Association
California State Sheriff’s Association
Hawaii Attorney General
Hawaii Department of Public Safety
Alaska Attorney General
Alaska State Troopers
For a local law enforcement perspective, we interviewed district attorney and local police department officials. Selection was judgmental and based on a number of factors, including suggestions by federal or state officials; jurisdictions where trips were planned to interview state medical marijuana registry program officials or state officials; and whether large portions of the state population were covered by the department.
Local law enforcement representatives included the following:
Marion County Oregon District Attorney
Portland Oregon District Attorney
Portland Oregon Bureau of Police
Oregon State Police
Oregon Association of Chiefs of Police (Dallas Oregon Police Chief participated)
Clackamas County Oregon Sheriff’s Office
Los Angeles California District Attorney
Los Angeles California Police Department
San Bernardino California Police Department
Orange California Police Department
Eureka California Police Department/Humboldt (state) Drug Task Force
Arcata California Police Department
San Francisco California Police Department
Hawaii County Hawaii Prosecuting Attorney
Honolulu County Hawaii Prosecuting Attorney
Hawaii County Hawaii Police Department
Honolulu Hawaii Police Department
Maui Hawaii Police Department
Anchorage Alaska District Attorney
Anchorage Alaska Police Department
Juneau Alaska Police Department
We requested comments from DOJ on a draft of this report in August 2002. The comments are discussed near the end of the letter and are reprinted as appendix V. DOJ also provided technical comments on the draft of this report and we incorporated DOJ’s comments where appropriate. In addition, we requested comments from the law enforcement officials we interviewed pertaining to the section of this report dealing with their perceptions and included their comments where appropriate. Finally, we verified the information we obtained on the implementation of state medical marijuana laws with the officials we contacted during our review. Under the federal Controlled Substances Act of 1970 (CSA), marijuana is classified as a Schedule I controlled substance, a classification reserved for drugs found by the federal government to have no currently accepted medical use. 21 U.S.C. 812(c), Schedule I (c)(10). Consistent with this classification system, the CSA does not allow Schedule I drugs to be dispensed upon a prescription, unlike drugs in the less restrictive drug schedules. Id.
829. In particular, the CSA prohibits all possession, manufacture, distribution or dispensing of Schedule I substances, including marijuana, except in the context of a government- approved research project. Id. 823(f), 841(a)(1), 844. Some states have passed laws that create a medical use exception to otherwise applicable state marijuana sanctions. California was the first state to pass such a law, when, in 1996, California voters passed a ballot initiative, Proposition 215, which removed certain state criminal penalties for the medical use of marijuana. In the wake of Proposition 215, various cannabis clubs formed in California to provide marijuana to patients whose physicians had recommended such treatment. In 1998, the United States sued to enjoin one of these clubs, the Oakland Cannabis Buyers’ Cooperative, from cultivating and distributing marijuana. The United States argued that, whether or not the Cooperative’s actions were legal under California law, they violated the CSA. Following lower court proceedings, the U.S. Supreme Court granted the government’s petition for a writ of certiorari to review whether the CSA permitted the distribution of marijuana to patients who could establish “medical necessity.” United States v. Oakland Cannabis Buyers’ Cooperative, 532 U.S. 483 (2001). Although the tension between California’s Proposition 215 and the broad federal prohibition on marijuana was the backdrop for the Oakland Cannabis case, the legal issue addressed by the Supreme Court did not involve the constitutionality of either the federal or state statute. Rather, the Court confined its analysis to an interpretation of the CSA and whether there was a medical necessity defense to the Act’s marijuana prohibitions. The Court held that there was not. While observing that the CSA did not expressly abolish the defense, the Court stated that the statutory scheme left no doubt that the defense was unavailable for marijuana. 
Because marijuana appeared in Schedule I, it reflected a determination that marijuana had no currently accepted medical use for purposes of the CSA. The Court concluded that a medical necessity defense could not apply under the CSA to a drug determined to have no medical use. The Oakland Cannabis case upheld the federal government’s power to enforce federal marijuana prohibitions without regard to a claim of medical necessity. Thus, while California (and other states) exempt certain medical marijuana users and their designated caregivers from state sanctions, these individuals remain subject to federal sanctions for marijuana use. How states implemented registry requirements in the three registry states, such as which agency administers the registry or the number of staff to manage it, varied in some ways and was similar in others. Similarly, the county-based registries in California had some differences and commonalities. In Oregon, the Department of Human Services is designated to maintain the state medical marijuana registry. A staff of six is responsible for reviewing and verifying incoming applications and renewals, including following up on those that are incomplete, and for input and update of the database. Recommending physicians are sent, and must respond to, a verification letter for the application to be approved. By statute in Oregon, an applicant can be denied a card for only two reasons—submitting incomplete or false information. According to the State Public Health Officer, the scope of the Department of Human Services’ responsibility is to see that there is a written determination of the patient’s condition by a legitimate doctor, including an attending physician’s recommendation that the patient might benefit from using marijuana. He stated that the staff does not question a doctor’s recommendation for medical marijuana use. The law is clear, he said. It is up to the physician to decide what is best.
The Oregon Department of Human Services also considers the addition of new conditions to the list of those acceptable for medical use of marijuana, as authorized by Oregon’s medical marijuana statute. At the time of our review, only one of the eight petitions that had been reviewed by the Department had been approved—agitation due to Alzheimer’s disease. Most of the petitioned conditions have had a psychological basis, the State Public Health Officer said. Alaska’s statute designates the Department of Health and Social Services to manage the state medical marijuana registry. The equivalent of one half-time staff person is responsible for registry duties, including checking applications for accuracy and completeness and entering the information into the registry. The physician’s license is checked for approval to practice in Alaska, and if a caregiver is designated, the registry is checked to ensure that the caregiver is listed for only one person unless otherwise approved by the Department. Patients, physicians, and caregivers are also contacted to verify information as appropriate. If all Alaska statutory requirements are met, a medical marijuana registry identification card is issued (see fig. 4). Registry cards are denied in Alaska if the application is not complete, the patient is not otherwise qualified to be registered, or the information in the application is found to be false. Alaska’s statute allows the Department to add debilitating medical conditions to the approved list for use of marijuana. A procedure for requesting new conditions is outlined in state regulations. To date, there have been no requests to consider new conditions and none have been added. The medical marijuana law passed by the Hawaiian legislature designates the state Department of Public Safety to administer the Hawaiian medical marijuana registry. One person within Public Safety’s Narcotics Enforcement Division staffs the registry.
This person is responsible for reviewing and approving applications and renewals as complete, inputting applicant information into the database, and responding to any law enforcement inquiries. Verification procedures in Hawaii are similar to those followed in other states. See figure 4 for an example of Hawaii's registry card. Registration application requirements and procedures for the voluntary California registries we reviewed were unique to each county but shared some procedures with the programs established in the registry states. In Humboldt County, the patient must submit an application and physician recommendation to the county Department of Health and Human Services, with a $40.00 fee. During an in-person interview, applicants are photographed and their county residency documents are checked. To protect the confidentiality of doctors, after the physician recommendation has been verified, the physician portion of the application is detached and shredded. Applications are denied if the patient is not a county resident, the physician is not licensed in California, or there is no therapeutic relationship between the patient and physician. Applications for the San Francisco Medical Cannabis ID Card Program are made available through the city's Department of Public Health, where the registry is maintained, and also from clinics, doctors' offices, and medical cannabis organizations that have requested them. Applicants must bring a physician's statement form (or a form documenting that an oral recommendation was received), a medical records release form, proof of identification and residence in San Francisco, and the fee. For an applicant, the fee is $25.00, plus $25.00 for each primary caregiver, up to a maximum of three caregivers. Registry cards are valid for up to 2 years, based on a physician's recommendation.
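The San Francisco fee schedule above ($25.00 for the applicant plus $25.00 per primary caregiver, up to three caregivers) can be expressed as a simple calculation. The function below is an illustrative sketch only; the name and structure are assumptions, not part of any city system:

```python
def sf_card_fee(num_caregivers: int) -> int:
    """Illustrative fee calculation for the San Francisco Medical
    Cannabis ID Card Program: $25 for the applicant plus $25 for
    each primary caregiver, up to a maximum of three caregivers."""
    if not 0 <= num_caregivers <= 3:
        raise ValueError("the program allows at most three caregivers")
    return 25 + 25 * num_caregivers
```

Under this schedule, an applicant with two designated caregivers would pay $75.00, and one with the maximum of three caregivers would pay $100.00.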
After verifying the application documents to its satisfaction, the Department returns the entire application package to the applicant and issues cards to the applicant and caregivers. The department does not copy the materials or keep the names of registrants. Information kept on file is limited to the serial number of the cards issued, the serial number of the identification card submitted, the date the registry card was issued, and when it expires. The Mendocino County Public Health Department and the Sheriff's office jointly run the County Pre-identification Program for county residents. The Health Department accepts the applicant's Medical Marijuana Authorization form, which includes patient and caregiver information and a section for the physician to complete. The physician section requires checking "yes" or "no" to a recommendation and indicating the length of the recommendation's validity in months, in years, or for the patient's lifetime. No condition information is requested. After the physician recommendation is verified, that section is destroyed, and the approved authorization sheet is sent to the Sheriff's office. The Sheriff's office interviews registrants and caregivers, requiring that they sign a declaration as to the caregiver's role in patient care. Program identification cards with photographs of patients and caregivers are issued by the Sheriff's office. In Sonoma County, the Sonoma County Medical Association, in conjunction with the Sonoma County District Attorney, developed a voluntary process for the medical association to provide peer review of individuals' medical records and physician recommendations for medical use of marijuana. Based on the review, the patient's physician is sent a determination regarding whether the patient's case met established criteria for the patient-physician relationship, whether marijuana was recommended, and whether the condition falls within the California state code allowing medical marijuana use.
Upon receiving the determination from their doctor, patients decide whether to voluntarily submit the results to the District Attorney for distribution to the appropriate police department or to the sheriff's office. According to the medical association director, some patients will go through the process but prefer to keep the letter themselves rather than have their name in a law enforcement database. Medical marijuana laws in California, Oregon, Hawaii, and Alaska identify medical conditions or symptoms eligible for medical marijuana use, but do not specifically define the conditions or symptoms. The following descriptions are based on definitions in the Merriam-Webster Medical Dictionary and selected other sources. Alzheimer's Disease: Alzheimer's is a brain disease that usually starts in late middle or old age. It is characterized by memory loss for recent events that spreads to memories of more distant events and progresses over the course of five to ten years to a profound intellectual decline marked by impaired thought and speech and, finally, complete helplessness. Anorexia: Anorexia is a lack, or severe loss, of appetite, especially when prolonged. Many patients develop anorexia as a condition secondary to other diseases. AIDS: Acquired Immune Deficiency Syndrome is a severe disorder caused by the human immunodeficiency virus, resulting in a defect in the cells responsible for immune response that is manifested by increased susceptibility to infections and to certain rare cancers. Arthritis: Arthritis refers to the inflammation of joints, usually accompanied by pain, swelling, and stiffness. Cachexia: Cachexia is a general physical wasting and malnutrition usually associated with chronic disease, such as AIDS or cancer. Cancer: Cancer is an abnormal growth that tends to grow uncontrollably and spread to other areas of the body. It can involve any tissue of the body and can take many different forms in each body area.
Cancer is a group of more than 100 different diseases. Most cancers are named for the type of cell or the organ in which they begin. Crohn's Disease: Crohn's disease is a serious inflammatory disease of the gastrointestinal tract; it predominates in parts of the small and large intestine, causing diarrhea, abdominal pain, nausea, fever, and at times loss of appetite and subsequent weight loss. Epilepsy: Epilepsy is a disorder marked by disturbed electrical rhythms of the central nervous system and typically manifested by convulsive attacks, usually with clouding of consciousness. Glaucoma: Glaucoma is a disease of the eye marked by increased pressure within the eyeball that can result in damage to the part of the eye referred to as the blind spot and, if untreated, leads to gradual loss of vision. HIV: Human Immunodeficiency Virus is a virus that reduces the number of cells in the immune system that help the body fight infection and certain rare cancers, and that causes acquired immune deficiency syndrome (AIDS). Migraine: A migraine is a severe recurring headache, usually affecting only one side of the head, characterized by sharp pain and often accompanied by nausea, vomiting, and visual disturbances. Multiple Sclerosis: Multiple sclerosis is a disease of the central nervous system marked by patches of hardened tissue in the brain or the spinal cord, causing muscular weakness, loss of coordination, and speech and visual disturbances, and associated with partial or complete paralysis and jerking muscle tremor. Nausea: Nausea refers to stomach distress with distaste for food and an urge to vomit. Severe nausea refers to nausea of a great degree. Pain: Pain refers to an unpleasant sensation that can range from mild, localized discomfort to agony. Pain has both physical and emotional components. The physical part of pain results from nerve stimulation.
Pain may be confined to a discrete area, as in an injury, or it can be more diffuse, as in disorders characterized by pain, stiffness, and tenderness of the muscles, tendons, and joints. Severe pain refers to pain causing great discomfort or distress. Chronic pain is often described as pain that lasts six months or more and is marked by slowly progressing severity. Spasticity: Spasticity is a condition in which certain muscles are continuously contracted. This contraction causes stiffness or tightness of the muscles and may interfere with gait, movement, and speech. Symptoms may include increased muscle tone, a series of rapid muscle contractions, exaggerated deep tendon reflexes, muscle spasms, involuntary crossing of the legs, and fixed joints. The degree of spasticity varies from mild muscle stiffness to severe, painful, and uncontrollable muscle spasms. Wasting Syndrome: A condition characterized by the loss of ten percent of normal weight without obvious cause. The weight loss is largely the result of depletion of the protein in lean body mass and represents a metabolic derangement frequently seen in AIDS. Tanya Cruz, Christine Davis, Francisco Enriquez, Evan Gilman, and Monica Kelly made key contributions to this report. | A number of states have adopted laws that allow medical use of marijuana. Federal law, however, does not recognize any accepted medical use for marijuana, and individuals remain subject to federal prosecution for marijuana possession. Debate continues over the medical effectiveness of marijuana and over government policies surrounding medical use. State laws in Oregon, Alaska, Hawaii, and California allow medical use of marijuana under specified conditions. All four states require a patient to have a physician's recommendation to be eligible for medical marijuana use.
Alaska, Hawaii, and Oregon have established state-run registries for patients and caregivers to document their eligibility to engage in medical marijuana use; these states require physician documentation of a person's debilitating condition to register. Laws in these states also establish maximum allowable amounts of marijuana for medical purposes. California's law does not establish a state-run registry or set maximum allowable amounts of marijuana. Relatively few people had registered to use marijuana for medical purposes in Oregon, Hawaii, and Alaska. As of spring 2002, 2,450 people, or about 0.05 percent of the total population of the three states combined, had registered as medical marijuana users. Statewide figures for California are unknown. In Oregon, Alaska, and Hawaii, over 70 percent of registrants were over 40 years of age, and in Hawaii and Oregon, the two states where gender information is collected, 70 percent of registrants were men. Statewide figures on gender and medical conditions were not available for Alaska or California. Hawaii and Oregon were the only two states that had data on the number of physicians recommending marijuana. As of February 2002, less than 1 percent of the approximately 5,700 physicians in Hawaii and 3 percent of Oregon's approximately 12,900 physicians had recommended marijuana to their patients. Oregon was also the only state that maintained data on the number of times individual physicians recommended marijuana: as of February 2002, 62 percent of the Oregon physicians recommending marijuana had made one recommendation. Data were not readily available to measure how marijuana-related law enforcement has been affected by the introduction of medical marijuana laws. Officials from over half of the 37 selected federal, state, and local law enforcement organizations GAO interviewed in the four states said that the introduction of medical marijuana laws had not greatly affected their law enforcement activities.
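The registration figures above (2,450 registrants representing about 0.05 percent of the three states' combined population) can be back-computed as a rough consistency check. This is an illustrative calculation on the two reported numbers, not additional data from the report:

```python
registrants = 2_450   # medical marijuana registrants as of spring 2002
share = 0.05 / 100    # about 0.05 percent, expressed as a fraction

# Back-compute the combined population of Oregon, Hawaii, and Alaska
# implied by the two reported figures.
implied_population = registrants / share
print(f"implied combined population: {implied_population:,.0f}")
```

The implied figure is on the order of 4.9 million people, broadly in line with the combined populations of the three states at the time (the reported 0.05 percent is itself rounded).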
USPS’s mail processing network consists of multiple facilities with different functions, as shown in a simplified version of this complex network in figure 1. USPS can receive mail into its processing network from different sources such as mail carriers, post offices, and mailing companies. Once USPS receives mail from the public and commercial entities, it processes and distributes the mail on automated equipment that cancels stamps and sorts barcoded mail. Once mail distribution has been completed by other operations, the mail is transported between processing and distribution facilities. Depending on the mail's shape and classification, USPS processes it through different types of facilities that perform various functions. While mail is processed mainly through these facilities, mail processing operations also occur in other facilities, such as annexes, which are temporary facilities used as overflow for mail processing. In its June 2008 Network Plan, USPS determined that it would reexamine its mail processing network on an ongoing basis given changes in mail volume and outlined several initiatives to improve management of its mail processing operations, retail operations, and workforce to increase efficiency and reduce costs. With regard to its mail processing operations specifically, USPS identified three major initiatives to improve efficiency: (1) closing Airport Mail Center (AMC) operations, (2) transforming the Bulk Mail Center (BMC) network, and (3) consolidating AMP operations. USPS’s Network Plan also included criteria for evaluating decisions, the three most important of which were cost, service, and capacity. In September 2008, we reported that USPS took steps to address our prior recommendations to strengthen planning and accountability for its network initiatives, which was important as USPS began implementing them.
However, we also found limited information on performance targets or on the costs and savings attributable to USPS's various mail processing network initiatives. In the case of consolidating AMP operations, USPS revised its guidance on the process for AMP consolidations in March 2008. The revised guidance included key steps and the time frames associated with them, as well as criteria to consider when making a decision to consolidate operations. The AMP Handbook, however, does not provide guidance regarding how to identify potential opportunities for consolidation. In January 2010, the USPS OIG recommended that the Vice President of Network Operations develop and document specific criteria to identify consolidation opportunities, and USPS management agreed with this recommendation. In December 2009, USPS also updated the AMP Communication Plan, which supplements the AMP guidelines and provides specific guidance on communicating with stakeholders. USPS has realigned parts of its mail processing network and continues to seek additional opportunities to achieve its goal of creating an efficient and flexible network. For fiscal year 2009, USPS realized cost savings of almost $30 million from eliminating all AMC operating functions and closing nine of these facilities, and from reorganizing the functions of the BMCs into Network Distribution Centers (NDC). Table 1 shows the status of USPS's three major processing network initiatives intended to lower costs and achieve savings by reducing excess capacity and fuel consumption. Specific steps taken on the three major mail processing network initiatives are as follows: Elimination of AMC operating functions. Of its three major network initiatives, USPS has taken the most action by eliminating the AMC function and closing 9 AMC facilities. In the past decade, USPS has closed 68 of 80 AMC facilities.
Located on airport property, AMC facilities primarily processed mail to expedite its transfer to and from up to 55 different commercial passenger airlines. Over time, USPS reduced the number of commercial airlines transporting mail from 55 to 7 and, from 2001 to 2007, the volume of mail transported by commercial airlines decreased by over 87 percent. At the same time, USPS contracted with air freight carriers to transport most of the mail requiring air transfer. In response, many AMC facilities made use of the available processing space by taking on additional processing functions typically handled by local processing and distribution centers (P&DCs), such as carrier and retail operations. In 2006, in an effort to eliminate redundancy and reduce costs, USPS began transferring functions performed at AMCs to nearby P&DCs or outsourcing these operations and, in September 2008, we reported that USPS estimated a targeted total savings of $117 million for closing these AMC facilities. Since our 2008 report, USPS has closed 9 AMC facilities, avoiding an estimated $12.2 million in costs. It has also revised to $113 million the total cost savings resulting from eliminating the AMC function and closing facilities from fiscal year 2007 to fiscal year 2009. USPS officials told us that they plan to reclassify the 12 remaining facilities and determine whether some of them can be closed. Reorganization of BMC functions into the NDC network. USPS has reorganized the functions of its 21 BMCs into an NDC network with expanded functions that more efficiently use long-haul transportation and better align work hours with workload, according to the 2009 Updated Network Plan. Before the reorganization, all BMCs performed the same functions of processing local, destinating, and originating mail (e.g., Standard Mail®, Periodicals, and Package Services).
In fiscal year 2009, USPS reorganized the BMC network, including renaming the facilities as NDCs to reflect the type of operations occurring at them, according to USPS officials. The NDC network is divided into three tiers of facilities with different distribution and processing roles: Tier 1 NDC facilities process local and destinating mail; Tier 2 facilities process local, destinating, and originating mail; and Tier 3 facilities handle Tier 2 functions and consolidate less-than-truckload volumes of mail from Tier 2 facilities. As a result of the reorganization, USPS reduced the number of facilities processing originating mail from 21 to 10; the remaining 11 facilities continue to process local and destinating mail. According to officials, USPS completed the reorganization of the BMC functions into NDCs in March 2010 and plans to further integrate other mail processing operations into the NDC network. USPS realized cost savings of about $17.7 million for fiscal year 2009, with projected cost savings of about $233.8 million from additional reorganization in fiscal years 2010 and 2011. According to officials, USPS also plans to integrate its Surface Transfer Center (STC) functions into the NDC network to further eliminate redundancy and move all mail traveling the same route through the same facilities. USPS officials told us they are currently identifying and assessing opportunities for consolidating STC functions into the NDC network; however, USPS has not established a definitive timeline for when the STC functions are to be integrated into the NDC network because such integration depends on future mail volumes, space requirements and space availability, and necessary equipment. Consolidation of AMP operations and facilities. As shown in table 1, USPS has continued to initiate, review, and make decisions on AMP proposals to consolidate its operations and facilities.
AMP proposals are intended to reduce costs and increase efficiency by making better use of excess capacity or underused resources, primarily at USPS's P&DC facilities. An AMP proposal consists of consolidating originating operations, destinating operations, or both, from one mail processing facility, which downsizes its mail processing operations, to one or more nearby facilities, which gain the processing operations. While local and regional USPS management is responsible for conducting a feasibility study and developing an AMP proposal, USPS headquarters approves or disapproves the AMP proposal. Upon approval from USPS headquarters, local and regional USPS management implements the consolidation of processing operations identified in the AMP proposal. According to USPS officials, the AMP initiative is an ongoing effort to identify opportunities to achieve efficiencies and, as such, USPS has not developed a program target for annual savings from AMP consolidations. As of March 2010, USPS was studying or reviewing 24 additional AMP proposals. (See app. I for a list of the AMP proposals under review.) On the basis of our analysis of 32 AMP proposals that were implemented, approved, or not approved since October 2008, USPS has followed the key steps in the AMP process. (See app. I for a list of the AMP proposals we reviewed.) As shown in figure 2, USPS has developed key steps for the AMP process and has established an overall goal of making an AMP decision within 5 months of a study's initiation. Our analysis found that USPS completed each step of the AMP process. For the 27 AMP proposals we analyzed, it took about 6 months on average to complete the review process from initiating an AMP proposal to making a decision. As shown in figure 3, 4 of the 27 AMP proposals we reviewed were completed in less than 5 months, while others took longer because of various factors, such as resolving conflicting stakeholder interests and staffing issues.
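USPS's 5-month decision target described above can be checked mechanically against study dates. The sketch below uses whole-calendar-month arithmetic and entirely hypothetical proposal names and dates for illustration; it is not based on actual AMP records:

```python
from datetime import date

TARGET_MONTHS = 5  # USPS goal: AMP decision within 5 months of study initiation

def months_elapsed(initiated: date, decided: date) -> int:
    """Whole calendar months between study initiation and final decision."""
    return (decided.year - initiated.year) * 12 + (decided.month - initiated.month)

# Hypothetical proposals, for illustration only.
proposals = [
    ("Example P&DC A", date(2009, 1, 15), date(2009, 5, 30)),  # 4 months
    ("Example P&DC B", date(2009, 3, 1), date(2009, 10, 12)),  # 7 months
]

for name, initiated, decided in proposals:
    elapsed = months_elapsed(initiated, decided)
    status = "within target" if elapsed <= TARGET_MONTHS else "over target"
    print(f"{name}: {elapsed} months ({status})")
```

Whole-month arithmetic is a simplification; a review of actual records would need a convention for partial months, which the report does not specify.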
According to USPS officials, the time frames are goals to ensure the process moves forward, but USPS will take the time necessary to ensure that any issues that arise from an AMP proposal are resolved and appropriate decisions are made, even if doing so means going beyond the targeted 5-month time frame. For example, while USPS headquarters completed its review in June 2009 of consolidating the Dallas, TX, P&DC into the North Texas P&DC, the AMP proposal was not approved until December 2009 partially because the OIG was concurrently reviewing the AMP proposal in response to a congressional request. Many of the interim steps in the process conducted by the local and regional management also have time frames associated with them, such as studying the feasibility of an AMP proposal within a 2-month period. However, according to officials, USPS does not centrally track all the dates associated with the interim steps in the process because reviewing AMP proposals is an ongoing, iterative process with some steps occurring concurrently among local and regional USPS management and headquarters. An important part of the process is notifying and communicating with stakeholders, and USPS completed these steps as called for in its guidance. USPS is required to notify stakeholders, including employees, employee organizations, appropriate individuals at various levels of government, local mailers, community organizations, and the local media, as to when a feasibility study is initiated and when a final decision is made on the AMP proposal. According to its guidance, USPS must also provide stakeholders with available information about any service changes that may be affected from the proposed AMP consolidation and give ample opportunities for stakeholders to provide input on the AMP proposals. USPS is also required to conduct a public meeting after the local USPS management completes and forwards the feasibility study to regional and headquarters management for their review. 
We reported in 2008 that USPS had improved communication with stakeholders with regard to AMP proposals. In our analysis, we found that USPS consistently notified stakeholders when a feasibility study was initiated and when a final decision was made; we also found that USPS consistently held public meetings and summarized public input for each AMP proposal we reviewed. Representatives of the postal unions we spoke with also commented that USPS has been following the process and communicating with them, and that local union representatives generally attended the public meetings and were involved with the process. The last step in the AMP process is completion of two postimplementation reviews to assess the results of the consolidation. Of the 32 AMP proposals we reviewed, USPS has completed postimplementation reviews for 2 and is in the process of completing 5 more. The postimplementation reviews are intended to evaluate and measure the actual results of consolidation decisions, including realized savings in work hours, transportation, maintenance, and facility costs. In the first postimplementation review of the consolidation of the Kansas City P&DC in Kansas into the Kansas City P&DC in Missouri, USPS identified cost savings of about $22.3 million after the consolidation, $13 million more than its original projected savings of $9.3 million. USPS officials commented that several factors unrelated to the consolidation, such as the use of in-house maintenance employees rather than outsourced labor for facility projects and incentives for retirement in the fall of 2009, contributed to the larger than expected savings. Similarly, USPS identified cost savings of about $6.3 million in the first postimplementation review of the consolidation of the Canton P&DC with the Akron P&DC in Ohio, $4.1 million more than its original projected savings of $2.2 million. According to USPS officials, the original projections were based on expected savings resulting from the consolidation.
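The two postimplementation results above can be summarized as projected-versus-realized savings. The dollar figures come from the report; the data layout and labels below are an illustrative sketch, not a USPS data format:

```python
# Projected vs. realized annual savings, in millions of dollars, for the
# two completed postimplementation reviews discussed above.
reviews = {
    "Kansas City P&DC (KS) into Kansas City P&DC (MO)": (9.3, 22.3),
    "Canton P&DC into Akron P&DC (OH)": (2.2, 6.3),
}

for name, (projected, realized) in reviews.items():
    variance = realized - projected
    print(f"{name}: ${realized}M realized, ${variance:.1f}M above projection")
```

In both cases the variance is positive ($13.0 million and $4.1 million), reflecting savings beyond the original projections, which USPS attributed partly to factors unrelated to the consolidations.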
For both postimplementation reviews, additional savings have been realized in part because mail volume has continued to decline resulting in further reductions in work hours and transportation costs. Based on our analysis of 32 AMP proposals that USPS had decided on since October 2008, USPS consistently considered the criteria in its guidance when making its decisions. According to the AMP guidance, USPS must consider the following four criteria: impacts on the service standards for all classes of mail, issues important to local customers, impacts to USPS staffing, and savings and costs associated with moving mail processing operations. We also found that USPS has standardized its AMP data sources and analytical methodologies to achieve more consistent analysis when evaluating the criteria during the decision-making process. In addition, the OIG independently reviews data and the criteria USPS has used to validate the business cases for some AMP proposals. For instance, the OIG validated the business case for some of the AMP proposals we reviewed, including the consolidations of operations at Dallas P&DC into North Texas P&DC in Texas and New Castle processing and distribution facility (P&DF) into Pittsburgh P&DC in Pennsylvania. Additionally, the OIG concurred with the business decisions for consolidating mail processing operations at the Canton P&DC with the Akron P&DC in Ohio and Lakeland P&DC and Manasota P&DC with the Tampa P&DC in Florida. While USPS consistently evaluated these criteria, a stakeholder we spoke with commented that USPS does not provide a complete set of data it uses to make its decisions. Although USPS is not required to provide complete data that are used to consider AMP proposals under the AMP guidance, the stakeholder believed that more data transparency is needed to permit validation of USPS’s AMP decisions. 
According to USPS officials and USPS guidance, AMP proposals contain commercially sensitive information, and public disclosure of the information could cause competitive harm to USPS. Accordingly, sensitive data contained in AMP proposals are redacted. For the proposals we reviewed, we found that USPS assessed the impact that a consolidation would have on the service standards for all classes of mail and considered issues important to local customers. Two of the AMP proposals we reviewed (the consolidation of operations at the Mansfield P&DF into the Akron P&DC and at the Zanesville Post Office into the Columbus P&DC, both in Ohio) were not approved due to a potential downgrade in delivery services for First-Class Mail®, despite potential cost savings from consolidating those facilities. In other instances, an AMP proposal was approved even though a downgrade in service for a particular class of mail, such as Package Services, was identified, because an upgrade in delivery services for other mail classes, such as First-Class Mail®, was also identified. According to USPS officials, it is the overall net effect of changes in delivery services that is considered in the decision-making process. In the case of considering issues important to local customers, USPS assessed whether the AMP proposal would affect customer service, such as through changes in mail pickup times, hours for business mail acceptance, and hours of retail operations. In many of the AMP proposals we reviewed, USPS forecasted that there would be no adverse impact on local customer service. USPS also forecasted that many of the retail hours at bulk mail entry units covered in the AMP proposals would not change. The impact that an AMP proposal would have on USPS staffing and the estimated savings and costs associated with the consolidation are also important criteria in the AMP decision process.
When considering the impact on staffing, USPS examined and estimated the potential number of positions that would be reduced or transferred to gaining facilities. This is a reduction in the number of positions allotted to a facility and not necessarily a loss of employees. Employees who are affected by the consolidation are given positions in the gaining facility or other facilities, in accordance with their respective collective bargaining agreements. USPS estimated a total reduction of 1,263 allotted positions for the AMPs we reviewed. In estimating potential costs and savings, USPS assessed work hour savings from staffing changes, savings associated with transportation and maintenance, and savings associated with space and leasing facilities. USPS also examined one-time costs associated with relocating staff, moving mail processing equipment, and changing facilities. If overall estimated cost savings were not identified, then the AMP proposal would not proceed. For example, while cost savings were identified in the AMP proposal to consolidate operations at the Hattiesburg Customer Service Mail Processing Center with the Gulfport P&DF in Mississippi, the proposal was not approved because one-time costs associated with moving mail processing equipment had not been identified, and thus the estimated total annual savings were insufficient. USPS estimated total annualized cost savings of about $98.5 million for the 29 approved and implemented AMP proposals we reviewed. In 2005, we reported that because USPS did not have criteria to consider, or a process to follow, when making mail processing consolidation decisions, it was not clear whether the decisions would be made in a manner that is fair to all stakeholders or that is efficient and effective.
As such, we recommended that USPS establish a set of criteria for evaluating consolidation decisions, develop a process for implementing these decisions that includes evaluating and measuring the results, and develop a mechanism for informing stakeholders as decisions are made. In 2008, we reported that USPS had made progress on implementing our prior recommendations: USPS established criteria for evaluating consolidation decisions; developed a process for evaluating and measuring the results of its AMP decisions; modified its AMP Communication Plan to improve public notification, engagement, and transparency; and clarified its process for addressing public comments. As stated earlier, we found that USPS followed its AMP process and consistently applied its criteria for evaluating the AMP proposals we reviewed. We provided a draft of this report to USPS for official review and comment. In response, USPS provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Postmaster General, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Table 2 below lists Area Mail Processing (AMP) proposals under review by USPS, while Table 3 lists AMP proposals that we reviewed. In addition to the individual named above, Maria Edelstein, Assistant Director; Colin Fallon; Brandon Haller; Jennifer Kim; Jaclyn Nelson; and Crystal Wesco made key contributions to this report. | Deteriorating financial conditions and declining mail volume have reinforced the need for the U.S.
Postal Service (USPS) to increase operational efficiency and reduce expenses in its mail processing network. This network consists of interdependent functions in nearly 600 facilities. USPS developed several initiatives to reduce costs and increase efficiency; however, moving forward on some initiatives has been challenging because of the complexities involved in consolidating operations. In response to a conference report directive, GAO assessed (1) the overall status and results of USPS's efforts to realign its mail processing network and (2) the extent to which USPS has consistently followed its guidance and applied its criteria in reviewing Area Mail Processing (AMP) proposals for consolidation since the beginning of fiscal year 2009. To conduct this assessment, GAO reviewed USPS's Network Plan, area mail processing consolidation guidance and proposals, and other documents; compared USPS's actions related to consolidation of area mail processing facilities with its guidance; and interviewed officials from USPS, the USPS Office of Inspector General, and employee organizations. GAO provided USPS with a draft of this report for comment. In response, USPS provided technical comments that were incorporated where appropriate. USPS has realigned parts of its mail processing network since the beginning of fiscal year 2009 and continues to seek additional opportunities to achieve its goal of creating an efficient and flexible network and to realize cost savings.
Specifically, USPS: (1) eliminated all functions of the Airport Mail Centers, closed 9 of these facilities, and now uses the remaining 12 for other purposes, resulting in realized cost savings of about $12.2 million in fiscal year 2009; (2) reorganized the functions of the 21 Bulk Mail Centers into newly developed Network Distribution Centers, resulting in realized cost savings of about $17.7 million in fiscal year 2009; and (3) implemented 23 proposals to consolidate AMP operations and facilities and approved another 6 AMP consolidation proposals. USPS estimated annual cost savings of about $98.5 million for the 29 approved and implemented AMP proposals. Additionally, USPS officials stated that they plan to integrate the Surface Transfer Center functions into the Network Distribution Center network to further eliminate redundancy in transporting mail. USPS has developed specific program targets for the ongoing reorganization of the Network Distribution Centers and estimated cost savings of about $233.8 million for fiscal years 2010 and 2011 from reductions in work hours and transportation costs. On the basis of GAO's analysis of 32 AMP proposals that were implemented, approved, or not approved since the beginning of fiscal year 2009, USPS has followed its realignment guidance by completing each step of the process and consistently applying its criteria in its reviews. GAO's analysis found that it took about 6 months on average--a month more than USPS's target of 5 months--to complete the review process from initiating an AMP proposal to making a decision. USPS officials noted the importance of AMP decisions and the need to sometimes take longer than the guidance suggests to ensure the correct decision. GAO also found that USPS consistently notified stakeholders when key steps of the AMP process were completed, such as when an AMP proposal was initiated or public meetings were held.
For each of the AMP proposals that GAO reviewed, USPS also consistently evaluated its four criteria related to AMP consolidations: (1) impacts on the service standards for all classes of mail, (2) issues important to local customers, (3) impacts on USPS staffing, and (4) savings and costs associated with moving mail processing operations.
OMB’s 2000 standards replaced the 1990 standards for metropolitan areas. Some of the terms have changed, and appendix II provides an explanation of the key terms used in the new standards. OMB’s 2000 standards provide for the identification of the following statistical areas in the United States and Puerto Rico: metropolitan statistical area (which can be further divided into metropolitan divisions); micropolitan statistical area; combined statistical area; New England city and town area (including New England city and town area divisions); and combined New England city and town area. Determining the boundaries of metropolitan and micropolitan statistical areas starts with identifying the county(ies) or equivalent entity(ies) that contain the Census Bureau-defined urbanized area or urban cluster. This county is referred to as the “central county.” A metropolitan statistical area has at least one urbanized area of 50,000 or more people, and a micropolitan statistical area has at least one urban cluster of at least 10,000 but less than 50,000 people. In addition, outlying counties are included in metropolitan and micropolitan statistical areas based on commuting ties with the central county. Some metropolitan statistical areas are very large, contain several counties, and cross state lines. If specified criteria are met, large metropolitan statistical areas can be subdivided into smaller geographic units called metropolitan divisions. Having subdivisions within a larger metropolitan statistical area provides data users with a set of statistics at a lower level of geography. Combined statistical areas are groupings of adjacent metropolitan and micropolitan statistical areas. These combined statistical areas have social and economic ties as measured by commuting, but at lower levels than those found among counties within metropolitan and micropolitan statistical areas. Combined statistical areas provide data users with a broader perspective of how adjacent metropolitan and micropolitan areas are related.
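The population thresholds described above amount to a simple classification rule. The sketch below is an illustration only; the function name and sample populations are hypothetical, not taken from OMB's standards documents:

```python
def classify_statistical_area(urban_population):
    """Classify a central county's area by the population of its largest
    Census Bureau-defined urbanized area or urban cluster, using the
    2000 standards' thresholds described above."""
    if urban_population >= 50_000:
        return "metropolitan statistical area"   # urbanized area of 50,000 or more
    if urban_population >= 10_000:
        return "micropolitan statistical area"   # urban cluster of 10,000 to 49,999
    return "not classified"

# Hypothetical urban-area populations:
print(classify_statistical_area(62_000))  # metropolitan statistical area
print(classify_statistical_area(14_500))  # micropolitan statistical area
print(classify_statistical_area(6_000))   # not classified
```

Note that the rule classifies only the central county; outlying counties join an area through the commuting ties discussed below.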
In view of the importance of cities and towns in New England, the 2000 standards also provide for a set of geographic areas that are defined using cities and towns in the six New England states. New England city and town areas use the same criteria as metropolitan and micropolitan statistical areas, metropolitan divisions, and combined statistical areas. The six states included in the New England region are Maine, Massachusetts, Rhode Island, Connecticut, New Hampshire, and Vermont. To address the first and second objectives, we reviewed Federal Register notices concerning the 2000 standards, compared the 2000 standards to the 1990 standards, and discussed the process and changes with OMB and Census Bureau officials. To address the third objective, we compared counties in metropolitan statistical areas as of June 30, 1999, using the 1990 standards, to counties in metropolitan and micropolitan statistical areas as of June 6, 2003, and February 18, 2004, using the 2000 standards. As part of that analysis, we identified the status—metropolitan, micropolitan, or not classified—for each of the counties in those listings. To provide detailed information on the changes that have occurred to counties’ metropolitan and micropolitan statistical area status from 1999 to 2003, we obtained Census Bureau maps for Michigan, New Mexico, and New York. We selected Michigan and New York because you asked us to look at changes that occurred as a result of the new standards in those two states; we selected New Mexico because it illustrates additional statistical coverage under the 2000 standards. We assessed the data reliability of the Census Bureau’s June 30, 1999, June 6, 2003, and February 18, 2004, county listings of metropolitan and micropolitan statistical areas by electronic testing for obvious errors in accuracy and completeness and by reviewing relevant documents, such as OMB bulletins, for those counties listed in metropolitan and micropolitan statistical areas. 
We determined that the data were sufficiently reliable for the purpose of our review. To address the fourth objective, we discussed with your staff which federal programs to review. The programs we agreed with your offices to review were (1) HUD’s Community Development Block Grant (CDBG) Program; (2) OPM’s Locality Pay Program for General Schedule Employees; (3) HHS’s Medicare payment system for hospital inpatients; and (4) HHS’s Ryan White CARE Act Program. To determine how the 2000 standards affected these programs, we reviewed relevant documentation, such as analyses performed by the program offices on the impact of the new standards; attended public hearings; and interviewed agency officials overseeing these programs. We also compiled a list of federal programs that specify the use of metropolitan statistical areas for determining program funding, and the potential effect that the 2000 standards may have on funding for those programs. We compiled this list by performing a search of statutes that referred to “metropolitan statistical area(s)” or “MSA(s).” Our search was limited to the United States Code and was not intended to serve as an exhaustive list of federal programs that refer to metropolitan statistical areas. OMB’s process for developing the 2000 standards took more than a decade and involved extensive consultation with experts inside and outside government. (See fig. 1.) The process for developing the 2000 standards began even before OMB had published the 1990 standards. In 1989, OMB asked the Census Bureau to examine the concepts underlying the identification and definitions of metropolitan areas. The goal was to evaluate alternative approaches for identifying metropolitan areas that would take into account the variation in geographic settlement patterns across the United States. These patterns range from heavy concentrations of people to vast sparsely populated regions.
The Bureau formed two working groups to research alternative approaches. The working groups found that assigning all territory outside of a metropolitan area to a single residual, nonmetropolitan area was no longer satisfactory. Based on these conclusions, the Census Bureau established agreements with four university working groups in 1991; each was to develop its own alternative for defining both metropolitan and nonmetropolitan areas. In 1995, these four alternatives were presented at a public conference hosted by the Council of Professional Associations on Federal Statistics (COPAFS), a group of professional associations, businesses, research institutes, and others interested in federal statistics that was established to promote an open dialogue with the federal statistical agencies. As the process for revising the standards moved along, there was also congressional interest in its progress. In July 1997, OMB and Census Bureau officials testified before the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight, on the effort. In the fall of 1998, OMB chartered the interagency Metropolitan Area Standards Review Committee (MASRC), a group representing various statistical agencies within the federal government, and charged it with recommending revisions to the 1990 standards. In a December 21, 1998, Federal Register notice, OMB requested comments from the public on (1) the suitability of the current standards, (2) principles that should govern any proposed revisions to the standards, (3) reactions to the four approaches outlined in the notice, and (4) proposals for other ways to define metropolitan and nonmetropolitan areas. The issues posed in this Federal Register notice were also discussed at a second public conference hosted by COPAFS in January 1999. 
In addition, these issues were discussed extensively in two subsequent Federal Register notices, which solicited public comment on MASRC’s recommendations. On December 27, 2000, after considering public comments on MASRC recommendations, OMB released new standards for determining the boundaries of metropolitan and micropolitan statistical areas. In that notice of decision, OMB stated that it would apply the standards to geographic areas within the United States based on Census 2000 data and publish the results in 2003. On June 6, 2003, OMB published newly configured geographic areas. OMB also noted in the 2000 standards that it would designate new areas based on population updates from the Census Bureau, the first of which occurred on February 18, 2004. The 2000 standards differ from the 1990 standards in many ways, and the Census Bureau and OMB have stated that the new standards are simpler and more transparent than the previous standards. The 2000 standards define certain important concepts differently from those used in the 1990 standards. (See table 1.) One of the most notable differences between the 1990 and 2000 standards is the introduction of a new designation for less populated areas—micropolitan statistical areas. These areas are composed of a central county or counties with a Census Bureau-defined urban cluster of 10,000 to 49,999 population, plus adjacent outlying counties having a high degree of economic and social integration with the central county as measured through commuting. The 2000 standards also replaced the term primary metropolitan statistical area (PMSA) with the term metropolitan division. OMB told us that this change in terminology was a concern for many areas renamed metropolitan divisions. Long Island, New York, for example, expressed concern that the change in its designation, from part of a PMSA to a metropolitan division, might affect the distribution of federal funds to the area under programs that use the OMB standards.
Although the criteria for PMSAs and metropolitan divisions are not identical because the population threshold increased, Long Island qualified as a metropolitan division, leaving its statistical status effectively unchanged. OMB provided additional guidance in its February 18, 2004, population update stating that if federal agencies were using PMSAs for program, administrative, and fund allocation purposes, the agencies should now consider using the metropolitan division definition, the comparable geographic unit established by the 2000 standards. Another change in the 2000 standards was in how an outlying county could link to a central county (or counties) and become part of a metropolitan or micropolitan statistical area. This process has been streamlined. Under the 2000 standards, an outlying county can be linked to a central county (or counties) if at least 25 percent of employed residents from the outlying county work in the central county (or counties) or at least 25 percent of the employment in the outlying county is accounted for by workers residing in the central county (or counties). This differs from the 1990 standards in which there were six scenarios for linking outlying counties to central counties, each of which included differing requirements for measures of settlement structure, such as population density and the percentage of the population that was urban. Two of the six scenarios also required a minimum amount of commuting of 15 percent. Thus, because the 2000 standards raised that amount to 25 percent and eliminated the other requirements of the 1990 standards, counties that had qualified under one of these two scenarios and had a minimum amount of commuting from 15 to 25 percent would no longer qualify as outlying counties. 
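The 2000 standards' linkage test for outlying counties can be sketched in a few lines. This is a minimal illustration; the commuting figures below are hypothetical, and actual determinations rest on Census Bureau commuting data:

```python
def qualifies_as_outlying(residents_commuting_to_central, employed_residents,
                          jobs_held_by_central_residents, total_jobs,
                          threshold=0.25):
    """Under the 2000 standards, an outlying county links to the central
    county (or counties) if at least 25 percent of its employed residents
    work there, OR at least 25 percent of its employment is accounted for
    by workers residing there."""
    out_share = residents_commuting_to_central / employed_residents
    in_share = jobs_held_by_central_residents / total_jobs
    return out_share >= threshold or in_share >= threshold

# A hypothetical county where 18 percent of employed residents commute to the
# central county and 5 percent of its jobs are held by central-county
# residents: it could have qualified under the 1990 standards' 15 percent
# commuting scenarios (with the settlement-structure measures) but fails
# the 2000 standards' 25 percent test.
print(qualifies_as_outlying(1_800, 10_000, 250, 5_000))  # False
print(qualifies_as_outlying(3_000, 10_000, 250, 5_000))  # True (30 percent outbound)
```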
When OMB issued its June 6, 2003, newly configured statistical areas using the 2000 standards for the United States, five counties that had been part of metropolitan statistical areas no longer qualified for this designation. An additional 41 counties that had been part of metropolitan statistical areas are now components of micropolitan statistical areas. For a list of these counties, see appendix IV. The 2000 standards also increased by over 900 the number of counties across the country that attained status in statistical areas. (See figs. 2 and 3.) Using the 1990 standards as of the June 1999 population update, there were 847 counties within metropolitan statistical areas. Under the 2000 standards as of the February 2004 new area update, the number of counties within metropolitan statistical areas increased by 243 to 1,090, and an additional 690 counties were classified as being within micropolitan statistical areas. Appendix V provides the status—metropolitan, micropolitan, or not classified—of the counties listed in the February 2004 population update as of 1999 and 2003. These increases mean that the vast majority of Americans live in statistically recognized areas. Approximately 83 percent of the nation’s population lives in metropolitan statistical areas, and 10 percent lives in micropolitan statistical areas. The 2000 standards resulted in statistical area changes in every state. Of the three states we examined, the change was greatest in New Mexico, where the number of counties in statistical areas increased by approximately 250 percent. State maps highlighting changes in county status from 1999 to 2003 for New Mexico, New York, and Michigan are in appendix VI. See table 2 for the increase in the number of counties included in statistical areas from 1999 to 2003 for New Mexico, New York, and Michigan. Some federal agencies are required by statute to use metropolitan statistical areas to allocate program funds and implement other aspects of their programs.
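The county counts in this paragraph can be cross-checked with a few lines of arithmetic:

```python
# County counts as reported above.
metro_counties_1999 = 847    # metropolitan counties, 1990 standards (June 1999 update)
metro_counties_2004 = 1_090  # metropolitan counties, 2000 standards (Feb. 2004 update)
micro_counties_2004 = 690    # micropolitan counties, 2000 standards

metro_increase = metro_counties_2004 - metro_counties_1999
newly_recognized = metro_increase + micro_counties_2004
print(metro_increase)    # 243, matching the reported increase
print(newly_recognized)  # 933, the "over 900" counties that attained status
```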
In other instances, federal agencies are not required by statute to use metropolitan statistical areas, but elect to do so. While OMB does not take into account or attempt to anticipate any of these nonstatistical uses in defining metropolitan or micropolitan statistical areas, it does advise agencies that use metropolitan areas for such nonstatistical purposes that the standards are designed for statistical purposes only and that changes to the standards may affect the implementation of their programs. More specifically, OMB urges agencies, organizations, and policymakers to carefully review program goals to ensure that appropriate geographic entities are used to determine eligibility and allocation of federal funds for these programs. We selected four federal programs that use metropolitan statistical areas to illustrate the impact of the change in standards. Those four programs are HUD’s CDBG Program; OPM’s Locality Pay Program for General Schedule Employees; HHS’s Medicare hospital reimbursement system; and HHS’s Ryan White CARE Act Program. As a result of the changes in the standards in 2000, eligibility under CDBG has already expanded; eligibility under the locality pay program is expected to expand in January 2005; HHS anticipates under its proposal that hospital payments for fiscal year 2005 will be affected, but with no net increase in funding; and eligibility under the Ryan White CARE Act Program is unaffected by the standards because the boundaries for providing services are set by statute. HUD’s CDBG Program provides cities and counties with funds they can use to revitalize neighborhoods, rehabilitate housing, expand economic opportunities, and/or improve community facilities and services to benefit low- and moderate-income persons. Under current law, HUD uses designated metropolitan statistical areas, in part, to determine the eligibility of city and county governments to receive CDBG formula entitlement funds.
According to HUD officials, OMB’s 2000 standards resulted in new eligibility for 60 cities, which would provide them under the CDBG formula with a total of $36.2 million in CDBG funds. In addition, 7 cities and 5 counties that became eligible under the 2000 standards would also have become eligible under the previous 1990 standards. According to HUD, because the amount of money appropriated for the CDBG Program did not take expanded grantee eligibility into account, under the CDBG share formula all existing entitlement grantees saw a 1.2 percent reduction in their CDBG funding. In December 2003 interim regulations, HUD addressed the effect of the new OMB standards on CDBG entitlement grantee eligibility. Specifically, the replacement of the term central city by principal city had no effect on the status of then-eligible communities. HUD officials said that although principal cities within newly created micropolitan statistical areas would not qualify for CDBG funding, no city that had established eligibility for at least 2 years would lose its eligibility as a result of the new standards, because current law allows such cities to retain their eligibility once they obtain it. Current law also allows urban counties that were eligible by fiscal year 1999 to retain their eligibility in future years. Under the Federal Employees Pay Comparability Act of 1990 (FEPCA), compensation for federal employees is adjusted depending on the local pay area where they work. FEPCA requires that federal pay rates be comparable with nonfederal pay rates for the same level of work within the same local pay area and that any pay disparities between federal and nonfederal employees be eliminated. To accomplish this, the Federal Salary Council (FSC) reviews and makes recommendations to the President’s Pay Agent (which consists of the Secretary of Labor and the Directors of OMB and OPM) on the locality pay program, including the establishment or modification of locality pay area boundaries.
There are 31 locality pay areas, for which the geographic boundaries generally coincide with some of the metropolitan statistical areas set by the 1990 standards. In anticipation of the revised standards, the President’s Pay Agent had to decide whether and how to implement changes to the locality pay program. OPM issued regulations on behalf of the Pay Agent to retain use of the existing statistical areas for locality pay boundaries so that FSC and the President’s Pay Agent could review the new statistical areas and their possible use in locality pay. After that review, FSC recommended (and in December 2003 the Pay Agent tentatively endorsed) the following changes for pay: Locality pay areas will be based on 2003 metropolitan statistical areas and, where available, the combined statistical area—formed by combining adjacent metropolitan and micropolitan statistical areas. Micropolitan statistical areas will not be used unless they are part of a combined statistical area. As a result of implementing these recommendations, 76 counties (including some partial counties in New England) with a total of about 5,300 General Schedule employees will be added to existing locality pay areas. After considering public comments, OPM (on behalf of the Pay Agent) plans to issue final regulations implementing the new locality pay areas in January 2005. The Centers for Medicare & Medicaid Services’ (CMS) Medicare Hospital Inpatient Prospective Payment System (IPPS) pays hospitals a fixed fee depending on the diagnosis for each Medicare inpatient hospital stay. These fixed payments are adjusted to account for variations in labor costs across the country. The labor cost adjustment is based on a wage index calculated for specific geographic areas that reflects how average hospital wages in each geographic area compare to average hospital wages nationally. The geographic areas are intended to represent the separate labor markets in which hospitals compete for employees.
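At its core, the wage index described above is a ratio of an area's average hospital wage to the national average. The sketch below is a minimal illustration only; the wage figures are hypothetical, and CMS's actual computation involves additional adjustments beyond this simple ratio:

```python
def wage_index(area_avg_hospital_wage, national_avg_hospital_wage):
    """Ratio of an area's average hospital wage to the national average.
    Values above 1.0 adjust the labor-related share of the fixed payment
    upward; values below 1.0 adjust it downward."""
    return area_avg_hospital_wage / national_avg_hospital_wage

# Hypothetical average hourly wages against a $27.00 national average:
print(round(wage_index(32.40, 27.00), 4))  # 1.2: payment adjusted upward
print(round(wage_index(21.60, 27.00), 4))  # 0.8: payment adjusted downward
```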
Under the existing labor market definitions, each metropolitan statistical area is considered a single labor market, while the remainder of each state that is not in a metropolitan statistical area is treated as a single rural labor market. Generally, hospitals in nonmetropolitan areas have lower wages than those in metropolitan areas and therefore receive lower Medicare payments. CMS has evaluated the 2003 statistical areas and the impact of using them on the hospital wage index. The agency announced last year that no changes would be implemented until fiscal year 2005. CMS issued a proposed rule addressing, among other matters, the use of the new statistical areas for fiscal year 2005 on May 18, 2004, and expects to promulgate a final rule on August 1, 2004, after considering the public comments received. CMS has the discretion to use some or all of the 2003 statistical areas to calculate the wage adjustment to hospital inpatient payments. CMS has proposed to adopt the new and revised definitions of metropolitan statistical areas for wage index purposes and to continue to treat areas outside metropolitan statistical areas as statewide rural labor markets. Under the proposal, the new micropolitan areas would not be recognized for wage index purposes. CMS has also proposed a transition period for hospitals that were included in metropolitan statistical areas under the previous definitions but are now part of a statewide rural area: these hospitals will receive the wage index of the metropolitan statistical area to which they previously belonged for three years. In its proposed rule, CMS estimates that adopting the new statistical area definitions would result in no net increase in federal spending for fiscal year 2005. CMS estimates that urban hospitals will gain slightly (0.1 percent) from the adoption of the new labor market areas, and rural hospitals will lose slightly (-0.2 percent) from the change.
The Medicare IPPS also includes an administrative process in which a hospital meeting certain criteria can apply to be paid for services as if it were located in a higher-wage area. This geographic reclassification depends on a hospital’s proximity to the higher-wage area and its wages relative to the wages in that area. Beginning in fiscal year 2005, CMS will also establish an administrative process for increasing the wage index adjustment to recognize that hospital employees residing in one county may work in another area with a higher wage index. Title I of the Ryan White CARE Act makes federal funds available to eligible metropolitan areas (EMA) to assist in health care costs and support services for individuals and families affected by AIDS or HIV. Currently there are 51 EMAs. The 1996 reauthorization of the CARE Act made permanent metropolitan area boundaries that were used to determine fiscal year 1994 EMAs. The reauthorization also toughened the eligibility criteria, requiring that a metropolitan area have a population of 500,000 and more than 2,000 AIDS cases over the previous 5 years to qualify as an EMA. As a consequence, only two metropolitan statistical areas have become EMAs since 1996. This is due in part to a general decline in the number of AIDS cases. EMAs established by fiscal year 1996 retain their eligibility in future years regardless of whether they continue to meet the eligibility criteria; EMAs created after 1996 could become ineligible if they fail to meet the criteria. Officials of the Health Resources and Services Administration told us that no new areas qualify for eligibility status under the new standards. The officials also noted that the combination of fixed metropolitan area boundaries and generally declining numbers of AIDS cases means that EMA eligibility is likely to remain relatively static. 
After extensive consultations with experts within and outside the government, OMB issued a new set of statistical standards in December 2000 and used those standards to determine a new set of statistical areas in June 2003. In keeping with its long-standing position, OMB reiterated that the new metropolitan and micropolitan statistical areas are designed for statistical purposes. OMB’s process for reviewing and revising statistical areas was undertaken without regard to their effect on any federal programs (that is, for nonstatistical purposes). However, certain federal programs that use OMB’s statistical areas to distribute funds may be affected by the new statistical areas. Therefore, OMB’s suggestion to agencies that they assess the new statistical areas to ensure they continue to be appropriate for such use seems reasonable. We are making no recommendations for executive action, nor are we identifying any specific matters for congressional consideration. On June 2, 2004, we met with OMB and Census Bureau representatives to discuss the draft report. OMB and the Census Bureau agreed with our findings and conclusions and provided us with technical comments, which have been incorporated into the report where appropriate. We are sending copies of this report to other interested congressional committees, the Director of the Office of Management and Budget, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. If you or your staff have questions about this report, please call me at (202) 512-6737 or Thomas James, Assistant Director, at (202) 512-2996. Key contributors to this report were Bertha Dong, Robert Dinkelmeyer, Karin Fangman, Jerry Fastrup, Lisa Pearson, and Michael Rose.
Authorizes pay adjustments for Senate employees in the Washington, D.C.-Baltimore, Maryland consolidated metropolitan statistical area (CMSA), which are equivalent to those adjustments made under 5 U.S.C. § 5304 (authorizing locality-based comparability pay). Borders of the D.C. and Baltimore MSAs have changed. Federal spending could change at the discretion of the President Pro Tempore of the Senate. Eligibility for loan repayment is based, in part, on being paid at a rate of pay that does not exceed a certain level, which is to include locality pay adjustments applicable to the Washington, D.C.-Baltimore, Maryland CMSA. Boundaries of the D.C. and Baltimore MSAs have changed. More Senate employees could qualify for loan forgiveness. 5 U.S.C. § 5304(f) provides Pay Agent with authority to provide for such pay localities as deemed appropriate. The establishment or modification of any boundaries shall be affected by regulation. Boundaries of several designated MSAs and CMSAs have changed. The number of federal employees subject to locality pay adjustments may change at the discretion of the President’s Pay Agent. The Federal Law Enforcement Pay Reform Act of 1990, Pub. L. No. 101-509, as amended, contains special pay adjustments for law enforcement officers in selected cities (statute lists seven CMSAs and one MSA). The number of employees subject to a special pay adjustment may be affected by MSA boundary changes. Term “eligible rural community” defined to exclude places located in MSAs. Communities formerly located outside an MSA may no longer qualify for loans or loan guarantees. 7 U.S.C. § 1991(a)(13)(D) defines (for purposes of 7 U.S.C. § 1926(a)(23) and 2008m) the term “rural area” to exclude MSAs except for territory within an MSA that is within a census tract having a population density of less than 20 persons per square mile. Boundary changes may affect which communities may qualify for participation in programs. 7 U.S.C. 
§ 1991(a)(13)(E) defines (for purposes of 7 U.S.C. § 2009cc et seq) “rural area” to mean an area located outside an MSA or within a community that has a population of 50,000 or less. 7 U.S.C. § 6612(3) defines (for purposes of 7 U.S.C. § 6611 et seq.) the term “rural community” to include certain population areas of no more than 10,000, or certain counties not contained within an MSA. Boundary changes may make some areas ineligible. Potential affect on federal funding 10 U.S.C. § 2391(b)(3) limits assistance to communities with certain numbers of jobs lost. Requisite number of jobs lost is tied, in part, to MSA/non-MSA status of community. The eligibility of grantees may be affected by the population threshold of communities located in MSAs versus those located outside of MSAs. 12 U.S.C. § 1834a(b)(3)(C) establishes different population thresholds for a qualified distressed community based upon whether community is located within an MSA. May affect the population threshold of designated distressed communities. Depository institutions may apply for assessment credits for qualified activities (that include financing by various federal agencies). Eligibility is limited to institutions in “urban areas,” which 20 U.S.C. § 1139g(1) defines, in part, by reference to MSAs with a certain population threshold. MSA boundary changes may result in an MSA or adjacent MSAs meeting the 350,000 population threshold, qualifying institutions for participation. 26 U.S.C. § 42(d)(5) provides for determining the eligible basis of a building located in a qualified census tract or difficult development area. These areas are designated by the Department of Housing and Urban Development (HUD) and may not encompass more than a certain percentage of the population in the MSA or nonmetropolitan area. Eligible census tracts and difficult development areas (DDA) are limited to 20 percent of the population of MSAs or non-MSAs. 
If the limitation/cap is binding (more census tracts or DDAs would meet the eligibility criteria but for the limitation/cap), then a change in the MSA boundaries, due to the new standards for MSA designation, could affect the number of census tracts or DDAs that could be declared eligible for a special increase in tax credits. 26 U.S.C. § 143(e) and (f) establish purchase price and income requirements for an issue, which are calculated based upon statistical areas. Statistical areas are defined, in § 143(k)(2), by reference to MSAs and counties (or portions thereof) outside MSAs. MSA border changes will affect the median income threshold to qualify for tax exemptions. MSA border changes will also affect the calculation of cost/income ratios in high-cost areas, which may also affect the number of people qualifying for tax exemptions. 26 U.S.C. § 1391 provides, with regard to the permissible number of areas to be designated, that there be a prescribed mix of rural and urban areas. A rural area is generally defined in 26 U.S.C. § 1393(a) to mean any area outside of an MSA, and an urban area is defined to mean an area that is not rural. Changing MSA boundaries affects whether a designated area is selected as urban or rural. Rural communities are subject to a lower population threshold. Therefore, a change in MSA status may affect eligibility of some communities. 26 U.S.C. § 1400E(a)(2) provides that of the permissible number of areas to be designated, a certain number must be in rural areas. Rural area is defined, in part, to mean an area outside of an MSA with a population of less than 50,000. Changing MSA boundaries affects whether a designated renewal community is selected as urban or rural. Rural communities are subject to a lower population threshold. Therefore, a change in MSA status may affect eligibility of some communities.

Potential effect on federal funding

38 U.S.C.
§ 2033 requires the Department of Veterans Affairs to establish sites for the provision of comprehensive and coordinated services in at least each of the 20 largest metropolitan statistical areas. A previously unselected MSA has moved into the top 20 largest MSAs. Eligible organizations must have a defined service area of sufficient size that either includes an entire MSA or excludes any part of the area. MSA boundary changes may affect the status of a qualified organization if its service area no longer includes all of an MSA or does not include any part of the area. 42 U.S.C. § 294d(d) defines “rural” as encompassing geographic areas that are located outside of an MSA. Some designated rural health care agencies may be located in newly designated MSA counties. 42 U.S.C. § 1395m(l)(9) provides an increased transitional fee schedule rate for ambulance services (furnished on or after July 1, 2001, and before January 2004) originating in a rural area, which is defined, in part, by reference to MSAs. MSA boundary changes may affect some originating sites. 42 U.S.C. § 1395m(m)(4) provides for payment of facility fees for telehealth services (services provided via a telecommunications system) only when originating site is located in a designated rural health professional shortage area or a county that is not included in an MSA, or is a site participating in a telemedicine demonstration project as of December 31, 2000. 42 U.S.C. § 1395w-23(d) defines the payment area for Medicare + Choice to mean a county or equivalent area specified by the Secretary of Health and Human Services. 42 U.S.C. § 1395w-23(d)(3) authorizes geographic adjustments of these payment areas upon request by the state, including using a metropolitan- based system in which all the portions of each MSA in the state are treated as individual Medicare+Choice payment areas. All areas in the state that do not fall within an MSA are treated as single Medicare+Choice payment areas. 
MSA boundary changes may affect MSAs meeting minimum population threshold requirements. 42 U.S.C. § 1395ww(d) addresses inpatient hospital service payments based on prospective rates (including adjustments). Under § 1395ww(d), the labor component of the standardized payment amount must be adjusted to account for the geographic variation in hospitals’ labor costs. This requirement is implemented by the Centers for Medicare and Medicaid Services by calculating a wage-related cost adjustment on an MSA/non-MSA basis. Changes in MSA boundaries could affect prospective payment rates.

Potential effect on federal funding

42 U.S.C. § 1397f(a)(1) provides that each state is entitled to two grants for each qualified empowerment zone in the state, and one grant for each qualified enterprise community in the state. 42 U.S.C. § 1397f(a)(2)(A) bases the amount of a grant for a qualified empowerment zone on whether it is designated urban or rural. 42 U.S.C. § 1397f(f) defines rural to mean any area outside of an MSA and urban to mean an area that is not rural. MSA boundary changes affect whether empowerment and enterprise zones are designated urban or rural, which, in turn, affects funding amounts. 42 U.S.C. § 1479(f)(8) defines “colonia” to mean an identifiable community that (1) is in Arizona, California, New Mexico, or Texas; (2) is in the area of the United States within 150 miles of the border between the United States and Mexico, except that the term does not include any MSA that has a population exceeding 1,000,000; (3) is determined to be a colonia on the basis of objective criteria, including lack of decent, safe, and sanitary housing; and (4) was in existence as a colonia before November 28, 1990. MSA boundary changes may affect colonias located in an MSA with a population exceeding 1,000,000. 42 U.S.C.
§ 1490 defines the terms “rural” and “rural area” to mean any open country or any place, town, village, or city that (1) is not part of or associated with an urban area and has a population of from 10,000 to 20,000; (2) is not contained within an MSA; and (3) has a serious lack of mortgage credit for lower- and moderate-income families. MSA boundary changes may affect places losing population and falling below the 20,000 population threshold required for designation as a rural area. 42 U.S.C. § 5306 provides for the allocation and distribution of funds to entitlement communities (metropolitan cities and urban counties) and to states for use in nonentitlement areas. HUD will use the new MSA definitions to distribute funding to entitlement communities that meet various population thresholds. Under 42 U.S.C. § 5302(a)(4), “metropolitan city” means either a central city within an MSA or any other city within an MSA with a population of 50,000 or more. Under 42 U.S.C. § 5302(a)(6)(A), “urban county” means any county within an MSA that is authorized (under state law) to undertake essential community development and housing assistance activities and exceeds certain population thresholds. 42 U.S.C. § 5302(a)(7) defines “nonentitlement area” to mean an area that is not a metropolitan city or part of an urban county and does not include Indian tribes. Section 916(e)(4) defines “U.S.-Mexico border region” to mean the area of the United States within 150 miles of the border between the United States and Mexico, except that the term does not include any MSA that has a population exceeding 1,000,000. MSA boundary changes may affect the set-aside for colonias if a colonia is located in an MSA whose population has risen above 1,000,000. These colonias may no longer qualify for participation in the set-aside. 42 U.S.C. § 11371 defines “metropolitan city” and “urban county” by reference to definitions used for the CDBG Program (42 U.S.C. § 5302). 42 U.S.C.
§ 11373 provides for the allocation and distribution of assistance to states, metropolitan cities, and urban counties in the same fashion funds are allocated under the CDBG Program. Distribution of grants is made using the CDBG formula that is affected by changes in MSA boundaries. Defines the terms “rural area” and “rural community” to mean (1) an area outside an MSA or (2) an area within an MSA but located in a rural census tract. MSA boundary changes could result in some areas losing eligibility unless they are located in designated rural census tracts. 42 U.S.C. § 12704 defines “metropolitan city” and “urban county” by reference to definitions used for the CDBG Program (42 U.S.C. § 5302). Links to the CDBG Program definitions for eligible entities, in which changes in MSA boundaries affect the distribution of federal financial assistance. 42 U.S.C. 12741 et seq. establishes the HOME Investment Partnership Program (allocation section at 12747). Eligibility of first-time homebuyers seeking assistance depends on not exceeding income ceiling, which is measured relative to the MSA or non-MSA median income. Median family income will be affected by MSA boundary changes and will affect the number of eligible homebuyers. Eligibility of homebuyers seeking to buy in an enterprise zone depends on their having family incomes that are not more than the median income in the MSA in which the enterprise zone is located. Grants are used to fund loans to homebuyers of eligible housing units in enterprise zones. Since median family income in MSAs is used to determine eligible homebuyers, the change in MSA boundaries may affect which households are eligible for loans under the program. Grant funds to states and cities are allocated based on number of AIDS cases and population thresholds using MSA and non-MSA boundaries. The largest city in an MSA with an over 500,000 population and more than 1,500 AIDS cases is eligible to receive funding under the program. 
Any new MSA meeting these criteria would become eligible and may affect funding for existing grantees.

Potential effect on federal funding

42 U.S.C. § 13239 authorizes low-interest loans to fleets utilizing alternative fuels. 42 U.S.C. § 13211(9) defines “fleet” to mean, in part, vehicles used primarily in MSAs or CMSAs. Boundary changes may affect entities eligible for interest rate subsidies. “Federal fleet” is defined, in part, to mean vehicles located in an MSA or CMSA. A newly designated MSA may contain a federal fleet that will become subject to alternative fueled vehicle requirements. This entry reflects amendments made by Pub. L. No. 108-173, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003.

Core based statistical area (CBSA): A statistical geographic entity consisting of the county or counties associated with at least one core (urbanized area or urban cluster) with a population of at least 10,000, plus adjacent counties having a high degree of social and economic integration with the core as measured through commuting ties with the counties containing the core. Metropolitan and micropolitan statistical areas are the two categories of CBSAs.

Metropolitan statistical area: A CBSA associated with at least one urbanized area that has a population of at least 50,000. The metropolitan statistical area comprises the central county or counties containing the core, plus adjacent outlying counties having a high degree of social and economic integration with the central county as measured through commuting.

Micropolitan statistical area: A CBSA associated with at least one urban cluster that has a population of at least 10,000, but less than 50,000. The micropolitan statistical area comprises the central county or counties containing the core, plus adjacent outlying counties having a high degree of social and economic integration with the central county as measured through commuting.
Principal city: The largest city of a CBSA, plus additional cities that meet specified statistical criteria.

Central county: The county or counties of a CBSA containing a substantial portion of an urbanized area or urban cluster or both, and to and from which commuting is measured to determine qualification of outlying counties.

Outlying county: A county that qualifies for inclusion in a CBSA on the basis of commuting ties with the CBSA’s central county or counties.

Metropolitan division: A county or group of counties within a CBSA that contains a core with a population of at least 2.5 million. A metropolitan division consists of one or more main/secondary counties that represent an employment center or centers, plus adjacent counties associated with the main county or counties through commuting ties.

Combined statistical area: A geographic entity consisting of two or more adjacent CBSAs with employment interchange measures of at least 15. Pairs of CBSAs with employment interchange measures of at least 25 combine automatically. Pairs of CBSAs with employment interchange measures of at least 15, but less than 25, may combine if local opinion in both areas favors combination.

Employment interchange measure: A measure of ties between two adjacent entities. The employment interchange measure is the sum of the percentage of employed residents of the smaller entity who work in the larger entity and the percentage of employment in the smaller entity that is accounted for by workers who reside in the larger entity.

New England city and town area (NECTA): A statistical geographic entity that is defined using cities and towns as building blocks and that is conceptually similar to the CBSAs in New England (which are defined using counties as building blocks).

NECTA division: A city or town or group of cities and towns within a NECTA that contains a core with a population of at least 2.5 million.
A NECTA division consists of a main city or town that represents an employment center, plus adjacent cities and towns associated with the main city or town, or with other cities and towns that are in turn associated with the main city or town through commuting ties.

Urban area: The generic term used by the U.S. Census Bureau to refer collectively to urbanized areas and urban clusters.

Urban cluster: A statistical geographic entity defined by the Census Bureau for Census 2000, consisting of a central place(s) and adjacent densely settled territory that together contain at least 2,500 people, generally with an overall population density of at least 1,000 people per square mile. For purposes of defining CBSAs, only those urban clusters with populations of 10,000 or more are considered.

Urbanized area: A statistical geographic entity defined by the Census Bureau, consisting of a central place(s) and adjacent densely settled territory that together contain at least 50,000 people, generally with an overall population density of at least 1,000 people per square mile.

2000 metropolitan and micropolitan statistical area standards

Metropolitan statistical areas are based around at least one Census Bureau defined urbanized area of 50,000 or more population, and micropolitan statistical areas are based around at least one urban cluster of 10,000 to 49,999 population. A metropolitan statistical area with a single core of at least 2,500,000 population can be subdivided into component metropolitan divisions. Collectively, the metropolitan and micropolitan statistical areas are termed Core Based Statistical Areas (CBSAs).

Metropolitan statistical areas are based on total populations of at least 1,000,000 (level A), 250,000 to 999,999 (level B), 100,000 to 249,999 (level C), and less than 100,000 (level D), respectively.
Metropolitan statistical areas of 1,000,000 or more population can be designated as consolidated metropolitan statistical areas if local opinion is in favor and component primary metropolitan statistical areas can be identified.

Counties and equivalent entities throughout the U.S. and Puerto Rico. City and town based areas, conceptually similar to the county-based areas, are provided for the New England states.

Counties and equivalent entities throughout the U.S. and Puerto Rico, except in New England, where cities and towns are used to define metropolitan areas. A county-based alternative is provided for the New England states.

Census Bureau defined urban area of at least 10,000 population and less than 50,000 population for micropolitan statistical area designation. Census Bureau defined urbanized area of at least 50,000 for metropolitan statistical area designation.

City of at least 50,000 population, or Census Bureau defined urbanized area of at least 50,000 population in a metropolitan area of at least 100,000 population.

Any county in which at least 50% of the population is located in urban areas of at least 10,000 population, or that has within its boundaries a population of at least 5,000 located in a single urban area of at least 10,000 population.

Any county that includes a central city or at least 50% of the population of a central city that is located in a qualifier urbanized area. Also any county in which at least 50% of the population is located in a qualifier urbanized area.

2000 metropolitan and micropolitan statistical area standards

Commuting ties: At least 25% of the employed residents of the county work in the central county/counties of a CBSA; or at least 25% of the employment in the county is accounted for by workers residing in the central county/counties of the CBSA.
Combination of commuting and measures of settlement structure:

50% or more of employed workers commute to the central county/counties of a metropolitan statistical area and: 25 or more persons per square mile (ppsm), or at least 10% or 5,000 of the population lives in a qualifier urbanized area; OR

40% to 50% of employed workers commute to the central county/counties of a metropolitan statistical area and: 35 or more ppsm, or at least 10% or 5,000 of the population lives in a qualifier urbanized area; OR

25% to 40% of employed workers commute to the central county/counties of a metropolitan statistical area and: 35 ppsm and one of the following: (1) 50 or more ppsm, (2) at least 35% urban population, (3) at least 10% or 5,000 of population lives in a qualifier urbanized area; OR

15% to 25% of employed workers commute to the central county/counties of a metropolitan statistical area and: 50 or more ppsm and two of the following: (1) 60 or more ppsm, (2) at least 35% urban population, (3) population growth rate of at least 20%, (4) at least 10% or 5,000 of population lives in a qualifier urbanized area; OR

15% to 25% of employed workers commute to the central county/counties of a metropolitan statistical area and less than 50 ppsm and two of the following: (1) at least 35% urban population, (2) population growth rate of at least 20%, (3) at least 10% or 5,000 of population lives in a qualifier urbanized area; OR

at least 2,500 of the population lives in a central city located in a qualifier urbanized area of a metropolitan statistical area.

A county that qualifies as outlying to two or more CBSAs is included in the area with which it has the strongest commuting tie. If a county qualifies as outlying to two or more metropolitan areas, it is assigned to the area to which commuting is greatest; if the relevant commuting percentages are within 5 points of each other, local opinion is considered.
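The 2000-standard commuting test described above reduces to two 25 percent threshold checks, with no settlement-structure measures. A minimal sketch in Python, assuming hypothetical county-level commuting aggregates (function and parameter names are illustrative, not part of any official OMB tool):

```python
# Sketch of the 2000-standard outlying-county test (hypothetical names).
# A county qualifies as outlying if at least 25% of its employed residents
# work in the central county/counties of the CBSA, OR at least 25% of the
# county's employment is accounted for by workers residing in the central
# county/counties.

def qualifies_as_outlying(residents_working_in_core: int,
                          employed_residents: int,
                          jobs_held_by_core_residents: int,
                          county_employment: int) -> bool:
    out_commute_share = residents_working_in_core / employed_residents
    reverse_commute_share = jobs_held_by_core_residents / county_employment
    return out_commute_share >= 0.25 or reverse_commute_share >= 0.25

# 30% of employed residents commute to the core: qualifies.
print(qualifies_as_outlying(3_000, 10_000, 500, 8_000))  # True
# 10% out-commuting and about 6% reverse commuting: does not qualify.
print(qualifies_as_outlying(1_000, 10_000, 500, 8_000))  # False
```

By contrast, the 1990-standard rules above mix commuting percentages with density, urban population share, and growth-rate tests, so no single threshold check suffices.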
Merging statistical areas

Two adjacent CBSAs are merged to form one CBSA if the central county/counties (as a group) of one CBSA qualify as outlying to the central county/counties (as a group) of the other.

If a county qualifies as a central county of one metropolitan statistical area and as an outlying county on the basis of commuting to a central county of another metropolitan statistical area, both counties become central counties of a single metropolitan statistical area.

2000 metropolitan and micropolitan statistical area standards

Principal cities include the largest incorporated place with a population of 10,000 or more or, if no incorporated place of at least 10,000 is present, the largest incorporated place or census designated place in the CBSA; AND
each place of at least 250,000 population or in which at least 100,000 persons work; AND
each place with a population of at least 50,000, but less than 250,000, in which employment meets or exceeds the number of employed residents; AND
each place with a population that is at least 10,000 and 1/3 the size of the largest place, and in which employment meets or exceeds the number of employed residents.
Central cities include the largest city in a metropolitan statistical area/consolidated metropolitan statistical area; AND
each city of at least 250,000 population or at least 100,000 workers; AND
each city of at least 25,000 population and at least 75 jobs per 100 workers and less than 60% out-commuting; AND
each city of at least 15,000 population that is at least 1/3 the size of the largest central city and meets the employment ratio and commuting percentage above; AND
the largest city of 15,000 population or more that meets the employment ratio and commuting percentage above and is in a secondary noncontiguous urbanized area; AND
each city in a secondary noncontiguous urbanized area that is at least 1/3 the size of the largest central city in that urbanized area and has at least 15,000 population and meets the employment ratio and commuting percentage above.

2000 metropolitan and micropolitan statistical area standards

Metropolitan divisions consist of one or more counties within metropolitan statistical areas that have a single core of 2.5 million or more population. A county is identified as a main county of a metropolitan division if 65 percent or more of its employed residents work within the county and the ratio of its employment to its number of employed residents is at least .75. A county is identified as a secondary county of a metropolitan division if 50 percent or more, but less than 65 percent, of its employed residents work within the county and the ratio of its employment to its number of employed residents is at least .75. A main county automatically serves as the basis for a metropolitan division. For a secondary county to qualify as the basis for forming a metropolitan division, it must join with either a contiguous secondary county or a contiguous main county with which it has the highest employment interchange measure of 15 or more.
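The main/secondary county rules above combine a within-county work share with an employment-to-resident-workers ratio. A hedged sketch in Python (names are hypothetical; thresholds follow the text: main at 65 percent or more, secondary at 50 to under 65 percent, both requiring a ratio of at least .75):

```python
# Sketch of the metropolitan division county classification described above.
# Inputs are hypothetical aggregates for a single county.

def division_role(workers_within_county: int,
                  employed_residents: int,
                  county_employment: int) -> str:
    within_share = workers_within_county / employed_residents  # work-in-county share
    emp_ratio = county_employment / employed_residents         # jobs per resident worker
    if emp_ratio >= 0.75 and within_share >= 0.65:
        return "main"
    if emp_ratio >= 0.75 and 0.50 <= within_share < 0.65:
        return "secondary"
    return "neither"

print(division_role(70_000, 100_000, 90_000))  # main
print(division_role(55_000, 100_000, 90_000))  # secondary
print(division_role(55_000, 100_000, 70_000))  # neither (employment ratio below .75)
```

A main county anchors a division by itself; a secondary county, as the text notes, must still be grouped with a contiguous main or secondary county via the employment interchange measure.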
Primary metropolitan statistical areas outside New England consist of one or more counties within metropolitan areas that have a total population of 1 million or more. Specifically, these primary metropolitan statistical areas consist of: (A) One or more counties designated as a standard metropolitan statistical area on January 1, 1980, unless local opinion does not support continued separate designation. (B) One or more counties for which local opinion strongly supports separate designation, provided one county has: (1) at least 100,000 population; (2) at least 60 percent of its population urban; (3) less than 35 percent of its resident workers working outside the county; and (4) less than 2,500 population of the largest central city in the metropolitan statistical area. (C) A set of two or more contiguous counties for which local opinion strongly supports separate designation, provided at least one county also could qualify as a primary metropolitan statistical area in section (B), and (1) each county meets requirements (B)(1), (B)(2), and (B)(4) and less than 50 percent of its resident workers work outside the county; (2) each county has a commuting interchange of at least 20 percent with the other counties in the set; and (3) less than 35 percent of the resident workers of the set of counties work outside the area. After all main counties and secondary counties have been identified and grouped (if appropriate), each additional county that already has qualified for the metropolitan statistical area is included in the metropolitan division associated with the main/secondary county to which the county at issue has the highest employment interchange measure. Counties within a metropolitan division must be contiguous. 
Each county in the metropolitan area not included within a central core under sections (A) through (C) is assigned to the contiguous primary metropolitan statistical area to whose central core commuting is greatest, provided this commuting is: (1) at least 15 percent of the county’s resident workers; (2) at least 5 percentage points higher than the commuting flow to any other primary metropolitan statistical area central core that exceeds 15 percent; and (3) larger than the flow to the county containing the metropolitan area’s largest central city. If a county has qualifying commuting ties to two or more primary metropolitan statistical area central cores and the relevant values are within 5 percentage points of each other, local opinion is considered.

2000 metropolitan and micropolitan statistical area standards

Two adjacent CBSAs are combined if the employment interchange rate between the two areas is at least 25. The employment interchange rate is the sum of the percentage of employed residents of the CBSA with the smaller total population who work in the CBSA with the larger total population and the percentage of employment in the CBSA with the smaller total population that is accounted for by workers residing in the CBSA with the larger total population. Adjacent CBSAs that have an employment interchange rate of at least 15 and less than 25 may combine if local opinion in both areas favors combination. The combining CBSAs also retain separate recognition.
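The 2000-standard combination rule above is a threshold test on the employment interchange rate, the sum of two percentages computed for the smaller CBSA. A minimal sketch (hypothetical names; the two input percentages are assumed already computed):

```python
# Sketch of CBSA combination under the 2000 standards (hypothetical names).

def employment_interchange_rate(pct_smaller_residents_working_in_larger: float,
                                pct_smaller_employment_from_larger_residents: float) -> float:
    # Sum of: % of the smaller CBSA's employed residents who work in the
    # larger CBSA, and % of the smaller CBSA's employment accounted for
    # by workers residing in the larger CBSA.
    return (pct_smaller_residents_working_in_larger
            + pct_smaller_employment_from_larger_residents)

def combination_status(rate: float, local_opinion_favors: bool) -> str:
    if rate >= 25:
        return "combined automatically"
    if rate >= 15 and local_opinion_favors:
        return "combined based on local opinion"
    return "not combined"

rate = employment_interchange_rate(18.0, 9.0)  # 27.0
print(combination_status(rate, local_opinion_favors=False))  # combined automatically
print(combination_status(16.0, local_opinion_favors=True))   # combined based on local opinion
print(combination_status(16.0, local_opinion_favors=False))  # not combined
```

Note the asymmetry with the 1990 rules that follow, which hinge on total population, commuting interchange, urbanized-area contiguity, and local opinion rather than a single interchange rate.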
Two adjacent metropolitan statistical areas are combined as a single metropolitan statistical area if: (A) the total population of the combination is at least one million and (1) the commuting interchange between the two metropolitan statistical areas is equal to at least 15% of the employed workers residing in the smaller metropolitan statistical area, or equal to at least 10% of the employed workers residing in the smaller metropolitan statistical area and the urbanized area of a central city of one metropolitan statistical area is contiguous with the urbanized area of a central city of the other metropolitan statistical area or a central city in one metropolitan statistical area is included in the same urbanized area as a central city in the other metropolitan statistical area; AND (2) at least 60% of the population of each metropolitan statistical area is urban. (B) the total population of the combination is less than one million and (1) their largest central cities are within 25 miles of one another, or the urbanized areas are contiguous; AND (2) there is definite evidence that the two areas are closely integrated economically and socially; AND (3) local opinion in both areas supports combination. Titles of CBSAs include the names of up to three principal cities in order of descending population size. Titles of metropolitan statistical areas include the names of up to three central cities in order of descending population size. Local opinion is considered under specified conditions. Titles of metropolitan divisions include the names of up to three principal cities in the metropolitan division in order of descending population size. If there are no principal cities, the title includes the names of up to three counties in the metropolitan division in order of descending population size. Titles of primary metropolitan statistical areas include the names of up to three cities in the primary metropolitan statistical area that have qualified as central cities. 
If there are no central cities, the title will include the names of up to three counties in the primary metropolitan statistical area in order of descending population size.

Titles of combined statistical areas include the name of the largest principal city in the largest CBSA that combines, followed by the names of up to two additional principal cities in the combination in order of descending population size, or a suitable regional name, provided that the combined statistical area title does not duplicate the title of a component metropolitan or micropolitan statistical area or metropolitan division. Local opinion will be considered when determining the titles of combined statistical areas.

Titles of consolidated metropolitan statistical areas include the names of up to three central cities or counties in the consolidated metropolitan statistical area. The first name will be the largest central city in the consolidated metropolitan statistical area; the remaining two names will be the first city or county name that appears in the title of the remaining primary metropolitan statistical area with the largest total population and the first city or county name that appears in the title of the primary metropolitan statistical area with the next largest total population. Regional designations can be substituted for the second and third names if there is strong local support.

Granbury, TX Micro Area
Oak Harbor, WA Micro Area

A portion of this New England county was a component of the 1999 metropolitan area shown. Where this occurred, it may be that other portion(s) of the county were included in more than one 1999 metropolitan area and/or another portion of the county was not included in any 1999 metropolitan area. Under the 1999 classification, metropolitan areas in the six New England states—Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont—were town- and city-based, not county-based.
The 2003 classification is county-based, so the entire county is included in the micropolitan area shown.

Minneapolis-St. Paul-Bloomington, MN-WI MSA
Baraboo, WI Micro Area
Whitewater, WI Micro Area
Milwaukee-Waukesha-West Allis, WI MSA
Milwaukee-Waukesha-West Allis, WI MSA
Wisconsin Rapids-Marshfield, WI Micro Area
Laramie, WY Micro Area
Gillette, WY Micro Area
Riverton, WY Micro Area
Sheridan, WY Micro Area
Rock Springs, WY Micro Area
Jackson, WY-ID Micro Area
Evanston, WY Micro Area

Broomfield County, Colorado, was formed from parts of Adams, Boulder, Jefferson, and Weld Counties, Colorado, on November 15, 2001. For purposes of defining and presenting data for MSAs, Broomfield city is treated as if it were a county in 1990 and in 2000.

A portion of this New England county was a component of the 1999 metropolitan area shown. Where this occurred, it may be that other portion(s) of the county were included in more than one 1999 metropolitan area and/or another portion of the county was not included in a 1999 metropolitan area. Under the 1999 classification, metropolitan areas in the six New England states—Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont—were town- and city-based, not county-based. The 2003 classification is county-based, so the entire county is included in the metropolitan or micropolitan area shown.

Pursuant to Pub. L. No. 100-202, Section 530, the part of Sullivan city in Crawford County, Missouri, was added to the St. Louis, Missouri-Illinois MSA effective December 22, 1987.

To view color versions of these maps (figs. 4-6), go to our Web site.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

For the past 50 years, the federal government has had a metropolitan area program designed to provide a nationally consistent set of standards for collecting, tabulating, and publishing federal statistics for geographic areas in the United States and Puerto Rico. Before each decennial census, the Office of Management and Budget (OMB) reviews the standards to ensure their continued usefulness and relevance and, if warranted, revises them. While designed only for statistical purposes, various federal programs use the statistical areas to determine eligibility and to allocate federal funds. OMB advises agencies to carefully review program goals to ensure that appropriate geographic entities are used in making these decisions. GAO was asked to examine the process used for developing the OMB standards issued in 2000 and their effects on certain federal programs.
Specifically, GAO agreed to report on (1) the process used to develop the 2000 standards, (2) how the 2000 standards differed from the 1990 standards, (3) how the application of the standards affected the geographic distribution of counties into statistical areas, and (4) the effect of the standards on eligibility and funding allocations for four federal programs. The new standards for federal statistical recognition of metropolitan areas issued by OMB in 2000 differ from the 1990 standards in many ways. One of the most notable differences is the introduction of a new designation for less populated areas--micropolitan statistical areas. These areas consist of a central county or counties with at least one urban cluster of at least 10,000 but fewer than 50,000 people, plus adjacent outlying counties if commuting criteria are met. The 2000 standards and the latest population update have resulted in five counties being dropped from metropolitan statistical areas, while another 41 counties that had been part of a metropolitan statistical area have had their statistical status changed and are now components of micropolitan statistical areas. Overall, the 2000 standards have resulted in changes in every state, and nationwide statistical coverage has increased. Under the 1990 standards, 847 counties were in metropolitan statistical areas. Now, there are 1,090 counties in metropolitan statistical areas and 690 counties in micropolitan statistical areas. Of the four federal programs GAO reviewed to determine the impact of the 2000 standards, eligibility under one has expanded; eligibility under another is expected to expand in January 2005; the agency overseeing a third anticipates that, under its proposal, program payments for fiscal year 2005 will be affected but with no net increase in funding; and eligibility under the fourth is unaffected because the geographic boundaries used to determine eligibility are set by statute.
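The population thresholds described above determine whether a core county anchors a metropolitan or a micropolitan statistical area. The following is a minimal sketch of that test; the function name and the single-threshold logic are illustrative only and omit the rest of OMB's criteria, such as the commuting-based inclusion of outlying counties.

```python
# Sketch of the 2000 standards' core population test, using only the
# thresholds described above. This is a simplified illustration, not
# OMB's actual classification procedure.

def classify_area(largest_urban_cluster_pop: int) -> str:
    """Classify a central county by the population of its largest urban cluster."""
    if largest_urban_cluster_pop >= 50_000:
        return "metropolitan"
    if largest_urban_cluster_pop >= 10_000:
        return "micropolitan"
    return "neither"

print(classify_area(62_000))  # metropolitan
print(classify_area(18_500))  # micropolitan
print(classify_area(4_200))   # neither
```

A cluster of exactly 50,000 people qualifies as metropolitan; one of 10,000 to 49,999 qualifies as micropolitan, matching the "at least 10,000 but fewer than 50,000" rule quoted above.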
For example, the standards have resulted in new eligibility in fiscal year 2004 for 60 cities to receive a total of $36.2 million in Community Development Block Grants, which provide funds to revitalize neighborhood infrastructure. Because increases due to expanded eligibility must be offset, this new funding required a 1.2 percent cut for all other grantees.
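The offset arithmetic behind that cut can be sketched as follows. Only the $36.2 million figure comes from the discussion above; the existing-grantee total of roughly $3.0 billion is a hypothetical round number chosen for illustration.

```python
# Illustrative fixed-appropriation offset: when newly eligible grantees are
# added, existing grantees' shares are cut proportionally so the total does
# not grow. The $3.0 billion denominator is hypothetical.

def offset_cut_rate(new_obligations: float, existing_obligations: float) -> float:
    """Fraction by which existing grants must be cut to fund new eligibility
    out of an unchanged total appropriation."""
    return new_obligations / existing_obligations

rate = offset_cut_rate(36.2e6, 3.0e9)
print(f"{rate:.1%}")  # 1.2%
```

Under this assumption, funding $36.2 million of new eligibility from an unchanged total implies an across-the-board cut of about 1.2 percent, consistent with the figure reported above.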
As federal employees plan for their eventual retirement from government service, they often consider many financial and lifestyle issues. Agency-provided retirement education is generally the primary source of the information that employees need to plan for these issues before they retire. Retirement benefits represent an important portion of total federal compensation, and employees often cite these benefits as a primary reason for staying in government service. Thus, agencies also benefit from sponsoring retirement education programs, which allow them to capitalize on their comparative advantage in competitive labor markets as well as invest in the government’s human capital. The Federal Employees’ Retirement System Act of 1986 (FERSA) granted the Office of Personnel Management (OPM) and federal agencies broad authority to design and implement retirement education programs for employees covered by the two largest federal civilian retirement programs—the Civil Service Retirement System (CSRS) and the Federal Employees’ Retirement System (FERS). Specifically, FERSA authorizes agencies to designate retirement counselors who are responsible for providing employees with benefits information, and mandates that OPM establish a training program for these agency retirement counselors. FERSA also created the Federal Retirement Thrift Investment Board to administer the Thrift Savings Plan (TSP). The Thrift Board provides training and information on TSP to agency personnel offices and groups of employees upon agency request; however, it is not responsible for providing retirement education for the federal workforce. CSRS, which was established in 1920, currently includes an annuity and TSP. CSRS’ annuity predates the Social Security system by several years. When the Social Security system was established, Congress decided that employees in CSRS would not be covered by Social Security through their federal employment.
Starting in 1987, employees covered by CSRS may also contribute up to 5 percent of their salary to TSP; however, they receive no government contributions. CSRS was closed to new entrants after December 31, 1983, and, according to OPM actuaries, is estimated to end in about 2070, when all covered employees and survivor annuitants are expected to have died. FERS was implemented in 1987 and generally covers those employees who first entered federal service after 1983. The primary impetus for the new program was the Social Security amendments of 1983, which required all federal employees hired after December 1983 to be covered by Social Security. Thus, FERS includes Social Security, an annuity that is smaller than that provided under CSRS, and TSP. The government automatically contributes an amount equal to 1 percent of salary to TSP accounts for all employees covered by FERS, regardless of whether those employees make any voluntary contributions to their accounts. In addition, employees covered by FERS may contribute up to 10 percent of their salaries, up to the current legal maximum of $10,000, and receive government matching contributions on the first 5 percent. At the beginning of fiscal year 1998, CSRS and FERS covered about 2.7 million employees, or 93 percent of the civilian workforce, including U.S. Postal Service employees. As of fiscal year 1995, FERS covered slightly more federal employees than CSRS. In response to the request of Senator Carl Levin, in his former capacity as Ranking Minority Member of the Subcommittee on International Security, Proliferation and Federal Services, Senate Committee on Governmental Affairs, our objectives in preparing this report were to provide information on what OPM officials and retirement experts view as the recommended content, presentation formats, and timing of retirement education programs and OPM’s and agencies’ retirement education roles, responsibilities, and practices in the context of these recommendations. 
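The FERS contribution rules summarized earlier (an automatic agency contribution of 1 percent of salary, employee contributions of up to 10 percent of salary subject to the $10,000 limit then in effect, and agency matching on the first 5 percent) can be illustrated numerically. The sketch below assumes, for simplicity, a dollar-for-dollar match on the first 5 percent; the exact matching schedule is not spelled out here, so that rate is an assumption for illustration only.

```python
# Illustrative FERS TSP arithmetic based on the rules described above.
# Assumption: a simplified dollar-for-dollar match on the first 5 percent
# of salary; the actual matching schedule may differ.

def tsp_contributions(salary: float, employee_rate: float) -> dict:
    """Annual TSP contributions for a FERS-covered employee (simplified)."""
    employee = round(min(salary * min(employee_rate, 0.10), 10_000.0), 2)
    automatic = round(salary * 0.01, 2)            # automatic agency 1%
    match = round(salary * min(employee_rate, 0.05), 2)  # match on first 5% only
    return {"employee": employee, "automatic": automatic, "match": match}

result = tsp_contributions(salary=60_000, employee_rate=0.08)
print(result)  # employee 4800.0, automatic 600.0, match 3000.0
```

The example shows why contribution decisions matter under FERS: an employee contributing 8 percent of a $60,000 salary receives agency money on only the first 5 percent, while the 1 percent automatic contribution arrives regardless of any employee contribution.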
Because of time and resource constraints, we limited the scope of our review to the education provided to employees covered by CSRS and FERS, who represent the majority of federal civilian employees. To identify OPM’s views on the recommended content, presentation formats, and timing of a retirement education program, we interviewed OPM officials and reviewed OPM’s published guidance on how agencies are to design and implement federal retirement education programs. To identify retirement experts’ views, we interviewed a judgmentally selected group of 15 retirement experts using a structured interview that had been pretested and provided in advance. The experts also responded to a close-ended questionnaire. We used a summary of the experts’ responses as our principal basis for identifying the recommended content, presentation formats, and timing of a retirement education program. In summarizing the experts’ responses to the close-ended questionnaire, we used a super-majority criterion (i.e., agreement on the part of 10 or more experts) to classify a list of 21 potential topics, or content, as (1) essential; (2) recommended, but not essential; or (3) optional. Specifically, we identified a topic as “essential” when 10 or more experts responded that the topic was essential. If the topic did not meet the criterion for being essential, we identified it as “recommended” when 10 or more experts responded that the topic was either essential or recommended. Similarly, if the topic did not meet the criteria for being essential or recommended, we identified it as “optional” when 10 or more experts responded that the topic was essential, recommended, or optional.
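The cascading super-majority rule described above can be expressed directly in code. The function and the example ratings below are illustrative only, not GAO's actual tabulation method.

```python
# Sketch of the super-majority classification rule: a topic is "essential"
# if 10 or more of the 15 experts rated it essential; otherwise "recommended"
# if 10 or more rated it essential or recommended; otherwise "optional" if
# 10 or more rated it essential, recommended, or optional.

THRESHOLD = 10

def classify_topic(ratings: list) -> str:
    essential = ratings.count("essential")
    recommended = essential + ratings.count("recommended")
    optional = recommended + ratings.count("optional")
    if essential >= THRESHOLD:
        return "essential"
    if recommended >= THRESHOLD:
        return "recommended"
    if optional >= THRESHOLD:
        return "optional"
    return "unclassified"

# Hypothetical ratings from 15 experts for a single topic:
ratings = ["essential"] * 8 + ["recommended"] * 4 + ["optional"] * 3
print(classify_topic(ratings))  # recommended
```

Note that the counts cascade: ratings of "essential" also count toward the "recommended" and "optional" thresholds, mirroring the either/or wording of the criterion above.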
To identify candidates who had the appropriate background and experience to serve as retirement experts, we solicited and received nominations from the following eight associations and organizations that specialize in retirement and/or financial planning issues: the American Association of Retired Persons, the Employee Benefit Research Institute, the International Association for Financial Planning, the International Foundation of Employee Benefit Plans, the National Association of State Retirement Administrators, the National Conference of Public Employee Retirement Systems, the Pension Research Council, and the Teachers Insurance and Annuity Association. For each candidate nominated, we reviewed the biographical information provided by the nominating organization(s). We selected 16 individuals who each had extensive experience with pension or retirement issues and specific expertise on retirement education. The selected experts collectively represented a breadth of professional backgrounds in both the public and private sectors, including academics, unions, financial planning, pension administration, advocacy, financial services, and human resource management consulting. We invited each of the selected candidates to share their views on retirement education, and 15 agreed to do so. Appendix I provides more information on the experts with whom we consulted. To identify OPM’s and agencies’ retirement education roles, responsibilities, and practices in the context of the recommendations on program content, presentation formats, and timing, we interviewed officials representing OPM, the Thrift Board, and 12 randomly selected federal agencies that had 1,000 or more employees and whose headquarters were located within the Washington, D.C., metropolitan area. We used a structured interview that had been pretested and provided to the 12 agencies in advance. We also analyzed documents and data provided by the agencies’ officials. 
We used a summary of the agencies’ practices as the principal basis for comparing the actual practices of the 12 agencies with the recommended content, presentation formats, and timing identified by OPM officials and the experts. We did not independently verify agencies’ responses regarding the specifics of the content, presentation formats, and timing of their retirement education programs. Thus, although we used terms such as “provided” and “sponsored” to describe agencies’ practices, we were generally referring to what agencies told us they did. To develop the sample of agencies for our review, we used information from the spring 1997 Central Personnel Data File (CPDF)—an automated information system that contains individual records for most federal civilian employees and is maintained by OPM. The list of agencies used in selecting this sample included 68 organizations that represented a total of 1,682,391 federal employees who were covered by CSRS or FERS. We stratified the 68 organizations according to size (1,000 to 9,999 employees; 10,000 to 99,999 employees; and 100,000 or more employees) and randomly selected 4 agencies from each group. For the Department of Defense (DOD), our list of 68 organizations included only the Departments of the Army, Air Force, and Navy. On this basis, we selected the following 12 agencies for review: the International Trade Administration and National Oceanic and Atmospheric Administration (NOAA) of the Department of Commerce; the Bureau of Reclamation of the Department of the Interior; the Internal Revenue Service (IRS), U.S. Customs Service, and U.S.
Secret Service of the Department of the Treasury; the Health Resources and Services Administration (HRSA) and the National Institutes of Health of the Department of Health and Human Services (HHS); the Department of Housing and Urban Development (HUD); the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA); and the Departments of the Navy and Air Force of DOD. The sampled agencies employed about 42 percent of the employees covered by CSRS or FERS from our sampling universe. As agreed, our analysis did not address the effectiveness of OPM’s administration of federal retirement education, agencies’ programs, or the retirement education that individual federal employees might receive. Also, we did not attempt to independently validate the information provided to us by OPM and the 12 agencies. Although we audited the reliability of CPDF data for fiscal year 1996 and found it sufficiently reliable for most governmentwide analyses, we did not update that audit. However, we are not aware of changes in the way that agencies submit or OPM processes CPDF data that would materially affect the reliability of the data. We used a random sample to have an objective, unbiased sample. However, as a consequence of our small sample size, the retirement education practices described in this report are not generalizable to all agencies that employ 1,000 or more employees and have headquarters in the Washington, D.C., metropolitan area. We are reporting solely on the practices of those agencies we surveyed. We did our review in Washington, D.C., from January 1998 to February 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OPM; the Secretaries of the Department of Commerce, DOD, HHS, HUD, the Interior, the Treasury, and VA; the Commissioner of Internal Revenue; or their designees. OPM and Commerce provided written comments.
DOD’s and IRS’ comments were provided orally by the audit liaison and legislative affairs officer, respectively. These agencies’ comments are presented at the ends of chapters 2 and 3, and OPM’s written comments are reprinted in appendix II. HHS, HUD, the Interior, the Treasury’s Customs Service and Secret Service, and VA said they had no comments on the draft report. OPM and the experts with whom we consulted held generally consistent views regarding the recommended content, presentation formats, and timing of retirement education programs. OPM provided guidance to federal agencies on CSRS and FERS administration in its CSRS and FERS Handbook for Personnel and Payroll Offices, benefits administration letters, and other advisory documents. OPM’s guidance presented various recommendations regarding the design and implementation of agency retirement education programs. The retirement experts with whom we consulted also provided recommendations regarding the content, presentation formats, and timing of a retirement education program. Although the terminology used by OPM and the experts was not identical, we considered the substance of their recommendations regarding content, presentation formats, and timing to be generally consistent. For example, OPM and the experts agreed that new employees need basic information on their retirement system’s characteristics, all employees need financial planning information on a periodic basis during their careers, and employees nearing retirement need transition planning information. Table 2.1 summarizes OPM’s and the experts’ views regarding the content and timing of agency-provided retirement education programs. OPM’s views regarding the design and implementation of agencies’ retirement education programs were reflected in the guidance and support it provided to agencies. 
While allowing agencies to exercise broad flexibility in designing and implementing their retirement education programs, OPM recommended that agencies include certain key topics or content, present information through various formats, and educate employees throughout their careers. The CSRS and FERS Handbook served as the principal vehicle for communicating OPM’s guidance, and OPM updated that guidance on a periodic basis through handbook revisions and benefits administration letters sent directly to the agencies. OPM’s guidance recommended that federal agencies consider including certain content as part of their retirement education programs. OPM’s recommendations were not intended to be exhaustive, and agencies were not required to include them in their retirement education programs. OPM’s recommended topics included the following:

- plan type, including whether an employee is covered by CSRS or FERS;
- eligibility, including minimum age and service requirements for employees to (1) participate in the plan and (2) retire with full benefits;
- employer and employee contributions allowed or required under CSRS or the voluntary contribution program;
- financial planning, including various investment strategies;
- military or prior civilian service deposits, including whether an employee has prior service for which a deposit or redeposit is owed and the effects of payment or nonpayment on an annuity;
- TSP withdrawal options, including when a retiree may begin withdrawing TSP savings as well as the monetary advantages and tax effects of the various withdrawal options;
- annuity estimates;
- divorce or separation, including the potential effect of divorce or separation agreements on retirement benefits;
- designating a beneficiary, including the cost and amount of survivor benefits as well as spousal eligibility for benefits;
- retaining health and life insurance benefits in retirement;
- cost-of-living adjustments (COLA), including how retirement benefits will be adjusted periodically for inflation depending on CSRS or FERS coverage; and
- Social Security and Medicare, including whether employees are covered by these programs and how the programs integrate with their other benefits.

OPM recommended that agencies include written, interactive, and electronic formats as part of their retirement education programs. For example, OPM recommended that agencies use formats such as pamphlets and brochures, periodic workshops and seminars, Intranet/Internet Web sites, and recorded telephonic information in their retirement education programs. According to OPM, agencies that use multiple educational formats are likely to increase the number of employees that they reach through their retirement education programs. OPM recommended that agencies provide employees with retirement information at various stages of their careers, including early career, 5 years before retirement eligibility, 1 year before retirement eligibility, 6 months before retirement, and 2 months before retirement. OPM also recommended that agencies cover certain topics with employees throughout their careers and periodically update information about any changes occurring to federal retirement programs or benefits. Table 2.1 summarizes OPM’s recommendations on when agencies may wish to introduce topics to employees. OPM recommended that agencies identify and invite employees to attend a preretirement seminar within about 5 years before their retirement eligibility and about 1 year before their actual planned retirement. Moreover, OPM believed that agencies should contact employees within 1 year of retirement eligibility and offer those employees one-on-one counseling. Consistent with OPM’s guidance, the retirement experts with whom we consulted recommended specific content, presentation formats, and timing that they considered essential for a retirement education program.
A super-majority (at least 10 of 15) of the experts considered 13 topics to be essential to a retirement education program, while they identified 6 topics as recommended, but not essential, and 2 topics as optional. The experts identified the following 13 topics as being essential to a retirement education program:

- plan type, including whether an employee is covered by CSRS or FERS;
- participation and vesting requirements, or the amount of time that employees must work before they are eligible to (1) contribute to and (2) own, or become “vested” in, accrued benefits of their plan;
- employer and employee contributions that are allowed and/or required;
- estimated assets needed to retire that reflect an individual employee’s desired retirement date, income level, and lifestyle;
- investment alternatives and strategies, including information on the association between investment risk and return, the benefits of saving earlier rather than later, and the importance of diversification across different types of investment vehicles;
- debt management that provides employees with information on how to manage limited resources efficiently and enhance their ability to save;
- tax considerations, including the benefits of saving with pretax versus after-tax dollars;
- retention of agency-provided health and life insurance benefits;
- minimum voluntary retirement dates;
- projected benefit amounts and COLAs;
- disability and survivor insurance, including how these programs are integrated with their other retirement benefits and any associated costs to employees;
- Social Security and Medicare, including whether employees are covered by these programs, how the programs are integrated with their other retirement benefits, and any associated costs to employees; and
- Medigap and long-term care insurance, that is, insurance designed to provide coverage for medical costs not covered by Medicare or other federal health insurance.
The experts also identified the following six topics as recommended, but not essential, for a retirement education program:

- health maintenance, both before and after retirement;
- early or deferred retirement options, including circumstances under which employees would be eligible to receive reduced retirement benefits (1) earlier than the minimum voluntary retirement date or (2) later than the time of actual separation from an agency;
- deciding when and whether to retire;
- withdrawal options, such as taking accrued benefits as an annuity versus as a lump-sum payment;
- postretirement employment, including information on starting a new career or working part-time; and
- inheritance planning, including the preparation of wills and other methods of transferring estates to survivors.

Finally, the experts identified the following two topics as optional components of a retirement education program:

- relocation, including whether and where employees might wish to relocate; and
- planning for increased leisure time.

The experts believed that agencies should avail themselves of a broad range of presentation formats in their retirement education programs. For example, agencies could distribute written guidance, such as brochures and newsletters; present information more interactively by sponsoring seminars, workshops, or one-on-one counseling sessions; and/or provide information upon request by establishing electronic systems, such as Intranet/Internet Web sites and recorded telephonic response systems. The experts believed that each presentation format has its advantages and disadvantages. Moreover, no one format would be optimal for communicating with all employees, because individual learning styles vary.
The experts also believed that each individual employee’s need for information on a specific retirement education topic at any given point in their career is influenced by multiple demographic factors, including their age, marital status, knowledge of financial planning concepts, years until they are eligible or plan to retire, and health status. Thus, agencies are challenged with designing a retirement education program that can meet the needs of all their employees over their entire careers. The experts recommended that agencies focus on their employees’ needs when selecting which presentation formats to include in their programs. To address individual employee learning styles and content needs, the experts recommended that agencies design their retirement education programs to include multiple and interactive formats to the extent possible. Specifically, they viewed one-on-one counseling and seminars as the optimal methods of presenting retirement education. Although these options represent the most costly methods of providing such information, the experts told us that both formats allow agencies to expose employees to a broad range of topics that employees then can pursue further on an as-needed basis. Moreover, employees benefit from being able to get direct and immediate responses to any questions they may have. The experts told us that one-on-one counseling represents the most customized source of information for employees; however, seminars allow for group interactions that may enrich the information available to employees. To better meet the individual content needs of different employees, the experts recommended that agencies choosing to use seminars or workshops should do so by offering customized sessions for specific groups, or segments, of their workforce. For example, agencies might provide seminars that are targeted to employees at different career stages, such as early career, midcareer, and preretirement.
Agencies then could target their content to include those topics that are most relevant to the attending group of employees. This approach would also provide employees with the opportunity to attend seminars periodically throughout their careers. The experts told us that written materials also play an important role in retirement education. These materials, which can be provided in paper or on electronic Web sites, allow agencies to provide consistent and detailed information to all employees in a cost-efficient way. Employees can use such reference materials as often as they like and at their convenience. However, many of the experts with whom we consulted did not recommend that agencies rely on written materials as their primary presentation format because employees may too readily ignore, file, or throw away such materials. In particular, the experts said that younger employees may regard information on retirement planning as something to which they need not devote much attention. The experts recommended that agencies introduce many of the topics identified as essential early within employees’ careers. The experts also recommended that agencies update their employees on this information on a regular basis throughout their careers—approximately once every 1 to 5 years. The table at the beginning of this chapter (see table 2.1) summarizes the experts’ recommendations regarding the content that agencies may wish to present at various times in employees’ careers. The experts recommended that agencies introduce basic plan information to employees within their first year of employment. Additionally, the experts recommended that agencies update employees regularly (i.e., continuously or at least once a year) on many of the topics that the experts identified as essential, recommended, or optional after the topics have first been introduced. 
The experts also recommended that agencies introduce information on minimum retirement dates to employees more than 5 years before they are eligible for full retirement benefits and information on postretirement employment, relocation, and planning for increased leisure time late in employees’ careers. The experts told us that all employees need information early and often during their careers, regardless of whether they are covered by CSRS or FERS. However, the focus or content of agency-provided information to employees may need to be tailored to address the unique aspects of each retirement system. For example, the experts told us that it is particularly important for employees covered by FERS to understand the level of allowed contributions to their TSP accounts, the amounts of agency matching contributions that are available, the risk and investment returns associated with each available investment alternative, and the benefits generally associated with beginning to contribute to TSP early in one’s career. While employees’ decisions have a limited impact on the amount of their future annuities from CSRS and FERS, employees may benefit from receiving information early in their careers on such topics as the future projected value of their annuities, vesting requirements, and available withdrawal options. Employee decisions made with or without information on such topics could affect the amount of an employee’s future retirement benefits. OPM, Commerce, DOD, and IRS agreed with our findings. In its written comments (see app. II), OPM added that it was gratified that there is agreement among our retirement experts, OPM, and agencies on the makeup of retirement education programs. OPM said it was working continually to improve the quality and comprehensiveness of benefits information employees receive and that our findings would be very useful in its efforts to enhance the products and services it makes available to agencies. 
IRS similarly indicated agreement with OPM’s and our experts’ recommendations and said that it would consider them in contemplating whether improvements could be made regarding the education provided early within employees’ careers. OPM and the agencies we surveyed both played a role in providing retirement education to federal employees covered by CSRS and FERS. As part of its governmentwide responsibility for federal retirement systems, OPM supplemented the guidance it provided to agencies on the design and implementation of retirement education programs by developing educational materials, sponsoring training, and providing technical advice to agencies’ benefits personnel. Agencies, which had primary responsibility for developing retirement education programs, generally provided information to employees on topics such as the basic features of CSRS and FERS and financial planning issues for retirement, which were recommended by OPM and the retirement experts with whom we consulted. The agencies distributed this information to employees using a variety of written, interactive, and electronic presentation formats that were available throughout employees’ careers, also as recommended by OPM and the experts. In addition to providing agencies with guidance on how to design and implement their retirement education programs (see ch. 2), OPM also provided educational materials and other support to agencies’ benefits officers and federal employees. Specifically, OPM developed educational materials that updated agencies on any changes in the law or regulations affecting retirement programs and that agencies could distribute directly to federal employees as part of their programs. OPM also supported agencies by sponsoring training and providing technical assistance to resolve case-specific issues for benefits staff. 
OPM published retirement education materials that agencies could distribute to federal employees or use as guidance in developing their own customized program materials. These materials included brochures and pamphlets as well as videos and CD-ROM programs that provided detailed information on federal retirement programs, such as retirement eligibility requirements, annuity formulas, TSP contribution limits, requirements for maintaining health and life insurance in retirement, and survivor benefits. Agencies and employees could also access OPM’s Web site for retirement information and links to other related Web sites, such as the Thrift Board’s site for TSP participants. Although OPM indicated in its guidance that supplying retirement education to employees is primarily an agency role, officials told us that they supported agencies’ efforts in these ways to help agencies cope with increased workloads and to allow agencies’ staff to devote more time to such activities as providing one-on-one counseling. For example, during the 1998 open season, when employees covered by CSRS could elect to transfer to FERS, OPM provided agencies with detailed information on the specifics of each retirement program, frequently asked questions and answers for individuals considering whether to transfer to FERS, and a computer model that allowed agencies to project what an individual’s benefits might be, given different scenarios. Consistent with statutory requirements, OPM also supported agencies’ retirement education programs by providing training for benefits officers on a periodic basis. Specifically, OPM sponsored quarterly meetings of the interagency network for retirement and insurance, an annual Fall Festival of Training, an annual benefits officer conference, and other training courses on an as-needed basis throughout the year, all of which provided agencies’ personnel with both training and networking opportunities. 
In support of agencies’ retirement counseling services, OPM provided expert advice and assistance on specific technical issues or cases. OPM officials told us that they have also provided direct support to certain agencies during times of unusual requirements, such as when OPM staff helped to facilitate the delivery of federal retirement and insurance benefits to those employees and survivors affected by the Oklahoma City bombing in 1995. At the time of our review, officials told us that OPM was developing a benefits service center that would augment agencies’ retirement education programs by providing benefits officers and individual employees with customized benefits and retirement information and counseling. Most of the agencies that we surveyed indicated that, to a great or very great extent, OPM was effective and timely in communicating retirement information and benefits changes. Moreover, OPM officials told us that they conducted a customer satisfaction survey in fiscal year 1998 that included all agencies’ human resources directors and a sample of agencies’ benefits officers. They told us that the results of this survey indicated that agencies generally rated OPM guidance materials as excellent and were highly satisfied with OPM’s efforts to share information and provide technical assistance.

The retirement education programs of the agencies we surveyed generally included those topics recommended by OPM and the experts with whom we consulted. For example, agencies’ officials told us that they included information on the basic features of CSRS and FERS, financial planning for retirement, and maintaining federal health and life insurance in retirement. Agencies also provided information to employees on whether and/or how Social Security would contribute to their retirement benefits, particularly for those employees who were covered by FERS. Officials said that agencies provided retirement planning information, but not advice, regardless of the topics included.
Agencies we surveyed provided their employees with information on a variety of topics related to the basic features of CSRS and FERS. For example, agency materials that we reviewed typically included information on participation and vesting requirements for both the annuity and TSP components of each retirement system, required and voluntary contributions made by agencies and/or employees, minimum age and service requirements for full retirement benefits, as well as survivor and disability insurance benefits. In addition to this descriptive information on federal retirement benefits, the agencies also typically provided information that their employees could use to plan for their future retirements. For example, agencies commonly provided employees with information on their projected future benefits, tools for determining what level of assets might be needed in retirement, and general investment strategies for accumulating additional assets if desired. Because federal employees covered by CSRS and FERS are eligible for continued health and life insurance benefits in retirement, agencies we surveyed emphasized the importance of maintaining these benefits in their retirement education programs. For example, the agencies informed employees that they generally must be enrolled in the federal health and life insurance benefits programs for the full 5 years immediately preceding their retirement to qualify for these benefits. The agencies also provided information on how employees could provide these benefits for their survivors if they so choose. Agencies’ officials told us that they also included information in their retirement education programs on how Social Security is integrated with federal annuity and TSP benefits. This information is particularly important to those employees covered by FERS, because Social Security represents one of the three components of their retirement plan. 
Agencies likewise provided information on Social Security to employees covered by CSRS, because a portion of these employees may also be eligible for full or reduced Social Security benefits on the basis of their spouses’ work histories, work they did before joining the federal workforce, and/or work they plan on doing following their retirement from federal service. Consistent with OPM and expert recommendations, the officials representing the agencies we surveyed told us that they used a variety of presentation formats in their retirement education programs, including written publications, interactive formats such as seminars and one-on-one counseling, and electronic formats such as Web sites and automated systems. Agencies we surveyed used numerous publications, such as brochures and newsletters, to provide detailed information to employees on their retirement plans and issues to consider in planning for their retirement. Although a few agencies generated some of their own customized materials, the agencies we surveyed generally used written materials made available by OPM or the Thrift Board. According to the agencies’ officials, these materials were convenient and high-quality sources of information for employees. Agencies also used Web sites to make many of these publications more readily available. Agencies’ officials said that they supplemented their written reference materials by using more interactive formats, in particular, seminars and one-on-one counseling. Agencies offered seminars to expose employees to information on a wide variety of topics, which employees could then individually pursue in more detail as needed or desired. When employees requested one-on-one counseling sessions, agencies provided employees with highly customized retirement planning information, including benefits decisions that needed to be made at retirement and the specific steps needed to apply for retirement. 
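Much of the planning information described above, such as projected future benefits and asset-level targets, ultimately rests on simple compound-growth arithmetic. The following sketch is purely illustrative and is not drawn from any agency’s actual tool; the function name, the single assumed rate of return, and the end-of-year contribution timing are all our assumptions:

```python
def project_balance(current_balance, annual_contribution, annual_return, years):
    """Illustrative projection of a defined-contribution (TSP-style) balance.

    Assumes one fixed annual return and a single contribution credited at the
    end of each year -- a simplification; real statements reflect actual fund
    returns, pay-period contributions, and agency matching.
    """
    balance = float(current_balance)
    for _ in range(years):
        # Grow the existing balance for one year, then add the contribution.
        balance = balance * (1 + annual_return) + annual_contribution
    return round(balance, 2)

# Example: a $50,000 balance, $5,000 contributed yearly, a 5% assumed
# return, projected over the 20 years remaining until eligibility.
print(project_balance(50_000, 5_000, 0.05, 20))  # about $298,000
```

A real agency tool would also have to account for the annuity component of CSRS or FERS and, for FERS employees, projected Social Security benefits; this sketch covers only the savings component.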
To ensure that employees received expert information on a wide range of topics, agencies we surveyed generally contracted out for seminars. However, the agencies did not contract for one-on-one counseling. Agencies’ officials told us that their staff were best able to provide counseling to employees, because they had access to employees’ personnel records, were well-informed on the inherent complexities of the federal retirement programs, and were in a position to take personnel actions on behalf of employees, if necessary. Agencies we surveyed also used a variety of electronic media to further distribute retirement education to their employees, including videos, telephone response systems, Intranet/Internet Web sites, and computer simulation models. For example, several agencies’ officials told us that they videotaped their retirement seminars (1) to make these sessions available to geographically dispersed employees who might otherwise be unable to attend and/or (2) to allow employees to view the seminars multiple times at their convenience. The agencies also commonly provided retirement information using Web sites that included links to other federal sources of retirement information, including OPM, the Thrift Board, and the Social Security Administration. The Air Force, IRS, and HUD also used a centralized and automated call center to provide retirement information to geographically dispersed employees in a manner that they considered to be consistent and cost-efficient. Each of these agencies used an interactive system that allowed employees to access a variety of personnel information, including retirement education, by calling a toll-free telephone number. In addition to prerecorded information, employees could reach a benefits counselor who had access to individual personnel records and could provide answers to specific questions.
Agencies’ officials said that these centralized and more automated systems were developed in response to downsizing that resulted in the agencies having fewer personnel staff available to provide retirement education to employees. Officials at other agencies, including HRSA and VHA, told us that they were considering adopting a similar approach. OPM officials believed that such systems are likely to become more common across the federal service. Consistent with OPM and expert recommendations, the agencies we surveyed made retirement education available continuously throughout employees’ careers. Agencies’ officials told us that they view retirement education as a shared responsibility between the agencies and employees. That is, agencies were responsible for making such information readily available; however, employees were also responsible for determining when and how often to seek this information. Agencies’ officials told us that they provided brochures and other written retirement education materials to employees early in their careers as a part of new employee orientations. Written materials were then provided periodically on an as-needed basis. For example, agencies’ officials told us that they provided their employees with revised publications during the 1998 CSRS to FERS open season. The agencies’ officials also told us that their payroll offices mailed employees annual benefits statements containing information on benefits earned to date and the projected value of those benefits at the time of retirement eligibility. Agencies also provided publications on a self-serve basis using centralized benefits resource centers/libraries and/or posting these documents on their retirement Web sites. All of the agencies we surveyed sponsored retirement seminars that were designed for employees who were approximately within 5 years of being eligible to retire.
However, several agencies’ officials told us that employees who had more than 5 years before becoming eligible were also allowed to attend these seminars, space permitting. Moreover, five of the surveyed agencies (i.e., the Air Force, NOAA, the Bureau of Reclamation, HRSA, and Customs) sponsored separate midcareer seminars that were designed to address topics most relevant to employees with approximately 15 years of federal service. These agencies’ officials told us that they provided these additional seminars because they felt that attending a seminar for the first time at 5 years before retirement might be too late to allow some employees to fully prepare for retirement when they first become eligible. Thus, many federal employees had the option of taking more than one retirement seminar during their careers. Finally, the agencies we surveyed made retirement education available to employees throughout their careers using a variety of other formats, including the Web sites and automated information systems we previously discussed. All of the agencies we surveyed told us that one-on-one counseling was available to employees at any point in their careers upon request. OPM, Commerce, DOD, and IRS agreed with our findings. In its written comments (see app. II), OPM said it believes very strongly that employees should receive information about their benefits regularly throughout their careers so that retirement is simply the culmination of a long planning process. OPM also commented that it is very important to make information available in a variety of ways to meet the varying needs of both employing agencies and their employees. IRS said that it is currently delivering preretirement and ongoing education programs that generally include the information recommended by OPM and our retirement experts, and that it may consider whether improvements could be made to the education provided to employees early in their careers. 
Pursuant to a congressional request, GAO reviewed the retirement education that the Office of Personnel Management (OPM) and agencies provide to federal civilian employees covered by the Civil Service Retirement System (CSRS) or the Federal Employees' Retirement System (FERS). GAO noted that: (1) OPM and the experts with whom GAO consulted held generally consistent views regarding the recommended content, presentation formats, and timing of retirement education programs; (2) they believed that these programs should provide employees with information on certain topics, such as plan features and financial planning, and that agencies should consider using multiple formats so as to accommodate employees' varying needs; (3) they also believed that such information should be provided early and throughout employees' careers; (4) OPM provided guidance to agencies on the design and implementation of retirement education programs and supplemented the guidance with educational materials, training, and technical advice for agencies' benefits staff; (5) agencies had primary responsibility for designing and implementing their programs according to their agency-specific needs; (6) the retirement education programs of the agencies reviewed generally included those topics recommended by OPM and the experts; (7) in providing retirement education, agencies' officials said that they made information available on a variety of topics, including the specific features of CSRS and FERS, the requirements for maintaining federal health and life insurance benefits in retirement, and financial planning for retirement; (8) agencies' officials told GAO that they used a wide variety of presentation formats to communicate retirement education to their employees; (9) all of the agencies that GAO reviewed provided employees with written educational materials that were supplemented with interactive seminars and one-on-one counseling; (10) agencies provided retirement planning information, but not advice, regardless of the presentation format used; (11) agencies' officials also said that they generally provided retirement education to employees during their initial orientation and throughout their careers; (12) all of the agencies in GAO's review sponsored seminars designed for those employees who were nearing retirement eligibility; (13) some agencies also sponsored additional seminars that were specifically designed for employees who had approximately 15 years of federal service to encourage employees to begin planning for their retirement earlier in their careers; (14) agencies also provided one-on-one counseling at any time upon request; and (15) agencies believed that retirement education is a shared responsibility between agencies and employees, and that employees must ultimately decide for themselves whether or when to seek retirement information.
One of DOD’s goals is to prepare its combat units for wartime operations by providing units with the most realistic training possible. DOD operates and maintains hundreds of training ranges located throughout the country. These ranges are located in a wide variety of climates and include the full range of training terrains, such as ocean areas, desert and mountainous regions, and jungle-like environments, giving DOD combat units the opportunity to train in environments similar to those in which they will most likely operate once deployed for wartime operations. These training areas also encompass critical habitat and are home to a variety of endangered species. Like other federal, state, local, and private facilities, DOD installations are generally required to comply with environmental and other laws that are intended to protect human health and the environment from harm. However, several environmental statutes include a national security exemption that DOD may invoke to ensure the requirements of those statutes would not restrict military training needs that are in the paramount interest of the United States. These exemptions require a case-by-case determination by an authorized decision maker and provide authority for suspending compliance requirements for actions at federal facilities, including military installations. To date, DOD has received or invoked exemptions under the Coastal Zone Management Act (CZMA), Endangered Species Act, Marine Mammal Protection Act, and RCRA. Although seldom made, DOD’s requests for exemption have been approved in every case. Table 1 presents the environmental statutes that authorize case-by-case exemptions and the approval standards.
In 2002, DOD submitted to Congress an eight-provision legislative package, referred to as the Readiness and Range Preservation Initiative, proposing revisions to six environmental statutes on the basis of DOD’s concerns that restrictions in these statutes could limit realistic preparations for combat and negatively affect military readiness. DOD also requested two additional provisions that would allow DOD to cooperate more effectively with third parties on land transfers for conservation purposes. To date, Congress has enacted five of the Readiness and Range Preservation Initiative provisions. The fiscal year 2003 defense authorization act directed the Secretary of the Interior to prescribe regulations for issuing permits for the “incidental takings” of migratory birds during military training exercises authorized by the Secretary of Defense and provided an interim exemption from the Migratory Bird Treaty Act’s prohibition against taking, killing, or possessing any migratory birds except as permitted by regulation, until the implementation of new regulations. DOD had been concerned about the effects of a court decision holding that certain military readiness activities resulting in migratory bird takings violated the Migratory Bird Treaty Act. Interior department regulations published in February 2007 allow the Armed Forces to take migratory birds incidental to military readiness activities, provided that, for those activities that the Armed Forces determine may result in a significant adverse effect on a population of a migratory bird species, the Armed Forces confer with the FWS to develop and implement appropriate conservation measures to minimize or mitigate those effects. The Secretary of the Interior retains the power to withdraw or suspend the authority for incidental takings of migratory birds for particular activities under certain circumstances.
Two additional provisions enacted in the fiscal year 2003 defense authorization act authorized the Secretary of a military department to (1) enter into an agreement to address encroachment issues with a state or local government or any private organization committed to the conservation, restoration, or preservation of land and natural resources and (2) convey any surplus real property under the Secretary’s administrative control that is suitable and desirable for conservation purposes to any state or local government or nonprofit organization committed to the conservation of natural resources on real property. The fiscal year 2004 defense authorization act enacted two of the five remaining Readiness and Range Preservation Initiative provisions by authorizing DOD exemptions from the Endangered Species Act and the Marine Mammal Protection Act. One of the revisions to the Endangered Species Act precluded the Secretary of the Interior from designating as critical habitat DOD lands that are subject to an approved integrated natural resources management plan, if the Secretary makes a written determination that such a plan provides a benefit to the species being designated. DOD, like other federal agencies, is still required to consult with the FWS and the National Marine Fisheries Service, as appropriate, to ensure that actions it performs, authorizes, funds, or permits are not likely to jeopardize the continued existence of a listed species or adversely modify its critical habitat. In DOD’s view, this statutory revision was needed to avoid the potential of any future critical habitat designations that could restrict the use of military lands for training. The other revision to the Endangered Species Act requires the Secretary of the Interior to consider effects on national security when deciding whether to designate critical habitat, but does not remove DOD from being subject to all other protections provided under the act.
The revision to the Marine Mammal Protection Act authorized the Secretary of Defense to exempt for a specific period, not to exceed 2 years, any action or category of actions undertaken by DOD or its components from compliance with the act’s prohibition against illegal takings of marine mammals, if the Secretary determines it is necessary for national defense. The revision also amended the definition of “harassment” of marine mammals, as it applies to military readiness activity, to require evidence of harm or a higher threshold of potential harm, and required the Secretary of the Interior to consider the impact on the effectiveness of the military readiness activity in the issuance of permits for incidental takings. In DOD’s view, these amendments were needed to prevent restrictions on the use of the Navy’s sonar systems. As it had each year since fiscal year 2003, DOD included in its proposed National Defense Authorization Act for Fiscal Year 2008 the three remaining Readiness and Range Preservation Initiative provisions, which would provide exemptions from certain requirements of the Clean Air Act, RCRA, and CERCLA. As with previous Congresses, the 110th Congress did not include these provisions in the version of the bill that went before both houses for a final vote. Descriptions of the three remaining proposals follow.

First, the proposed revision to the Clean Air Act would have deferred emissions generated by military readiness activities from conforming to applicable state clean air implementation plans for achieving federal air quality standards and allowed DOD up to 3 years to satisfy these requirements. To be in conformity, a federal action must not contribute to new violations of the standards for ambient air quality, increase the frequency or severity of existing violations, or delay timely attainment of standards in the area of concern.
DOD proposed this revision to provide flexibility for transferring training operations to areas with poor air quality without restrictions on these operations due to generated emissions. In addition, the revision would have required EPA to approve a state plan even if emissions from military readiness activities would prevent a given area within the state from achieving clean air standards. Second, DOD’s proposed revision to RCRA would have amended the definition of “solid waste” to exclude munitions that are on an operational range incident to their normal use, thereby excluding such munitions from regulation under RCRA. RCRA governs, among other things, the management of hazardous wastes, including establishing standards for treatment, storage, and disposal facilities. Third, the proposed revision to CERCLA, under which entities responsible for releases of hazardous substances are liable for associated cleanup costs, would have similarly amended the definition of “release.” CERCLA defines release as any spilling, leaking, pumping, pouring, emitting, emptying, discharging, injecting, escaping, leaching, dumping, or disposing into the environment (including the abandonment or discarding of barrels, containers, and other closed receptacles containing any hazardous substance or pollutant or contaminant). DOD’s view is that the proposed revisions to RCRA and CERCLA would clarify existing regulations EPA finalized in its 1997 Military Munitions Rule, pursuant to which “used” or “fired” munitions on a range are considered solid waste, subject to disposal requirements, only when they are removed from their landing spot. DOD sought this revision to eliminate the possibility of legal challenges to the rule, which might have resulted in an active range being closed to require the removal of accumulating munitions and cleanup of related contamination, thus restricting training. 
To the extent that encroachment adversely affects training readiness, opportunities exist for the problems to be reported in departmental and military service readiness reports. DOD defines readiness as the ability of U.S. military forces to fight and meet the demands of the national military strategy. Readiness is the synthesis of two distinct but interrelated levels: unit readiness (the ability of each unit to provide capabilities required by the combatant commanders to execute their assigned missions) and joint readiness (the combatant commander’s ability to integrate and synchronize ready combat and support forces to execute his or her assigned missions). DOD has stated that the goal of any readiness reporting or assessment system is to reveal whether forces can perform their assigned missions. Historically, DOD has inferred this ability from the status of unit resources via the Global Status of Resources and Training System. This system is the primary means for units to report readiness against designed operational goals. The system’s database indicates, at selected points in time, the extent to which units possess the required resources and training to undertake their wartime missions. DOD found, however, that these input-based assessments do not yield direct information on whether a force can actually perform an assigned mission despite potential resource shortfalls. In the spring of 2002, DOD announced plans to create a new Defense Readiness Reporting System that would provide commanders with a comprehensive assessment of the ability of capable entities to conduct operations without the command having to research and examine numerous databases throughout DOD, such as the Global Status of Resources and Training System and the service-specific readiness reporting systems.
According to DOD, this new system is expected to be able to seamlessly integrate readiness data with planning and execution tools, providing a powerful means for rapidly assessing, planning, and executing operations. This system expands the readiness reporting process from simple resource-based reporting to the use of near real-time readiness information and dynamic analysis tools to determine the capability of an organization to execute tasks and missions. Specifically, the system represents a shift from (1) resources to capabilities—inputs to outputs; (2) deficiencies to their implications; (3) units to the combined forces; and (4) frontline units to all units contributing to front line operations. This report continues a series of reports that we have issued on training constraints resulting from encroachment on DOD’s training ranges. The following summarizes key issues from these reports: In June 2002, we reported that DOD’s readiness reports did not indicate the extent to which environmental requirements restricted training activities, and that these reports indicated a high level of military readiness overall. We also noted individual instances in which environmental requirements restricted training at some military installations and recommended that DOD’s readiness reporting system be improved to more accurately identify training problems that might be attributed to the need to comply with statutory environmental requirements.
We found that (1) despite the loss of some capabilities, service readiness data did not indicate the extent to which encroachment has significantly affected reported training readiness; (2) though encroachment workarounds may affect costs, the services had not documented the overall impact of encroachment on training costs; and (3) the services faced difficulties in fully assessing the impact of training ranges on readiness because they had not fully defined their training range requirements and lacked information on the training resources available to support those requirements. In April 2003, we testified that environmental requirements were only one of several factors that affected DOD’s ability to carry out training activities, but that DOD was still unable to broadly measure the effects of encroachment on readiness. We found that (1) encroachment affected some training range capabilities, required workarounds, and sometimes limited training, at all stateside installations and major commands that we visited; (2) service readiness data in 2002 did not show the impact of encroachment on training readiness or costs, and though individual services were making some assessment of training requirements and limitations imposed by encroachment, comprehensive assessments had yet to be done; and (3) although some services reported higher costs because of encroachment-related workarounds for training, service data systems did not capture the costs comprehensively. We recommended a more comprehensive plan that clearly identified steps to be taken, goals and milestones to track progress, and required funding. In June 2005, we found that DOD continued to face various difficulties in carrying out realistic training at its ranges. We reported that deteriorating conditions and a lack of modernization adversely affected training activities and jeopardized the safety of military personnel. 
We observed various degraded conditions at each training range visited, such as malfunctioning communication systems, impassable tank trails, overgrown areas, and outdated training areas and targets. DOD’s limited progress in improving training range conditions was partially attributable to a lack of a comprehensive approach. We found that (1) while the services had individually taken a varying number of key management improvement actions, such as developing range sustainment policies, these actions lacked consistency across DOD or focused primarily on encroachment without including commensurate efforts on other issues, such as maintenance and modernization; (2) though the services could not precisely identify the funding required and used for their ranges, range requirements had historically been inadequately funded; and (3) although DOD policy, reports, and plans had either recommended or required specific actions, DOD had not fully implemented these actions. The requirement to comply with environmental laws has affected some training activities and how they are conducted, but our review of DOD’s readiness data does not confirm that compliance with these laws hampers overall military readiness. During our visits to training ranges, we found some instances where training activities were cancelled, postponed, or modified in order to address environmental requirements. However, DOD officials responsible for planning and facilitating training events may implement adjustments to training events, referred to as “workarounds,” to ensure training requirements are still accomplished. Our discussions with officials responsible for readiness data and our review of these data did not confirm that military readiness has been hindered because of restrictions imposed by environmental laws. 
OSD and each of the military services are currently in the process of developing systems that will provide DOD leadership and outside stakeholders a better understanding of how external factors, such as environmental laws, affect the department’s training and readiness. Compliance with various environmental laws has created restrictions on how DOD manages, plans, and conducts training exercises on its installations. Military training areas are subject to environmental laws which are intended to help the survival and preservation of the natural resources located on these training lands. Many of these training areas are home to endangered species; thus, areas that could be used for training or had been used for training on DOD installations are restricted and blocked off to prevent units from disturbing or harming the habitat of the endangered species, as the following examples illustrate. Marine Corps Base Camp Pendleton, California Because of competing land use and various environmental restrictions, officials at the base have reported that Marine combat units can use only about 6 percent (less than 1 mile) of its 17 miles of sandy beaches along the coast of the Pacific Ocean for major amphibious landing training exercises. Two of the environmental restrictions cited were for the threatened San Diego fairy shrimp, the endangered Coastal California gnatcatcher and its habitat. Another restriction involved the nesting season for the endangered bird called the California least tern (see fig. 1). Camp Pendleton officials said closing one beach during the nesting season introduces some artificiality into its training events because commanders would be limited in the number of landing areas available to them during offensive operational exercises. Barry M. 
Goldwater Range, Luke Air Force Base, Arizona Training officials stated that in calendar year 2004, about 8 percent (72 of 878) of F-16 training exercises were cancelled due to the presence of the endangered Sonoran pronghorn on the training range impact area. Aberdeen Proving Ground, Maryland Installation officials told us that on eight different occasions between April 2003 and June 2006, training exercises for the Naval Special Warfare Combatant Command were cancelled unexpectedly due to the presence of new bald eagle nests in the training area and concerns that harm to the eagle population could have legal repercussions. To accomplish the required training, the Navy official responsible for scheduling these exercises told us that the expeditionary force teams had to reschedule their training exercises for later dates or alternate locations, which were not as beneficial as the training area provided at Aberdeen Proving Ground. Naval Base Coronado, San Clemente Island, California Training officials told us that during the fire season the Navy is prohibited from firing illumination rounds on the shore bombardment area at San Clemente Island, which is used by the Navy for surface ship live-fire exercises. The exact dates for fire season vary from year to year, depending on the weather, but the season generally lasts 8 months. According to Navy officials, some sailors do not receive this type of training until after they are deployed. Army National Training Center, Fort Irwin, California Installation officials said the presence of the threatened desert tortoise caused trainers and commanders to plan training activities around areas designated and blocked off for the species’ protection.
Some military commanders believe that compliance with environmental laws protecting natural resources may cause them to design training programs and scenarios that differ from what units would face once deployed for wartime operations. However, we found no evidence that combat units are unable to accomplish their training requirements despite the requirement to comply with various environmental laws. Furthermore, some officials we spoke with at these installations indicated that the training areas that remain available after protected zones have been established for these endangered species are sufficient to train units. Some OSD officials and other officials within DOD expressed the view that, although combat units can satisfy training requirements and may be deemed ready for combat deployments, compliance with environmental laws can significantly degrade the intended “realistic training” these units receive. According to those officials, when commanders and trainers are required to deviate from original training plans and procedures in order to comply with various environmental laws, combat units may not receive training experiences that mirror situations they might experience in a wartime scenario. These officials acknowledged the difficulty in measuring the impact environmental restrictions have on training, but they said constant deviation from realistic training scenarios has the potential to create an ill-prepared force and could possibly leave combat units vulnerable once deployed for combat missions. Despite having to comply with environmental restrictions, DOD is able to meet its readiness and training requirements through adjustments or modifications to training activities, known as workarounds. Usually trainers and planners know the environmental restrictions they face before a training event and plan accordingly to ensure required training tasks are completed.
For example, at Camp Pendleton, California, officials said that to protect San Diego fairy shrimp habitat and archaeological cultural sites, Marines plant flags to represent foxholes instead of digging foxholes on the beach. Marine Corps officials said this workaround allows them to meet their training requirement but limits their ability to conduct realistic training. Similarly, to accomplish training requirements and to protect aquatic and bank-side habitat for an endangered salmon species, officials at the Yakima Training Center, Washington, said vehicle traffic is limited to the use of bridges instead of allowing units to drive through creeks, which would better approximate actual battlefield conditions. Officials acknowledged that complying with environmental laws can make it difficult at times to plan and conduct training events; however, these officials also acknowledged that military operations will always be subject to external restrictions whether units operate within the United States or abroad. For example, DOD officials said when units are deployed they may be restricted from damaging religious sites, such as churches or mosques, or may have to avoid dangerous operating areas like mine fields, so learning to deal with restrictions is standard operating procedure and the military has adapted to dealing with these requirements. In many cases, officials responsible for scheduling and facilitating training events incorporate environmental restrictions into planned training scenarios. For example, Fort Stewart, Fort Lewis, and Marine Corps Base Camp Pendleton officials said trainers instruct units to pretend restricted training areas are holy grounds, mine fields, or any other restricted area in theater and advise them to avoid these areas.
According to DOD officials, implementing these types of workarounds allows the department to accomplish its training requirements while ensuring natural resources are sustained and protected, and offers an element of realism in terms of the need to avoid certain venues when units are actually deployed. Readiness data we reviewed for active duty combat units did not confirm that military readiness was hindered because of restrictions imposed by various environmental laws. In order to determine whether combat units are capable and ready to deploy for wartime missions, DOD and the military services use their unit readiness reporting systems to, among other things, report on whether a unit has received an adequate amount of training to perform its assigned mission prior to deployment. Two of the systems used to track unit readiness reporting are the Status of Resources and Training System, which is a DOD-wide readiness rating system, and the Army Readiness Management System. In the Status of Resources and Training System, if a unit is not adequately trained and is unable to perform its assigned mission, commanders record a less-than-satisfactory assessment score in the system and may include a brief summary in the “commanders’ comments” section within the system that explains why the unit is unable to perform its assigned mission. Our review of these reports for fiscal year 2006 and fiscal year 2007, including a review of the written commanders’ comments for Army, Navy, and Marine Corps active duty combat units, revealed that when units had not received an adequate amount of training, it was for a variety of reasons, such as not having enough assigned personnel or equipment. However, environmental restrictions did not appear as reasons why units were not adequately trained.
Although we did not independently review readiness data for Air Force units due to data availability and time constraints, officials responsible for managing and maintaining these data told us that environmental restrictions generally did not appear as reasons why units were not adequately trained. DOD officials responsible for planning and facilitating DOD unit combat training at the installations we visited stated that a unit’s readiness is generally not affected by environmental restrictions imposed on the installations. According to some officials, environmental restrictions may in fact hinder a unit from receiving adequate training, but DOD’s readiness reporting system does not capture the ability of individual ranges to support training or the effects of endangered species and their habitat, wetlands, air quality, water quality, and other encroachment factors on range availability. According to one official responsible for managing data reported in the readiness system, there is no requirement to report environmental restrictions in the system, even though commanders have the option to do so. DOD officials said many commanders do not record environmental restrictions as a barrier to training because they use workarounds to ensure training tasks are accomplished, even if the environmental restriction caused them to alter or delay a training event. OSD and the services currently have efforts underway to develop systems to measure the effects encroachment factors, including environmental restrictions, have on an installation’s ability to meet its training mission. For example, the Office of the Under Secretary of Defense for Personnel and Readiness has begun to develop a new functionality within its Defense Readiness Reporting System that would provide DOD leadership and outside stakeholders, such as Congress, a better understanding of how external factors, such as environmental laws, affect training activities and readiness.
Additionally, over the last few years, the services have spearheaded separate initiatives to track and report the encroachment factors that are affecting training on their installations. OSD officials said they will use these systems as data feeds into the new functionality within the Defense Readiness Reporting System. DOD is currently working to update and improve its Defense Readiness Reporting System so that it can assess the constraints a military range faces when facilitating training for combat units. According to DOD officials we met with who are responsible for the development, update, and implementation of the Defense Readiness Reporting System, this system is expected to soon have the capability to identify the extent to which encroachment factors affect a range’s ability to support various operational capabilities, such as combat, combat support, and combat service support. Although this system is in the early stages of development, DOD plans to pilot test this new functionality during calendar year 2008. According to DOD officials, there are still ongoing discussions with the services to solidify and agree on all the factors that will be measured. These officials told us they expect decisions to be finalized in the early part of fiscal year 2008, but at the time of this review OSD and the services had not come to a final agreement. Over the last few years, the Army has been working to introduce systems to report and track factors affecting training on its installations. The Army’s Installation Status Report (Natural Infrastructure) is a new decision-support tool used by Army leadership to assess the capability of an installation’s natural infrastructure to support mission requirements.
In addition, the Army has developed an Encroachment Condition Module that quantitatively evaluates the impact of eight encroachment factors—threatened and endangered species, critical habitat, cultural resource sites, wetlands, air quality regulations, Federal Aviation Administration regulations, noise restrictions, and frequency spectrum—in order to assess measurable impact to training and testing at the installation and range level. Although the Army has made progress developing these systems, at the time of this review it was still in the process of field-testing them and thus had not finalized and released them throughout the Army. During discussions at the Army installations that we visited, multiple officials expressed concerns that some of the reports generated by the Installation Status Report (Natural Infrastructure) appear to exaggerate the factors affecting the installations’ ability to support training requirements. These officials were also concerned that the data generated from the Encroachment Condition Module do not reflect the actual environmental restrictions placed on the installations, which appear to significantly limit the installations’ ability to provide unit-level training. Some of these installation officials have also written memorandums expressing their concerns that the installation status report does not provide an accurate picture of the mission readiness of installations and suggested steps Army headquarters should take to ensure this system is more useful. On the basis of our review of summary data from the Encroachment Condition Module, we believe that discrepancies exist between the data on encroachment restrictions and the actual areas available for training at Fort Lewis, Washington, and Fort Stewart, Georgia.
According to Army officials, at the time of our visits to these installations, the Army was in the process of working with installation officials to ensure that these data were accurate and current enough to enable decision makers to plan training events. The Navy has an effort underway to develop a web-oriented installation and range encroachment database that will assist it in identifying how encroachment factors affect unit training on its training ranges across the United States. For example, in August 2006 the Navy completed the initial development of a Navy-wide encroachment database to include encroachment issues identified by installations, ranges, and commands throughout the Navy. The Navy intends to finalize database development and link this information to its established repositories in order to begin generating reports for Congress. The Navy expects to have a user-friendly database available for use on its installations and ranges by June 2008. The Marine Corps’ Training and Range Encroachment Information System was developed as a part of an encroachment quantification study done at Marine Corps Base Camp Pendleton in 2003. This system is a tool intended to assess an installation’s ability to support required training, rather than assess the readiness of an individual Marine or Marine unit going through the training. According to Marine Corps officials, this system represents a prototype solution for collecting and quantifying encroachment effects that has the potential to be applied to other Marine Corps ranges and bases. However, according to these officials, this system has not been fielded and implemented across the Marine Corps because of questions about the amount of resources that would be required. As a result, Marine Corps officials have stated that more work needs to be done before this system will be released.
In January 2008 the Air Force completed the development of its Natural Infrastructure Assessment Guide, which will provide Air Force leadership with a tool to manage the encroachment factors affecting its training ranges. This assessment tool will assist installation commanders in effectively managing their natural infrastructure, such as air space, through the identification of deficiencies and opportunities, correlated to affected operations, in order to enhance operational sustainability. This tool will also establish baseline information using a set of quantitative and qualitative measures that provide a comparison of needed resources to available resources, and will identify the incompatibilities and constraints on air, space, land, and water resources resulting from encroachment pressures such as environmental restrictions. DOD has used the exemptions from the Marine Mammal Protection Act and Migratory Bird Treaty Act to continue to conduct training activities that might otherwise have been prohibited, delayed, or canceled, and the Endangered Species Act exemptions have enabled DOD to avoid potential training delays by providing it greater autonomy in managing its training lands. The Navy has twice invoked exemptions from the Marine Mammal Protection Act to continue using mid-frequency active sonar in its training exercises that would otherwise have been prevented. DOD’s exemption to the Migratory Bird Treaty Act eliminated the possibility of having to cancel military training exercises, such as Navy live-fire training exercises at the Farallon de Medinilla Target Range in the Pacific Ocean. The Endangered Species Act revisions provide that FWS consider the impact to national security when designating critical habitat on DOD lands and provide alternatives to critical habitat designation.
Since 2006, the Navy has twice invoked its exemption from the Marine Mammal Protection Act to continue using mid-frequency active sonar technology in military training exercises, which would have otherwise been prevented by the law’s protection of marine mammals, such as whales and dolphins, that may be affected by the technology. In both cases, DOD granted the exemption after conferring with the Secretary of Commerce, upon a determination that the use of mid-frequency active sonar was necessary for national defense. Mid-frequency active sonar is used by the Navy to detect hostile diesel-powered submarines used by the nation’s adversaries. According to Navy officials, the use of mid-frequency active sonar is a vital component of its underwater submarine warfare training program. Without these exemptions the Navy would have been prevented from using sonar technology during its training exercises, potentially causing a readiness issue within the Navy. For example, during the 2006 multinational Rim of the Pacific training exercise, which was conducted near the Hawaiian Islands, the Navy was prohibited from using mid-frequency active sonar for 3 days because of an injunction concerning the effects the sonar could have on marine mammals. In June 2006, DOD granted the Navy a 6-month exemption from the Marine Mammal Protection Act for all military readiness activities that use mid-frequency active sonar during major training exercises or within established DOD maritime ranges or operating areas. In January 2007, DOD granted a 2-year exemption for these same activities. However, during both exemption periods, DOD was and is required to employ mitigation measures developed with and supported by the National Marine Fisheries Service. According to DOD officials, the 2-year period provides the Navy the time needed to develop its environmental impact statements for ranges where mid-frequency sonar is used.
Although DOD granted the Navy an exemption to the Marine Mammal Protection Act to continue its training exercises, Navy officials told us that the primary reason it would have been prevented from using sonar technology was that it had not prepared an environmental impact statement for its training locations that use mid-frequency active sonar during training exercises. Under the National Environmental Policy Act of 1969 (NEPA), agencies evaluate the likely environmental effects of projects they are proposing using an environmental assessment or, if the projects likely would significantly affect the environment, a more detailed environmental impact statement. In addition, the Marine Mammal Protection Act requires consultation between DOD and the National Marine Fisheries Service to determine the impact on marine mammals when conducting military readiness activities. According to NRDC, an NGO that filed suit against the Navy to prevent it from using its sonar technology, the Navy failed to prepare an environmental impact statement and develop proper mitigation strategies in advance of using its sonar technology. NRDC is concerned that the use of mid-frequency active sonar has had a detrimental effect on marine mammals in the nation’s oceans and waterways. Thus, it is the NRDC’s view that until the Navy prepares the required environmental documentation and implements appropriate mitigation measures, these sonar activities should be stopped. The Navy has prepared notices of intent to prepare environmental impact statements for 12 ranges and operational areas. According to Navy officials, all 12 environmental impact statements will be completed, and the Navy is expected to be in compliance with the Marine Mammal Protection Act by the end of calendar year 2009.
DOD’s exemption to the Migratory Bird Treaty Act authorizing the incidental taking of migratory birds during military readiness activities eliminated the possibility of having to delay or cancel military training exercises. In response to litigation in 2000 and 2002, DOD became concerned that environmental advocates could initiate further litigation against the department, causing delays or cancellation of future training activities. For example, in March 2002, in response to a lawsuit brought by the Center for Biological Diversity, a federal district court ruled that Navy training exercises at the Farallon de Medinilla Target Range within the Mariana Islands in the Pacific Ocean, which resulted in the incidental taking of migratory birds, violated the Migratory Bird Treaty Act. The 2003 enactment of DOD’s exemption changed the Migratory Bird Treaty Act to allow DOD to conduct military readiness exercises that may result in incidental takings of migratory birds without violating the act. DOD officials we spoke to told us that the exemption has not affected how training activities are conducted; rather, it codified and clarified how the act would be applied to military training missions, and it enabled DOD to avoid potential legal action that could have significantly affected training and readiness exercises at Farallon de Medinilla and other DOD installations. According to officials we met with during our visits to other installations with migratory bird populations, training activities at those locations generally do not affect migratory birds. The Endangered Species Act exemption has enabled DOD to avoid potential training delays by providing it greater autonomy in managing its training lands. The exemption, enacted in the fiscal year 2004 defense authorization act, provides DOD two means of avoiding critical habitat for threatened or endangered species designated on its lands by the FWS. 
One method of avoiding critical habitat designation for the endangered or threatened species found on its land is through the use of an approved integrated natural resources management plan, which the FWS or the National Marine Fisheries Service agrees provides a benefit to the species. According to DOD officials, these management plans provide it with the flexibility needed to perform readiness activities while simultaneously protecting the natural resources located on its installations. Second, in a case where critical habitat designation is proposed on a military installation, DOD can request that the Secretary of the Interior take into consideration whether national security concerns outweigh the benefits of the designation. Although FWS officials stated that these exemptions codified their practice of generally not designating critical habitat on military lands when the lands were managed under an appropriate conservation plan, DOD officials believed the department needed these exemptions to avoid future designations that could restrict its training lands and cause potential delays in training while the required administrative consultations with FWS are completed. According to DOD officials, not having critical habitat designated for endangered or threatened species found on military lands gives DOD more flexibility and greater autonomy over the management of its lands used for its training activities. However, according to FWS officials, critical habitat designations would only require an additional level of consultation, which would have minimal, if any, effect on DOD’s ability to use its lands for training purposes. DOD officials said that the increased level of consultation required between the department and outside stakeholders, such as the FWS, would divert time and resources from planning and executing its training activities.
Furthermore, according to DOD officials, growth in endangered species populations on some installations has increased the challenges they face in completing their required training activities while simultaneously protecting the species and their habitats. In addition, some range managers and trainers at installations we visited said that they believe that designating critical habitat on military lands could require them to avoid using critical habitat areas, which would take away potentially valuable training areas. However, now that DOD has the authority to use its approved integrated natural resources management plans, which are ultimately approved by the FWS, in lieu of critical habitat designation, trainers and range managers feel less restricted from using their training ranges. On the basis of meetings with officials within and outside DOD and visits to 17 training ranges, we found no instances where DOD’s use of exemptions from the Endangered Species Act or Migratory Bird Treaty Act has adversely affected the environment; however, the impact of the Marine Mammal Protection Act exemption has not yet been determined. We found no instances where DOD’s use of the Endangered Species Act exemption has negatively affected populations of endangered or threatened species. Moreover, the services employ a variety of measures and conservation activities to mitigate the effects of training activities on endangered species, some of which have helped to increase the populations of certain endangered species. However, NGO officials we spoke with were concerned that DOD’s use of its integrated natural resources management plans in lieu of critical habitat designations may weaken oversight of endangered species found on military lands. Similarly, we found no instances where DOD’s use of the Migratory Bird Treaty Act exemption has significantly affected the populations of migratory birds. 
However, the overall effect of the Navy’s use of mid-frequency active sonar on marine mammals protected under the Marine Mammal Protection Act is unclear and is still being studied. DOD, federal regulatory agency, and NGO officials, and officials at the military training ranges we visited said that there were no instances where DOD’s use of the Endangered Species Act exemptions has adversely affected the populations of endangered or threatened species. Moreover, the services employ a variety of measures and conservation activities to mitigate the effects of their training activities on endangered species populations on their lands. We also found instances where DOD’s environmental stewardship of its natural resources has achieved some positive results with regard to increases in the population of certain endangered species. In addition, FWS officials told us that DOD has taken positive steps to manage and preserve its natural resources and provided several examples of DOD’s proactive steps to manage threatened or candidate species. The services have taken steps on their installations to minimize the effects of their training activities on their endangered species populations, as the following examples illustrate. At Camp Lejeune, North Carolina, nests for the threatened green sea turtle and Atlantic loggerhead turtle are relocated away from training beaches by Camp Lejeune environmental management personnel. At Yakima Training Center, Washington, endangered fish species are protected by the installation declaring aquatic and riparian habitat off limits to all but foot traffic except at hardened crossings, such as bridges. At the Barry M. Goldwater Range, Arizona, range officials employ spotters to ensure that resident endangered Sonoran pronghorn are not present in munitions impact areas prior to exercises. DOD’s management of its natural resources has achieved some positive results with increases in the population of certain endangered species.
At five of the installations we visited, we were provided data that showed an increase in the populations of three endangered species, as the following examples illustrate. Red-Cockaded Woodpecker Since the mid-1990s, the red-cockaded woodpecker populations at Fort Stewart, Georgia, and Eglin Air Force Base, Florida, have increased. In addition, Fort Stewart has served as a source of red-cockaded woodpeckers for repopulation efforts on nonmilitary lands. Figure 2 shows trend data and projected increases in red-cockaded woodpecker potential breeding groups from calendar year 1994 through calendar year 2016 for Fort Stewart and Eglin Air Force Base. On the basis of the data, Fort Stewart and Eglin Air Force Base are both projected to meet their recovery goals of 350 potential breeding groups by 2011. Loggerhead Shrike At Naval Base Coronado, San Clemente Island, California, the Navy, in partnership with FWS and the San Diego Zoo, has developed a captive breeding program that has increased the population of the Loggerhead Shrike, an endangered bird species, on San Clemente Island. This endangered bird population has increased from approximately 18 in 2000 to more than 88 in 2007 due partly to this conservation measure. According to the environmental planner for San Clemente Island, approximately 60 birds are retained for breeding purposes, while all other birds are released once it is determined that they can survive in the wild. Figure 3 shows a Loggerhead Shrike captive breeding facility. Sonoran Pronghorn According to data from the Arizona Game and Fish Department provided to us by Air Force officials, there were 68 Sonoran pronghorn, an endangered species, on the Barry M. Goldwater Range as of December 2006, up from an estimated 58 pronghorn in 2004. Air Force officials also provided us with information on pronghorn recovery efforts, which include a semicaptive breeding program located at the Cabeza Prieta National Wildlife Refuge. 
Air Force officials told us that semicaptive breeding is an important component of their recovery effort. Officials said they plan to release up to 20 captivity-bred animals annually beginning in 2008. Air Force officials told us that the creation of artificial forage enhancement plots is a key component in enhancing pronghorn survivability during periods of drought. Additionally, these officials said they locate these plots away from target areas to minimize the impact of training activities on the pronghorn population. FWS officials told us that DOD has taken positive steps to manage and preserve its natural resources and has been proactive in the management of its threatened species and species being considered for protection under the Endangered Species Act, as the following examples illustrate. Fort Carson, Colorado, provided a dedicated area for the threatened Greenback Cutthroat Trout that affords eggs for restoration efforts, opportunities for research, and recreational fishing opportunities for soldiers. In addition, Fort Carson participated in and funded research on American peregrine falcons (a recovered species) and threatened Mexican spotted owls that seasonally use the installation. Fort Wainwright, Alaska, worked to identify areas where the installation lacked natural resource data (e.g., fish species abundance and diversity in streams and spawning areas), and, with assistance from the FWS, then linked projects to achieve its goal of collecting the needed resource data. The U.S. Air Force Academy, Colorado, holds most of the remaining Arkansas River drainage population of the threatened Preble’s Meadow Jumping Mouse. The Academy is represented on the recovery team, has funded tasks identified in the recovery team draft plan, and has conducted and funded research on the monitoring of habitat and populations.
Although the NGOs we spoke with varied in their opinions about the effectiveness of DOD’s use of integrated natural resources management plans in lieu of critical habitat designations, all of the officials we spoke with were concerned that this practice could limit the extent to which the FWS would be able to exercise its regulatory authority under the Endangered Species Act, thus weakening its oversight of the management, protection, and preservation of endangered species found on military lands. Furthermore, officials from these organizations expressed concerns that the exemption could safeguard DOD from potential litigation involving critical habitat designation and lessen the public’s ability to comment on how DOD plans to manage the endangered species located on its installations. DOD installation officials responsible for developing the department’s natural resources management plans acknowledged changes in the public comment process from the one traditionally used when a critical habitat designation is proposed. These officials also stated that they publicly announce the development or revision of these management plans, notify local conservation groups of the development or revision of the management plans to ensure their views are taken into consideration during the process, and take all public comments under consideration when finalizing the management plans. Officials from various NGOs had differing opinions on DOD’s use of its integrated natural resources management plans to protect and preserve endangered species on military land, and some were concerned that DOD’s use of these plans in lieu of critical habitat designation may weaken the oversight FWS has under the Endangered Species Act, as the following examples illustrate.
Officials of the Public Employees for Environmental Responsibility (PEER)—a national nonprofit alliance of federal, state, and resource employees—and the Endangered Species Coalition—a nonpartisan organization focused on endangered species issues—were generally satisfied with DOD’s efforts to protect endangered species on its installations, and stated that DOD’s implementation of its integrated natural resources management plans appeared to be an effective tool for managing its natural resources. Officials of the Center for Biological Diversity—a nonprofit organization focusing on species and habitat conservation—questioned whether allowing DOD to take the lead on endangered species management on its own lands was the best strategy. One official from the Center for Biological Diversity stated that, unlike critical habitat designation, integrated natural resources management plans would only provide a limited benefit to endangered species and that implementation of these plans varies by installation. Additionally, this official stated that the formal process of designating critical habitat provides more comprehensive protection and benefit to endangered species. Officials of NRDC stated that DOD’s management plans are not an adequate substitute for critical habitat designation because the quality of the plans varies, the successful implementation of the plan is largely dependent on an installation’s leadership, and there are no quantifiable, measurable goals that can be enforced. DOD officials told us that they view integrated natural resources management plans as a tool focused on the management of an ecosystem as opposed to a tool for managing individual species. In addition, according to DOD officials, these management plans are a more cost-effective way to manage an installation’s natural resources and reduce the likelihood of a significant adverse impact on species. 
None of the NGO officials we interviewed could provide us with data to illustrate that DOD’s use of an integrated natural resources management plan has caused an endangered species population to decline or has harmed its habitat. Officials from DOD, federal regulatory agencies, and NGOs, as well as officials at the military training ranges we visited, all said that there were no instances where DOD’s use of the Migratory Bird Treaty Act exemption has significantly affected the populations of migratory birds. Since February 2007, when FWS issued the final rule authorizing incidental takings of migratory birds during military readiness activities, neither DOD nor FWS officials have been able to provide instances where a military training activity was assessed and determined to have a significant adverse effect on a migratory bird population. In addition, DOD employs various measures to mitigate the potential impact of its training activities on migratory bird populations. For example, Navy officials told us that, as an additional mitigation measure for the island’s migratory bird population, an additional zone in which only inert munitions may be used was established directly below a no-bomb zone at Farallon de Medinilla Target Range within the Mariana Islands. In addition, at Naval Air Station Fallon, Nevada, aircraft maintain a minimum altitude of 3,000 feet when flying above the Stillwater National Wildlife Refuge to avoid migratory bird populations. The effects of the Navy’s use of mid-frequency active sonar on marine mammals protected under the Marine Mammal Protection Act are unclear and are still being studied. The Navy, in conjunction with external researchers, is conducting studies in an attempt to determine the effects mid-frequency active sonar has on marine mammals. 
According to documents provided to us by Navy officials, differing interpretations of scientific studies on behavioral changes among marine mammal populations have complicated compliance with the Marine Mammal Protection Act. Thus, additional coordination between the Navy and the National Marine Fisheries Service is required to resolve the regulatory uncertainty as to the “biological significance” of the effects of mid-frequency active sonar on marine mammals. The Navy employs mitigation measures, such as establishing marine mammal lookouts, ensuring there are no marine mammals within a certain radius of ships using sonar, and reducing the power of the ships’ sonar systems to lessen the possible impact mid-frequency active sonar may have on the marine mammal populations. The Navy has also begun reporting stranded marine mammals to the National Marine Fisheries Service. National Marine Fisheries Service officials have characterized their working relationship with the Navy as collaborative and constructive in that they have the opportunity to review and comment on the effectiveness of the Navy’s mitigation measures, such as the adequacy of the training that marine mammal lookouts receive. These measures are in effect during the 2-year period beginning in January 2007 in which mid-frequency active sonar activities are exempt from the Marine Mammal Protection Act. In its February 2008 report to Congress, the Navy stated that in 2007 it had completed 12 major training exercises employing mid-frequency active sonar and found no marine animals within the range of injury (10 meters) of any transmitting vessel during these exercises. The Navy requires that units participating in these major exercises report the number of marine mammals sighted while these exercises are conducted. If a marine mammal is sighted, participating ships, submarines, and aircraft are required to report the date, time, distance from unit, and action taken by the unit, if any. 
On the basis of the results of the after-action reports for these exercises, the Navy concluded that the various training activities did not kill or injure any marine mammals. Although the Navy acknowledges that it is not possible to account for the mammals that were not observed, it also noted that the low number of marine mammal sightings qualitatively indicates that the likelihood of an effect on the population level of any marine mammal species is further reduced. However, NGO officials have told us they believe that the Navy’s mitigation measures are insufficient, and they do not believe that the Navy has adequately quantified the impact of prohibitions on sonar on its ability to train. Additionally, according to NRDC representatives, a report completed in 2004 by a scientific committee of leading whale biologists, established by the International Whaling Commission, presents convincing and overwhelming evidence linking mid-frequency active sonar with the deaths of beaked whales. These officials are also uncertain whether the Navy would be in compliance with the Marine Mammal Protection Act when the exemption expires in January 2009. Further, these NGO representatives acknowledged that the nature of certain marine mammal populations creates difficulties in establishing a scientific basis for the effects of mid-frequency active sonar on marine mammals. DOD acknowledges that, under certain circumstances and conditions, exposure to mid-frequency active sonar may have an effect upon certain species, but the causal connection between whale strandings and exposure to mid-frequency active sonar is not known. DOD has not presented a sound business case demonstrating a need for the proposed exemptions from the Clean Air Act, RCRA, and CERCLA to help achieve its training and readiness requirements. 
DOD has outlined some anticipated benefits of the proposed exemptions and has provided Congress with a description of the features and scope of its Readiness and Range Preservation Initiative, but the department has not presented a sound business case substantiating these assertions or provided any specific instances in which the movement of forces or equipment, training on an operational range, or its use of munitions on an operational range has been hindered by the requirements of the Clean Air Act, RCRA, or CERCLA, respectively. Therefore, Congress lacks a sound basis for assessing the need to enact the three remaining proposed exemptions. DOD has not presented a sound business case demonstrating a need for the remaining three exemptions proposed in its Readiness and Range Preservation Initiative. In order to advise decision makers on a proposed project, policy, or program, best practices and our prior work recommend that agencies develop a business case whereby they can assess and demonstrate the viability of proposed initiatives. A business case is a substantiated argument that includes, among other things, the problem or situation addressed by the proposal, the features and scope of the proposed initiative, the anticipated outcomes and benefits, the options considered and the rationale for choosing the solution proposed, the expected costs, and the expected risks associated with the proposal’s implementation. DOD presented the features and scope of the three remaining Readiness and Range Preservation Initiative provisions in proposed language for the fiscal year 2008 defense authorization bill. DOD officials also outlined some possible benefits of the proposed exemptions. For example, in its 2006 annual sustainable ranges report, DOD stated that without these additional exemptions the department was vulnerable to legal challenges that could threaten its ability to use operational ranges for readiness training and testing. 
DOD officials also stated that some possible benefits of the proposed exemptions include facilitating (1) the movement of forces and equipment, (2) training on an operational range, and (3) the use of munitions on an operational range. However, DOD has not provided any of the other elements of a sound business case. According to DOD officials, the proposed exemption from requirements of the Clean Air Act would provide the department flexibility in replacing or realigning forces and equipment in nonattainment areas, which do not meet certain EPA air quality standards, but they have not provided evidence to support the need for the exemption. Moreover, DOD could not cite any case where Clean Air Act requirements prohibited the movement of troops or equipment into nonattainment areas. OSD’s Office of General Counsel officials told us that the Clean Air Act provision grew out of the 1995 base realignment and closure round, when the movement of aircraft into these areas became a problem. For the 2005 base closure round, OSD asked the services if moving activities into nonattainment areas would be an issue, and the answer was that it would not be. In its 2006 report on sustainable ranges, DOD stated that, while the Clean Air Act’s general conformity requirement had the potential to threaten the deployment of new weapon systems, the requirement had not yet prevented any military readiness activities. Officials of state and local agencies and NGOs, such as the Center for Public Environmental Oversight (CPEO), NRDC, and PEER, have expressed concern that the proposed exemptions could increase air pollution and potentially result in greater contamination, higher cleanup costs, and a threat to human health. Opponents of DOD’s proposed exemptions from the Clean Air Act include state and local air pollution control program officials, state environmental commissioners, state attorneys general, county and municipal governments, and environmental advocates. 
They contended that granting the exemption could increase air pollution, posing a threat to human health. Opponents also claimed that the proposed exemption is unnecessary as the Clean Air Act already contains a provision that would allow DOD to request a case-by-case exemption if necessary, which DOD has never invoked. In addition, an EPA official we spoke with expressed similar concerns about the proposed Clean Air Act exemption. He also stated that because DOD has an extensive planning process, and readiness activities are generally planned ahead, DOD should have time to mitigate the emissions, or work with the states to establish a budget within the states’ implementation plans so that an exemption to the Clean Air Act would not be needed. According to DOD’s 2006 sustainable ranges report, existing ambiguity over whether the RCRA definition of “solid waste” is applicable to military munitions located on operational ranges had generated litigation by private plaintiffs seeking to curtail or terminate munitions-related training at operational ranges. The report also asserted that future litigation of this nature, if successful, could force remediation at operational ranges, effectively precluding live-fire training. However, DOD was not able to provide any examples of where a private citizen’s RCRA lawsuit had affected training on an operational range. Although live-fire training restrictions have been imposed at the Eagle River Flats Impact Area at Fort Richardson, Alaska, the restrictions were not the result of any litigation. The Army imposed the firing restrictions in 1991 following completion of an environmental assessment that established a link between firing munitions containing white phosphorus and waterfowl mortality at Eagle River Flats. We discussed DOD’s concerns about RCRA and the definition of “solid waste” with officials of EPA’s Office of Federal Facilities Enforcement and Office of Federal Facilities and Restoration. 
These officials told us that, to address DOD’s concerns, EPA developed the 1997 Military Munitions Rule, which states that military munitions are not considered to be solid waste when they are used for their intended purpose on an operational range. The EPA officials also said that to date they have never required DOD to clean up an operational range, unless contamination is migrating off the range, which could occur through polluted groundwater. With regard to the proposed exemption from RCRA, opponents have included state attorneys general and NGOs such as CPEO, NRDC, and PEER. They have asserted that granting DOD the exemptions could weaken federal and state oversight. Specifically, in written comments to the Office of Management and Budget on DOD’s 2004 legislative proposals for the National Defense Authorization Act, EPA stated that it was concerned that the exemptions would result in states’ oversight agencies having to wait for human health and environmental effects to occur beyond the boundaries of the operational range before taking action. This delay could increase the costs and time to respond. Other organizations expressed similar concerns about the exemptions preempting federal or state authority. The opponents also noted that the exemptions were not needed, as RCRA contains national security provisions allowing the President to exempt DOD facilities from any statutory or regulatory authority on a case-by-case basis. However, DOD has not invoked this case-by-case exemption for training or readiness-related activities. DOD officials said the department is concerned that the firing of munitions on operational ranges could be considered a “release” under CERCLA, which could then trigger CERCLA requirements that would require removal or remedial actions on operational ranges. However, DOD officials could not provide any examples of when this had actually occurred. 
On the contrary, DOD officials told us that EPA and the states generally do not seek to regulate the use of munitions on operational ranges under RCRA or CERCLA. Cognizant EPA officials also told us that EPA generally did not impose regulatory requirements on operational ranges. Further, EPA, in written comments to the Office of Management and Budget on DOD’s 2004 legislative proposals for the National Defense Authorization Act, stated that it had been judicious in the use of the various authorities it has over operational ranges. Opponents from states and NGOs such as CPEO, NRDC, and PEER have similar concerns about DOD’s proposed exemption from CERCLA as they do about the RCRA exemption discussed previously. They contend that granting DOD the exemptions could weaken federal and state oversight and may delay any remediation action. They also note that the proposed exemption is not needed, as CERCLA contains a case-by-case exemption, which has not been invoked by DOD. In addition, similar concerns were expressed by EPA in its written comments to the Office of Management and Budget on DOD’s 2004 legislative proposals for the National Defense Authorization Act. Because DOD has not provided any specific examples to support assertions that its training activities have been hindered by the requirements of the Clean Air Act, RCRA, or CERCLA, Congress lacks a sound basis for assessing the need to enact these three remaining exemptions. Also, DOD has not demonstrated that it considered any other options that could provide the benefits it desires. Nor has the department provided any data related to the expected costs and risks—financial, environmental, or otherwise—of the proposed exemptions. Similarly, DOD has not demonstrated the cost of any workarounds necessitated by the need to comply with the Clean Air Act, RCRA, or CERCLA, and it has thus far not been able to show any risks to military readiness or national security if the exemptions are not granted. 
Until DOD develops a substantiated argument in support of its proposed exemptions from the Clean Air Act, RCRA, and CERCLA, it will have little on which to base these requests. DOD’s commitment to being a good neighbor to the communities where many servicemembers and their families live, the desire to avoid litigation, and the need to maintain its training areas in good condition provide DOD with incentives to be a good environmental steward. In addition, there is little evidence to suggest that the exemptions to environmental laws that DOD has already been granted have had adverse consequences for animal species or their habitat on military installations. Nevertheless, there is also little evidence to support the position that providing DOD additional environmental exemptions, such as those that have been proposed from provisions of the Clean Air Act, RCRA, and CERCLA, would benefit DOD training activities or improve military readiness. Without a sound business case that demonstrates the benefits and adverse effects on training and readiness, costs, and risk associated with the proposed exemptions, DOD will have little on which to base any further requests, and Congress will have difficulty determining whether additional exemptions from environmental laws are warranted. Should DOD plan to pursue exemptions from the Clean Air Act, RCRA, CERCLA, or other environmental laws in the future, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment and the Deputy Under Secretary of Defense for Readiness to jointly develop a sound business case that includes detailed qualitative and quantitative analyses assessing the associated benefits, costs, and risks of the proposed exemptions from environmental laws. 
In written comments on a draft of this report, the Principal Deputy within the Office of the Under Secretary of Defense for Personnel and Readiness partially concurred with our recommendation, agreeing that a sound business case with good qualitative and quantitative analysis should be developed in association with future environmental provisions. However, DOD believes that past provisions involving clarifications to environmental laws were largely supported with the rationale and supporting information necessary to constitute a sound business case and does not accept the premise that the readiness and training imperatives or associated risks were not conveyed to the extent feasible for the Clean Air Act, RCRA, and CERCLA provisions. As our report clearly stated, DOD has not provided any specific examples to support its assertions that its training activities have been hindered by the requirements of the Clean Air Act, RCRA, or CERCLA. Also, DOD has not demonstrated that it considered any other options that could provide the benefits it desires. Nor has the department provided any data related to the expected costs and risks—financial, environmental, or otherwise—of the proposed exemptions. Our report does not discuss the rationale and information used to support past provisions. We continue to believe that DOD has not provided adequate support for its assertion that its training activities have been hindered by the requirements of the Clean Air Act, RCRA, and CERCLA. We stand by our recommendation that DOD needs to present a sound business case, including associated benefits, costs, and risks should it pursue future exemptions from these or other environmental laws. DOD strongly disagreed with our use of the term “exemptions” as applied to its Readiness and Range Preservation Initiative, which it believes unnecessarily reinforces the perception that DOD has sought to avoid its environmental stewardship responsibilities. 
First, the term “exemption” is not defined in the body of environmental law relevant to this report. Our intent is to use a single term throughout the report for consistency and readability, although we recognize that each of the Readiness and Range Preservation Initiative provisions effects change by various means in various environmental laws. We describe each of those provisions on pages 2 and 3, pages 13 through 17, and in footnotes 6 through 12. Second, our report acknowledges that DOD’s environmental stewardship of its natural resources has achieved positive results and that it has been proactive in its management of endangered and threatened species. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we have incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretaries of Defense, Commerce, and the Interior; the Administrator of the Environmental Protection Agency; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. Copies will be made available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To determine the effects, if any, of environmental laws and the Department of Defense’s (DOD) use of exemptions to the Migratory Bird Treaty Act, the Marine Mammal Protection Act, and the Endangered Species Act on training activities and military readiness, we judgmentally selected and visited 17 military training locations throughout the continental United States, which included training sites from each military service component, to directly observe the effects of environmental laws and DOD’s use of exemptions on training activities, military readiness, and the environment. These locations included Aberdeen Proving Ground, Maryland; Fort Lewis, Washington; Fort Stewart, Georgia; Naval Station Norfolk, Naval Air Station Oceana, and Dam Neck Annex, Virginia; Naval Air Station Fallon, Nevada; Fort Irwin, Naval Base Coronado, Naval Air Station North Island, Naval Auxiliary Landing Field San Clemente Island, and Marine Corps Base Camp Pendleton, California; Marine Corps Base Camp Lejeune, North Carolina; Avon Park Air Force Range and Eglin Air Force Base, Florida; and Luke Air Force Base and Barry M. Goldwater Range, Arizona. These installations were identified and selected based on our previous work involving some installations experiencing encroachment and sustainable training range issues. DOD concurred that the installations we selected continue to have problems in this area and stated that these locations would provide an important perspective of some of the challenges DOD faces to comply with environmental laws. Because the installations were judgmentally selected, the specific challenges faced at these selected locations cannot be generalized across all of DOD. We obtained documents and reports describing the effects of environmental laws and exemptions on training and readiness and the need for workarounds to meet training requirements from DOD officials responsible for managing military training. 
We compared and contrasted data on training requirements with actual training activities to identify examples—in terms of the number of training days, types of training activities, unit readiness ratings, and costs—where training was affected by environmental requirements and DOD’s use of environmental exemptions. We also met with service officials responsible for managing readiness data for each service. These officials provided us with unit readiness data for fiscal years 2006 and 2007, which included some commander comment summaries describing, when applicable, why a unit had not met its unit training requirements. Our review of these data allowed us to assess whether environmental restrictions imposed on DOD installations had an impact on unit readiness. Furthermore, we conducted literature searches and reviewed studies completed by other audit agencies and research organizations, such as the Congressional Research Service, the Center for Naval Analysis, and the RAND Corporation, to examine previous findings and conclusions about how environmental laws may have affected military training and readiness. In addition, we met with officials responsible for planning, managing, and executing unit training to gain an understanding of how these officials assisted military units to meet training requirements while addressing environmental laws. We also met with officials from the Office of the Secretary of Defense (OSD) and headquarters officials from each of the military services to obtain their perspectives on the effects of environmental laws and the use of environmental exemptions on military training activities and readiness. 
To determine the effects, if any, of DOD’s use of exemptions from the Migratory Bird Treaty Act, the Marine Mammal Protection Act, and the Endangered Species Act on the environment, we visited the 17 installations mentioned, reviewed related reports and studies, and examined some installations’ integrated natural resources management plans to determine how natural resources, such as migratory birds, marine mammals, and endangered species and their habitats are protected on DOD lands during military training exercises. We also met with officials from other federal regulatory agencies, such as the U.S. Fish and Wildlife Service, the National Marine Fisheries Service, and the U.S. Environmental Protection Agency (EPA), to determine how these regulatory agencies were overseeing and managing natural resource conservation activities conducted on military training areas and to obtain their perspective of how well DOD is doing in protecting its natural resources. We also met with officials from OSD and service offices, such as officials from the Office of the Under Secretary of Defense for Personnel and Readiness; the Office of the Deputy Under Secretary of Defense for Installations and Environment; the Office of the General Counsel for Environment and Installations, OSD; the Deputy Assistant Secretary of the Army for Environment, Safety and Occupational Health; the Office of the Assistant Secretary of the Navy for Installations and Environment; the Operational Environmental Readiness and Planning Branch and the Training Ranges and Fleet Readiness Branch, Chief of Naval Operations; the Environmental Management Program Office, Headquarters U.S. Marine Corps; the Deputy Assistant Secretary of the Air Force for Environment, Safety, and Occupational Health; the Air Force Center for Engineering and the Environment; and the Ranges and Air Space Division, Headquarters U.S. Air Force. 
During these meetings, we discussed the statutory environmental requirements DOD must follow when conducting military training activities at its installations and training areas. To obtain a balanced perspective on the progress DOD has achieved in managing natural resources on its lands, we met with officials from nongovernmental organizations, such as the Natural Resources Defense Council (NRDC), Public Employees for Environmental Responsibility (PEER), the Center for Biological Diversity, the Center for Public Environmental Oversight, the Endangered Species Coalition, and the RAND Corporation. These officials provided us with their perspective on how well DOD has done in protecting the natural resources, such as endangered species and their habitat located on DOD lands, migratory birds, and marine mammals. To assess the extent to which DOD has demonstrated that proposed statutory exemptions from the Clean Air Act; Resource Conservation and Recovery Act; and the Comprehensive Environmental Response, Compensation, and Liability Act would help the department to achieve its training and readiness goals, we reviewed the department’s most recent annual sustainable range reports, its Readiness and Range Preservation Initiative, and other documents for elements of a sound business case. In addition, we reviewed documents that provided the perspective of federal and state regulatory agencies, such as EPA, state and local air pollution control program officials, state environmental commissioners, state attorneys general, county and municipal governments, and nongovernmental organizations, such as the Center for Public Environmental Oversight, NRDC, and PEER, on the potential impact to the environment if these exemptions were granted. We also discussed the topic with officials from OSD, the military services, and EPA. During these meetings, we discussed the potential benefits and problems associated with the proposed statutory exemptions. 
During our visits to the military installations identified previously, we also obtained military service officials’ perspectives on the potential effects of using the proposed statutory exemptions on training activities, military readiness, and the environment. Additionally, we compared the elements of a sound business case and what DOD provided to Congress to assess whether DOD had demonstrated a need for the three remaining exemptions. On the basis of information obtained from the military services on the reliability of their unit readiness data, our discussions with DOD, military service, and NGO officials, and our review and analysis of documents and reports describing the effects of environmental requirements and statutory exemptions on training activities, military readiness, and the environment, we believe that the data used in this report are sufficiently reliable for our purposes. The time periods encompassed by the data used in this report vary for each of our objectives depending on the date ranges for which each type of data was available. We conducted this performance audit from June 2007 through March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Mark A. Little, Assistant Director; Vijaykumar Barnabas; Susan Ditto; Jason Jackson; Arthur James; Richard Johnson; Oscar Mardis; Patricia McClure; Jacqueline Snead McColl; Anthony Paras; Charles Perdue; and Karen Thornton made major contributions to this report. 
| A fundamental principle of military readiness is that the military must train as it intends to fight, and military training ranges allow the Department of Defense (DOD) to accomplish this goal. According to DOD officials, heightened focus on the application of environmental statutes has affected the use of its training areas. Since 2003, DOD has obtained exemptions from three environmental laws and has sought exemptions from three others. This report discusses the impact, if any, of (1) environmental laws on DOD's training activities and military readiness, (2) DOD's use of statutory exemptions from environmental laws on training activities, (3) DOD's use of statutory exemptions on the environment, and (4) the extent to which DOD has demonstrated the need for additional exemptions. To address these objectives, GAO visited 17 training locations; analyzed environmental impact and readiness reports; and met with officials at service headquarters, the Office of the Secretary of Defense, federal regulatory agencies, and nongovernmental environmental groups. Compliance with environmental laws has caused some training activities to be cancelled, postponed, or modified, and DOD has used adjustments to training events, referred to as "workarounds," to accomplish some training objectives while meeting environmental requirements. Some DOD trainers instruct units to pretend restricted training areas are holy grounds, mine fields, or other restricted areas in theater, simulating the need to avoid specific areas and locations when deployed. GAO's review of readiness data for active duty combat units did not confirm that compliance with environmental laws hampers overall military readiness. Since 2006, the Navy has twice invoked the Marine Mammal Protection Act exemption to continue using mid-frequency active sonar in training exercises that would otherwise have been prevented. 
DOD's exemption from the Migratory Bird Treaty Act, authorizing the taking of migratory birds, eliminated the possibility of having to delay or cancel military training exercises, such as Navy live-fire training at the Farallon de Medinilla Target Range. The exemption to the Endangered Species Act, which precludes critical habitat designation on DOD lands, enables DOD to avoid potential training delays by providing greater autonomy in managing its training lands. On the basis of meetings with officials within and outside DOD and visits to 17 training ranges, GAO found no instances where DOD's use of exemptions from the Endangered Species Act or Migratory Bird Treaty Act has adversely affected the environment, but the impact of the Marine Mammal Protection Act exemption has not yet been determined. The services employ a variety of measures and conservation activities to mitigate the effects of training activities on the natural resources located on DOD lands. Additionally, regulatory officials GAO spoke to said DOD has done an effective job protecting and preserving endangered species and habitats on its installations. However, some nongovernmental organizations have expressed concern that the Endangered Species Act exemption allowing DOD to avoid critical habitat designations may weaken oversight from the U.S. Fish and Wildlife Service. DOD has not presented a sound business case demonstrating the need for the proposed exemptions from the Clean Air Act, the Resource Conservation and Recovery Act, and the Comprehensive Environmental Response, Compensation, and Liability Act. Best practices and prior GAO work recommend that agencies develop a business case that includes, among other things, expected benefits, costs, and risks associated with a proposal's implementation. However, DOD has not provided any specific examples showing that training and readiness have been hampered by requirements of these laws. 
Meanwhile, some federal, state, and nongovernmental organizations have expressed concern that the proposed exemptions, if granted, could harm the environment. Until DOD develops a business case demonstrating the need for these exemptions, Congress will lack a sound basis for assessing whether to enact the requested exemptions.
Since the 1980s, the United States and Canada have been engaged in a trade dispute regarding softwood lumber. One of the main causes of the dispute is differences in costs for timber harvested on public land in Canada as compared with timber from private land in the United States. In Canada, federal and provincial governments own approximately 90 percent of the timberlands and set harvest fees and allocations. In contrast, in the United States, only about 40 percent of the timberland is publicly owned, and the timber from that land is sold through competitive auctions. The U.S. lumber industry is concerned that the use of government-set fees in Canada raises the possibility that private industry in Canada has access to timber at less than market prices. The decades-long softwood lumber dispute has alternated between periods with a softwood lumber trade agreement and periods of litigation without an agreement. In 2006, the United States and Canada ended a period of antidumping and countervailing duty proceedings by signing the Softwood Lumber Agreement, a 7-year agreement with an option for a 2-year renewal. The agreement established a framework for managing Canadian exports of softwood lumber to the United States. Key provisions of the agreement include variable export measures, information exchange requirements, anticircumvention measures, dispute settlement mechanisms, and a settlement agreement to end numerous claims that were pending when the agreement was signed. (App. II contains more information on the provisions of the 2006 Softwood Lumber Agreement.) In 2008, Congress passed the Softwood Lumber Act imposing additional requirements on CBP for monitoring the softwood lumber trade. According to the legislation, the required reconciliations are to ensure the proper operation and implementation of international agreements related to softwood lumber. 
Furthermore, the importer declaration program established by the act is intended to assist in the enforcement of any international obligations arising from international agreements related to softwood lumber. The act does not contain language specifying an end date for these efforts. Under the act, CBP is to implement the following requirements related to softwood lumber imports from all countries:

Importer declaration program: CBP is to establish an importer declaration program requiring importers from any country to declare, among other things, that they have made an appropriate inquiry and that, to the best of the importer’s knowledge and belief, the export price is determined as defined in accordance with the act; the export price is consistent with the export price on the export permit, if any, granted by the country of export; and the exporter has paid, or committed to pay, all export charges.

Reconciliation: To ensure the proper implementation and operation of international agreements related to softwood lumber, CBP is to reconcile the export price (or revised export price) declared by the importer with the export price (or revised export price) on the export permit, if any.

Verification: To verify the importer declaration, the act requires CBP to periodically verify that (1) the export price declared by the importer is the same as the export price provided on the export permit, if any, issued by the country of export and (2) the estimated export charge is consistent with the applicable export charge rate as provided by Commerce.

Semiannual reports: CBP is to report to Congress every 6 months describing the reconciliation and verification programs and identifying the manner in which the U.S. importers subject to reconciliations and verifications were chosen; identifying any penalties imposed under the act and any patterns of noncompliance with the act; and identifying any problems or obstacles encountered in the implementation and enforcement of the act.
As shown in table 1, CBP has taken a variety of steps to implement key provisions of the Softwood Lumber Act of 2008. CBP added three new fields to the U.S. entry form to collect data on the export price, estimated export charge, and importer declaration needed for the reconciliation and verification processes. CBP started enforcing the new requirements imposed by the act in September 2008. The act and CBP require these three data elements for softwood lumber imports from all countries. However, according to CBP officials, only imports from Canada include export charge information because of the 2006 Softwood Lumber Agreement. Furthermore, CBP reported in October 2009 that importers of softwood lumber products from non-Canadian countries have a difficult time determining the correct amount to list as the export price because the export price definition in the act contains references specific to Canadian softwood lumber, such as “remanufacturer.” To implement the act’s reconciliation requirement, CBP compares publicly available aggregate regional export price data from Canada with aggregate export price data from the U.S. entry form. (Under the act, CBP is reconciling this information only for Canadian exports because Canada is the only country with which the United States has an international agreement specifically on softwood lumber.) As shown in figure 1, CBP obtains the export price from the U.S. entry form, which the U.S. importer should copy from the Canadian export permit. CBP then compares aggregate monthly data from the U.S. entry forms with the publicly available export price data that are posted on the Web site of Canada’s DFAIT. (Figure 1 depicts this flow: the Canadian exporter sends the Canadian export permit to Canada’s Department of Foreign Affairs and International Trade (DFAIT); the U.S. importer sends the U.S. entry form to U.S. Customs and Border Protection (CBP); and CBP compares the aggregate information to determine whether the total export prices are the same.)
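The aggregate comparison CBP performs can be sketched in a few lines of Python. The function name, the sample shipment values, and the permit total below are illustrative assumptions, not CBP's actual system or data:

```python
# Illustrative sketch of reconciling aggregate export prices: sum the
# shipment-level prices from U.S. entry forms for a month and region,
# then compute the percent variance against Canada's published total.
# All figures and names here are hypothetical.

def reconcile_monthly(us_entry_prices, canadian_permit_total):
    """Return the U.S. aggregate and its percent variance from the
    aggregate export price Canada publishes for the same period."""
    us_total = sum(us_entry_prices)
    variance = abs(us_total - canadian_permit_total) / canadian_permit_total * 100
    return us_total, variance

# Hypothetical month of shipments from one Canadian region (US$).
entries = [125_000.00, 98_500.00, 143_250.00]
permit_total = 370_000.00  # hypothetical stand-in for a DFAIT-published aggregate

us_total, variance = reconcile_monthly(entries, permit_total)
print(f"U.S. total: {us_total:,.2f}; variance: {variance:.2f}%")
```

A variance near zero would indicate that importers copied the permit prices accurately; as noted below, CBP reported an overall variance of about 1 percent for its first six months of reconciliations.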
According to CBP officials, on a monthly basis, they reconcile aggregate export price data based on the Canadian region of export. CBP first combines the individual-level export price data from each U.S. entry form for all shipments during a 1-month period and reconciles these values with the aggregate Canadian export data. According to CBP, each month, analysts run a computer program to compare the U.S. and Canadian data and to identify discrepancies. In its October 2009 semiannual report to Congress, CBP reported that the overall variance between the export price on the entry summary form and the export price received from the “country of export” for the 6-month period between October 2008 and March 2009 was 1 percent. As required by the act, CBP has developed processes to verify the importer declaration, which includes verifying that the export price declared by the importer is the same as the export price provided on the export permit, if any, issued by the country of export; the estimated export charge is consistent with the applicable export charge rate as provided by Commerce; and importers have “made appropriate inquiry, including seeking appropriate documentation from the exporter,” and to the best of the importer’s knowledge and belief that the exporter has paid or committed to pay all applicable export charges. To meet these legislative requirements, CBP adapted its existing Entry Summary Compliance Measurement program to include softwood lumber as a subcomponent. The program selects softwood lumber entries for verification via random statistical sampling. When an entry is selected for verification, import specialists at the ports review the entry form to ensure that all of the required information is included and request supporting documentation from the importers to verify that the information on the entry document has been recorded correctly. 
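The random selection and result tallying just described can be illustrated with a short sketch. The sample size, entry identifiers, and helper names are hypothetical; this is a simplified stand-in for, not a reproduction of, the Entry Summary Compliance Measurement program:

```python
import random

def select_sample(entry_ids, sample_size, seed=0):
    """Randomly select entry summaries for verification (the fixed
    seed is only for reproducibility in this sketch)."""
    rng = random.Random(seed)
    return rng.sample(entry_ids, sample_size)

def compliance_rate(results):
    """Percent of verified entries whose reviewed data element was
    reported correctly; `results` maps entry id -> True/False."""
    return sum(results.values()) / len(results) * 100

# Hypothetical pool of 500 softwood lumber entries; verify 25 of them.
sample = select_sample(list(range(1, 501)), 25)
outcomes = {entry: entry % 5 != 0 for entry in sample}  # invented results
print(f"verified {len(sample)} entries; {compliance_rate(outcomes):.1f}% correct")
```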
The import specialists then enter the results into an electronic database system that CBP headquarters accesses and analyzes. In its October 2009 semiannual report to Congress, CBP reported that approximately 82 percent of the samples its officials verified during the first 6 months of the process, from October 2008 to March 2009, correctly reported the export price—with a higher rate, almost 85 percent, for imports from Canada. Regarding the export charge, about 77 percent of the entries CBP sampled from Canada had that value reported correctly. In addition, CBP reported that about 90 percent of the importer declarations were reported properly. According to CBP, the requirements did not apply to an additional 5 to 10 percent of the selected Canadian samples because they were exempt from the provisions of the bilateral trade agreement. Officials stated that the combination of samples that were reported correctly and those for which the requirements were not applicable brought the overall results for the softwood lumber samples for Canada close to what they see for other commodities. Because the importer or customs broker should copy the export price from the Canadian export permit onto the U.S. entry form, CBP officials said they expect discrepancies in the data to result mainly from the following: (1) human errors in copying the export price from one form to another and (2) differences caused by converting from Canadian to U.S. dollars. In addition, CBP officials explained that the export price for a shipment could be listed as one line on the Canadian export permit, but broken into multiple lines on the U.S. entry form. CBP has instructed importers in how to resolve this issue, but officials said that importers sometimes do not perform this calculation correctly. In its October 2009 report to Congress, CBP reported that discrepancies between the export price reported on the Canadian export permit and the export price reported on the U.S. 
entry form have decreased over time. CBP reported a variance of almost 16 percent between the U.S. and Canadian data in October 2008, the first month of reconciliations under the act. By March 2009, the variance between the U.S. and Canadian export prices had decreased to approximately 2 percent. CBP officials told us that 5 to 10 percent of the entries randomly selected for review as part of the verification process were not recorded correctly due to data entry errors by either the importer or CBP’s import specialists. These errors may have been caused by an import specialist incorrectly recording the verification data in CBP’s database or not following the instructions consistently. CBP officials added, however, that the errors are not surprising considering that the requirements are new, and that the importers and the CBP import specialists are still learning how to correctly record information. We identified the following two reasons for data entry errors: Miscoding: Import specialists manually type specially developed softwood lumber codes into the remarks section of CBP’s existing electronic database system, which could lead to miscoding. For example, preliminary results from the first round of the verification cycle from October 2008 to March 2009 show “over-reporting” for the importer declaration. The verification involves the import specialist obtaining documentation to substantiate the importer declaration. There is no calculation or number associated with the declaration itself; correct reporting would be considered either “not reported” or “reported correctly.” There should be no over- or underreporting. Officials told us they are migrating from the existing system and will be using a new system, Automated Commercial Environment, starting January 2010. They stated that the new system will allow them to create custom data entry fields, which they believe will most likely diminish errors associated with miscoding. 
Inconsistent application of guidance: Guidance for the import specialists conducting the verifications at the ports states that the export price on the U.S. entry form could be within a 2 percent margin of the export price reported on the Canadian export permit to be considered correctly reported. However, at one of the two ports we visited, we observed that some, but not all, import specialists had inappropriately applied the 2 percent margin to the export charge as well. CBP officials at headquarters stated that they were unaware of the differences in the application of the guidance, but that they were continuing to provide outreach to import specialists regarding how to correctly conduct the verifications and record the results. CBP officials attribute issues with the quality of the data used in the reconciliation and verification processes to the relative newness of the process. The act was enacted in June 2008 and went into effect in August 2008, 60 days later. According to CBP’s May 2009 report to Congress, CBP delayed enforcement of the importer declaration program 30 days, to give CBP time to publish the interim rule describing the new entry requirements and to give the trade community time to make the necessary changes to provide the three new data elements required for each line of softwood lumber articles on the entry form. Industry representatives also said they had very little time to reprogram their computer systems to collect the necessary data. CBP began selecting random samples of softwood lumber entry summaries on October 1, 2008. CBP officials told us they conducted a series of training and outreach programs to educate import specialists and importers on how to correctly fulfill the new requirements the act imposed on shipments of softwood lumber. For example, they established an e-mail box to receive questions and a “Frequently Asked Questions” section on the agency’s Web site to address the new requirements. 
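The guidance's tolerance rule can be reduced to a pair of checks. This sketch uses hypothetical figures and function names; its point is that the 2 percent margin applies to the export price alone, while the export charge must match the applicable rate exactly:

```python
PRICE_MARGIN = 0.02  # the 2 percent margin applies to the export price only

def price_reported_correctly(entry_price, permit_price):
    """The export price on the U.S. entry form may differ from the
    Canadian permit price by at most 2 percent."""
    return abs(entry_price - permit_price) <= PRICE_MARGIN * permit_price

def charge_reported_correctly(estimated_charge, export_price, charge_rate):
    """The estimated export charge gets no margin: it must equal the
    export price times the rate provided by Commerce, to the cent."""
    return round(export_price * charge_rate, 2) == round(estimated_charge, 2)

# Hypothetical entry: price within 2 percent of the permit, charge
# computed at a hypothetical 15 percent rate.
print(price_reported_correctly(10_150.00, 10_000.00))
print(charge_reported_correctly(1_522.50, 10_150.00, 0.15))
```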
CBP officials told us they consider the first 6 months of the verification process a dry run to observe the process and determine areas that need improvement. The officials stated that they have ongoing efforts to provide further guidance and clarification. As an example, they cited memorandums sent to import specialists every 6 months identifying specific examples that were entered into the system incorrectly and needed to be corrected. In addition, headquarters conducts quarterly conference calls with staff at the ports and hosts an annual meeting to discuss issues related to the overall Entry Summary Compliance Measurement process used for all commodities, with softwood lumber being one subcomponent of this process. In CBP’s May and October 2009 reports on the agency’s implementation of the act, CBP reported that it undertook extensive changes to its systems to collect the required data elements on the U.S. entry form. Reprogramming these systems, training personnel, and providing advice to the trade community on changes to the entry form required extensive effort for the agency. CBP further reported that headquarters had to divert resources from import safety, intellectual property rights, and other areas to implement the act. However, CBP officials told us that, now that they have established the reconciliation and verification processes required by the act, the agency’s ongoing efforts related to the act’s requirements do not consume as much time as did its initial efforts. For example, CBP officials at headquarters and at the ports we visited said that work on softwood lumber verifications in particular is not time intensive. CBP, Commerce, and USTR officials stated that the information produced through the reconciliation and verification requirements under the act does not directly help them monitor compliance with the 2006 Softwood Lumber Agreement with Canada.
The purpose of some of these legislative requirements is to ensure the proper implementation and operation of international agreements on softwood lumber and assist in the enforcement of these obligations. The 2006 agreement with Canada contains mechanisms for monitoring compliance, and, according to U.S. government officials, the added reconciliation and verification requirements of the Softwood Lumber Act of 2008 do not provide the U.S. government with additional assurance of compliance with the bilateral agreement. Specifically, CBP officials told us the requirements of the act do not provide them with direct assurance that the Canadian exporter paid the export charges owed to the Canadian government under the agreement. CBP officials said that comparing the aggregate export price data from the Canadian export permits with the aggregate export price data from the U.S. entry forms provides no additional information on the collection of the Canadian export charge. CBP does not examine any export charge data in the reconciliation process under the act. The export price, as defined in the act, does not contain any information on the export charge. The export price on the export permit is an estimated price at the time of shipment. According to CBP officials, because the export price on the Canadian export permit and the U.S. entry form is not the final revised export price reported by the exporter to the Canada Revenue Agency, it does not represent the value upon which the export charge is paid. Similarly, CBP officials said the verification process for imports from Canada does not provide the agency with additional information about whether Canadian exporters are complying with the provisions of the bilateral trade agreement, because the U.S. government does not have access to the Canadian government’s tax records and therefore has no means to confirm whether Canadian companies actually paid the export charge. 
None of the data elements the act requires CBP to verify—the export price, estimated export charge, or importer declaration—provide additional evidence that the exporter paid the export charge, according to CBP officials. As with the reconciliation process, the export price is copied from the Canadian export permit to the U.S. entry form and does not contain export charge information. The estimated export charge on the entry form is reported by the importer based on the estimated export price and Commerce’s determination of the export charge rate for that month and province. Furthermore, the importer declaration only requires importers to affirm that they made the appropriate inquiry that the exporter has paid, or committed to pay, any applicable export charges. Finally, for CBP to impose a penalty on importers who violate the act, CBP is required to prove that the importer committed a “knowing violation.” CBP officials told us that this violation is harder to prove than other violations of customs laws. In October 2009, CBP reported that it has not initiated any penalty actions for violations of the act. The requirements of the act, however, may have an indirect effect on Canadian exporters’ compliance with the bilateral trade agreement, according to USTR and Commerce officials, because the act’s requirements demonstrate that the United States is looking closely at softwood lumber imports. A representative of the U.S. softwood lumber industry said that the act’s requirements may also have improved the accuracy of the Canadian data, and that the importer declaration program is useful because he believes that it provides additional information on whether the export charge was paid. Some of the act’s requirements are to ensure the proper implementation and operation of international agreements on softwood lumber and assist in the enforcement of these obligations. 
The 2006 Softwood Lumber Agreement is in force until 2013; however, the act does not have an expiration date. As a result, it is unclear whether, or to what extent, CBP will need to continue to implement the U.S. legislative requirements when the bilateral trade agreement expires. CBP officials said they have not yet determined how they will fulfill their requirements under the act when the agreement expires, but assume that they will have to continue implementing the verification and importer declaration requirements. However, without the bilateral trade agreement, CBP would no longer have the data for the export charge calculation that are included as part of the verification process. A senior CBP official said that the agency would probably devote more attention to this issue closer to 2013. One purpose of the Softwood Lumber Act of 2008 is to ensure the proper operation and implementation of international agreements related to softwood lumber. CBP has established mechanisms to comply with its requirements. However, officials from USTR, Commerce, and CBP told us the act’s requirements add little direct benefit to their efforts to monitor compliance with the 2006 Softwood Lumber Agreement, although U.S. officials and some industry representatives stated there may be some indirect benefit resulting from the increased scrutiny of softwood lumber imports from Canada. The act does not state what CBP’s reconciliation and verification requirements would be in 2013—when the bilateral trade agreement is currently scheduled to expire. It is unclear how CBP would implement its continuing requirements under the act and what purpose these requirements would have in the absence of an international agreement. To provide Congress with sufficient time to clarify the U.S. 
Customs and Border Protection’s requirements under the Softwood Lumber Act of 2008, we recommend that the Secretary of Homeland Security direct the Commissioner of CBP to report to Congress on how the agency plans to fulfill the requirements of the act upon the expiration of international agreements related to softwood lumber. We provided a draft of this report to the U.S. Customs and Border Protection, Department of Commerce, and Office of the U.S. Trade Representative. We received written comments from CBP and Commerce, which are reprinted in appendixes V and VI. CBP concurred with the report recommendation, stating that it will consult with Congress on how to proceed when the Softwood Lumber Agreement expires. Commerce also concurred with the draft report. We also received technical comments from CBP and USTR, which we incorporated as appropriate. We also provided relevant sections to Canadian officials for technical comment, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of Homeland Security, and the U.S. Trade Representative. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Loren Yager at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix VII. To describe U.S. Customs and Border Protection’s (CBP) processes for meeting the reconciliation and verification requirements of the Softwood Lumber Act of 2008, we reviewed related documents and interviewed CBP officials. 
We analyzed planning and programmatic documents describing CBP reconciliation and verification procedures, reviewed CBP reports covering the results of its efforts, and discussed these results with CBP officials in Washington, D.C. We also traveled to Blaine, Washington, and Buffalo, New York, to interview CBP port officials to determine how they conduct verifications under the act. We met with lumber industry representatives and customs brokers in Washington, D.C.; Blaine; and Buffalo to discuss the impact of the act’s requirements on industry. To better understand how the act’s requirements for reconciliations and verifications contribute to U.S. monitoring of the 2006 Softwood Lumber Agreement, we interviewed knowledgeable officials and obtained information from the Department of Commerce (Commerce), the Office of the U.S. Trade Representative (USTR), and CBP. We also met with lumber industry representatives and customs brokers in Washington, D.C.; Blaine; and Buffalo to discuss the effect of the act’s reconciliation and verification processes on U.S. government agencies’ efforts to monitor compliance with the bilateral trade agreement. To update our June 2009 report about the U.S. government’s efforts to monitor compliance with the 2006 Softwood Lumber Agreement, we obtained documents summarizing the LCIA (formerly the London Court of International Arbitration) decisions and agency documents on compliance concerns. We also discussed the status of current compliance concerns with officials from Commerce, USTR, and CBP. Our review focused on Canada because it is the only country with which the United States has an agreement specifically related to softwood lumber and is by far the largest exporter of softwood lumber to the United States. Shipment-level data for the reconciliations under the bilateral trade agreement were not publicly available. GAO did not independently verify the results of these reconciliations done under the agreement. CBP provided data on U.S.
imports from Canada at the regional level. We compared these CBP regional-level data with Census data for volume and value to assess the accuracy and consistency of the two data sets. We interviewed officials from Canada’s Department of Foreign Affairs and International Trade (DFAIT) to update the status of Canadian efforts to comply with the bilateral trade agreement and its related coordination efforts with U.S. agencies. We conducted this performance audit from December 2008 to December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The 2006 Softwood Lumber Agreement established a framework for managing the U.S.-Canadian softwood lumber trade and includes key provisions that are summarized below: Export measures: The agreement allows Canadian regions to choose between two export control systems, with export measures that vary according to the prevailing monthly price of lumber (see table 2). All of the regions were allocated a percentage of U.S. softwood lumber consumption based on the regions’ historic exports to the United States. That share of a region’s U.S. consumption is used by the Canadian government to calculate quotas. Option A consists of an export charge, but no quota. Additionally, a region is subject to a surge penalty if the total volume of exports for that region exceeds its trigger volume, which is calculated, in part, by its share of U.S. consumption in a month. Option B consists of an export charge and a quota. The United States and Canada are required to exchange information to identify changes in Canadian federal and provincial forest management and timber pricing policies. 
Canada is required to notify the United States of changes made to certain timber pricing or forest management systems and, among other information, provide evidence of how these changes improve statistical accuracy and reliability of a timber pricing or forest management system or maintain and improve the extent to which stumpage charges reflect market conditions. The agreement requires each party to respond to requests from the other for information relevant to the operation of the agreement. The United States and Canada also are required to exchange information to reconcile value and volume data on a region-specific basis. If the two countries are unable to reconcile region-specific aggregated data, the agreement requires the two countries to compare more specific data, including comparing information on the Canadian export permit with that on the U.S. entry summary form. The agreement calls for “complete reconciliation” within 9 months of each quarter where the parties cannot reconcile region-specific data. Anticircumvention: Under article XVII of the agreement, neither party shall take action to circumvent or offset commitments made under the agreement, including any action having the effect of reducing or offsetting the export measures or undermining the commitments set forth in article V. Article XVII(2) of the agreement provides clarification with respect to the types of actions parties consider would or would not reduce or offset the export measures. Some of the actions listed under article XVII(2) include provincial timber pricing and forest management systems as they existed on July 1, 2006, any modifications or updates to those systems that meet specified criteria, and other government programs that provide benefits on a nondiscretionary basis in the form and total aggregate amount in which they existed and were administered on July 1, 2006. For an elaboration of the programs, please see the 2006 Softwood Lumber Agreement, article XVII(2).
Dispute settlement: The agreement has mechanisms to resolve disputes over compliance, which include arbitration under the auspices of the LCIA. In addition, the agreement ended existing U.S. trade remedy investigations. It also established the Softwood Lumber Committee, with joint Canadian-U.S. representation, and several technical working groups to oversee implementation of the agreement. Because of recent low softwood lumber prices, the Canadian softwood lumber industry has been paying the highest export charge rates mandated by the agreement since the agreement took effect. (See fig. 2.) In June 2009, GAO reported on the challenges that U.S. and Canadian officials identified in reconciling the U.S.-entered value and the Canadian export price data. Under the 2006 Softwood Lumber Agreement, the United States and Canada are required to compare and reconcile the import volume and value data from the United States to the export volume and value data from Canada by region on a quarterly basis. As of early November 2009, the two countries had reconciled 6 quarters of volume data but had not been able to fully reconcile the value data for any quarter since the 2006 Softwood Lumber Agreement went into effect. (CBP stated that it planned to have additional meetings with Canadian officials about the reconciliations in November 2009.) We previously reported the factors that U.S. and Canadian officials have identified that make comparing and matching the U.S. import values to Canadian export values challenging. The Canadian value data on the Canadian export permit use an approximate value determined at the time of shipment based on the export price definition in the 2006 Softwood Lumber Agreement, while the U.S.-entered value on the U.S. entry summary form is defined by statute and is expected to be higher because it may include export charges, which are not part of the Canadian export price data. More broadly, factors that may cause the U.S.
values to be different from the Canadian values include the following: (1) inconsistent units of measurement, (2) estimated versus actual values, (3) inconsistent inclusion of export charges in the prices, (4) remanufactured goods, (5) a $500 cap, and (6) a mismatch of shipment dates and entry dates. (For a more detailed discussion of each of these factors, see GAO-09-764R.) CBP officials stated that they have made progress in value reconciliation as the quality of data has improved. They acknowledged that, despite this improvement, larger differences persist at regional levels compared with aggregate countrywide data. CBP officials believe remanufactured goods account for the majority of differences, based on their review of an analysis conducted by Canadian officials. As provided in the 2006 Softwood Lumber Agreement, the U.S. value reported on the U.S. entry summary form is the value of the final finished product, while the Canadian value on the export permit should be the original cost of the wood and should not include the value added by the remanufacturer. According to CBP, the difference between the value of the original wood and that of the final product can amount to thousands of dollars. According to CBP officials, they reviewed an analysis by Canada of 1 quarter, which showed that remanufactured goods accounted for about 5 percent of the total value of softwood lumber shipments for that quarter, but 95 percent of the total value discrepancies. CBP officials told us they have not independently analyzed the impact of remanufacturers on the value differences observed in value reconciliation. They told us that they have not yet developed the programming capacity to identify and separate exports from remanufacturers from other exports. Representatives from the U.S. industry group continue to be skeptical of the reconciliation under the bilateral trade agreement and, based on their own data analysis, believe Canada may be undercollecting export charges.
This analysis, using publicly available data from the U.S. Census Bureau, showed that the actual tax collected is consistently lower than the amount that the representatives estimate should be collected. Representatives from the group told us that they do not believe it is possible for the factors identified by the U.S. and Canadian officials to explain the level of differences in the values they observed. The U.S. and Canadian trade data used in the official reconciliation are not publicly available. GAO did not conduct an independent evaluation of the reconciliation results. However, CBP provided us with data on U.S. imports from Canada at the regional level. Our analysis comparing the CBP data with the Census data revealed many differences and inconsistencies. For example, the regional differences between the CBP and Census value data are not proportional to the size of exports from each region. Quebec accounts for about 20 percent of the exports from Canada, but close to 40 percent of the value differences between CBP and Census data. In addition, the differences between CBP and Census data are usually proportionally larger for the value data than for the volume data. CBP officials stated it is not possible to replicate the official reconciliation using the Census data. U.S. agencies continue to monitor Canada’s compliance with the 2006 Softwood Lumber Agreement and have identified a number of concerns. U.S. agencies monitor compliance through a variety of sources, including notifications from Canada that are required under the agreement, news reports, and provincial and federal government Web sites for announcements of changes to forest policies and programs. According to U.S. officials, they have spent substantial resources to determine whether some Canadian or provincial programs represent a new or substantial change to existing programs that might be exempted from the anticircumvention provision of the agreement. U.S.
agencies state that they investigate their concerns and, where appropriate, request additional information from Canada. Should the concerns remain unaddressed, the United States may resort to the dispute settlement mechanisms contained in the agreement, which can include arbitration under the auspices of the LCIA. LCIA decisions regarding Canada’s calculation of volume measures. The first arbitration regarding Canada’s calculation of volume measures began in August 2007 (LCIA Case No. 7941). The Canadian government contended that adjusting U.S. consumption only applied to provinces under the quota provision, and that the adjustment mechanism only applied beginning in July 2007. The United States contended that the adjustment mechanism applied to calculating expected U.S. consumption for all provinces and should have been used beginning the first quarter of 2007. The arbitration tribunal found that, although the adjustment of expected U.S. consumption did not apply to the provinces without a quota, Canada should have begun applying the adjustment mechanism to the provinces with quotas in January 2007. The arbitration tribunal determined that 30 days from the remedy award was a reasonable period of time for Canada to cure its breach of the agreement. Pursuant to the agreement, the arbitration tribunal determined that if Canada failed to cure the breach within the 30 days, then, as compensation for the breach, Canada would be required to collect an additional 10 percent export charge on softwood lumber products exported to the United States from the option B regions until it had collected CDN$68.26 million (US$54.8 million). On April 2, 2009, the Canadian government requested arbitration to determine whether its proposed payment of US$34 million plus interest to the United States had cured the breach (LCIA Case No. 91312). The U.S. government did not consider Canada’s offer to make a payment as having cured the breach.
In addition, because the United States considered that Canada failed to either cure its breach or impose the compensatory measures determined by the arbitration tribunal, on April 15, 2009, pursuant to the agreement, the United States imposed a 10 percent customs duty on imports of softwood lumber products from Ontario, Quebec, Manitoba, and Saskatchewan. In September 2009, the LCIA issued a decision in which it did not consider Canada’s tender of US$36.66 million (US$34 million plus interest) to the U.S. government as having cured the breach and determined that the remedy required Canada to impose export charges on the involved regions. The LCIA did not rule at this time on whether the United States was required to remove its 10 percent ad valorem customs duty on softwood lumber products from the involved Canadian provinces. The decision encouraged both parties to agree on an amicable settlement regarding this issue. According to Canadian government officials, the Canadian government has developed mechanisms to collect the 10 percent export charge from these provinces. Canada has proposed to the United States that the two countries coordinate on establishing a mutually acceptable date to lift the U.S. duty and impose a Canadian export charge. According to USTR officials, the United States is considering Canada’s proposal. U.S. request to LCIA regarding Ontario and Quebec provincial programs. In January 2008, the United States requested arbitration to determine whether six provincial programs or other measures in Ontario and Quebec circumvent the agreement (LCIA Case No. 81010). The U.S. government contends that these measures include a number of grants, loans, loan guarantees, tax credits, and programs to promote wood production that circumvent the commitments made by Canada in the agreement. Canada maintains that these measures are in full compliance with the agreement. A decision on this case is expected in 2010.
Concern about the large amount of low-grade timber harvested in central British Columbia. U.S. agency officials remain concerned about the large amount of lumber being produced from low-grade timber from the mountain pine beetle-infested British Columbia interior region. Although the grade definitions existed prior to the agreement, U.S. agencies question whether the grading system is being appropriately applied. Lumber producers pay the minimum harvest fee of CDN$0.25 per cubic meter for this low-grade wood. Since the mid-1990s, large sections of central British Columbia have been infested with the mountain pine beetle, a bark beetle that attacks and kills mature lodgepole pine trees. Natural Resources Canada, a federal agency, anticipates that the beetle will kill 80 percent of British Columbia’s mature pine forests by 2013. As a result of the beetle infestation, lumber companies in the British Columbia interior region are currently harvesting a large volume of dead trees. British Columbia’s lumber industry has adopted the practice of heating mountain pine beetle-infested timber to reveal any preexisting cracks, a process that they contend allows for correct lumber grading. U.S. industry contends that this process inflates the amount of low-grade timber and thus reduces costs for British Columbia lumber producers. U.S. agency officials visited British Columbia in summer 2008 to investigate the grading of beetle-killed timber. Subsequently, the United States sent Canada a number of technical questions, including questions on the grading system. In spring 2009, a delegation from British Columbia traveled to Washington, D.C., and briefed U.S. government officials on grading and the mountain pine beetle issues. In October 2009, the delegation again met with U.S. government officials and provided specific responses to each of the outstanding questions that the United States had sent to the province prior to this meeting.
According to USTR and Commerce officials, the United States is now reviewing and analyzing these data and other information provided. Concern about reduced fees for harvesting timber in coastal British Columbia. U.S. government officials have questions about the January 2009 reduction in the fees charged for harvesting timber on the British Columbia coast. The British Columbia Ministry of Forests and Range uses an equation, under the coast market pricing system, to determine the fees charged for harvesting timber from public land. The equation is updated annually to account for changes in the market value of timber and in other factors, such as the cost of road construction or replanting trees, and is also adjusted quarterly to reflect changes in market conditions. The equation was grandfathered into the agreement; however, U.S. officials are concerned with how British Columbia has adjusted the equation. According to British Columbia officials, the January 2009 fee reduction was the result of the confluence of the annual and quarterly updates of the timber fee equation. U.S. agency officials have requested additional information from Canada. USTR officials stated in September 2009 that Canadian officials have invited U.S. econometricians to British Columbia to discuss the details of the adjustments with the provincial officials who made them. Concern about potential abuse of the Temporary Importation under Bond program. CBP headquarters and port officials expressed concern that the Temporary Importation under Bond (TIB) program could be abused by the softwood lumber industry. According to data from CBP, a comparison of TIB imports to total softwood imports shows that TIB represented less than 0.08 percent of total softwood lumber imports for fiscal year 2009. Although officials acknowledge that TIB imports are a small amount of total imports, they stated that they are examining the issue.
TIB is a procedure whereby, under defined circumstances, merchandise may enter the customs territory of the United States temporarily, for a period of up to 1 year. Such goods must be covered by a bond, and the importer must agree to export or destroy the merchandise within a specified time or pay liquidated damages, normally double the estimated duties applicable to the entry. Although softwood lumber products from Canada covered under the Softwood Lumber Agreement are subject to the export measure and export charge, they are not subject to a U.S. import duty. The liquidated damages for products under TIB are limited to $100 per entry. For example, according to CBP port officials in Blaine, some softwood lumber products that enter the United States from Canada under TIB are not required to be accompanied by a permit issued under the Canadian export permit program, because the intent is to manufacture the lumber into wood siding at a U.S. plant. Port officials pointed to the positive economic benefits for local U.S. businesses from such shipments. However, these port officials also raised concerns that they are limited to applying a $100 liquidated damages fee if they are not supplied with proof of export. These officials stated that the $100 liquidated damages would represent a small fraction of the 15 percent Canadian export tax that would normally be applied to softwood lumber exports. The port officials stated that in recent years, about 9 percent of softwood lumber entries at that port were under the TIB program and that for fiscal year 2009, about 5 percent of these entries had not been properly closed out showing export. The officials stated that they are not certain whether the failure to close these TIB movements was a paperwork oversight or represented cases where the goods had stayed in the United States without making formal entry and without paying the Canadian export charge.
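The port officials’ comparison can be made concrete with a short sketch. The entry value below is a hypothetical figure, not data from the report; the 15 percent rate is the export charge rate the officials cited.

```python
# Hypothetical illustration of the gap port officials described between the
# $100 liquidated damages cap on a TIB entry and the 15 percent Canadian
# export charge a regular export would incur. The entry value is assumed.
LIQUIDATED_DAMAGES_CAP = 100   # dollars per TIB entry
EXPORT_CHARGE_RATE = 0.15      # export charge rate cited by port officials

entry_value = 40_000           # hypothetical softwood lumber entry value, dollars
export_charge = entry_value * EXPORT_CHARGE_RATE
penalty_share = LIQUIDATED_DAMAGES_CAP / export_charge
print(f"charge avoided: ${export_charge:,.0f}; cap is {penalty_share:.1%} of it")
```

For an entry of this size, the cap comes to well under 2 percent of the export charge that would otherwise apply, which is the “small fraction” the officials described.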
In addition to the contact named above, Celia Thomas, Assistant Director; Jason Bair; Ming Chen; Karen Deans; David Dornisch; Tim Fairbanks; Rachel Girshick; Grace Lui; and Christina Werth made key contributions to this report. Kate Brentzel and Etana Finkler provided technical support.

In 2006, the United States and Canada signed the Softwood Lumber Agreement. The agreement, among other things, imposed export charges and quotas on Canadian lumber exports to the United States. To assist in monitoring compliance with the agreement, in 2008 Congress passed the Softwood Lumber Act, which imposed several data collection and analysis requirements on the Department of Homeland Security's U.S. Customs and Border Protection (CBP) and required two reports from GAO. This report discusses (1) CBP's processes for meeting the act's requirements and (2) how these requirements contribute to U.S. efforts to monitor compliance with the 2006 Softwood Lumber Agreement. GAO issued a report in June 2009 on U.S. agency efforts to monitor compliance with the 2006 agreement. This report includes an update on these efforts. GAO analyzed information from relevant U.S. agencies, interviewed knowledgeable officials, and discussed these issues with U.S. and Canadian industry representatives. CBP has developed processes to reconcile and verify data provided by the exporter and importer as required by the act, but officials acknowledge continuing issues with data quality. CBP reconciles aggregated export prices from the U.S. entry forms with aggregated export prices from Canadian export permits. To meet the act's verification requirement that the importer has correctly reported the export price, the tax to be paid by exporters to the Canadian government (the export charge), and other information, CBP has created a process within its existing data system to collect these data. However, CBP has acknowledged continuing problems with data quality.
For example, CBP port officials manually enter data into this system, which could lead to miscoding. CBP reported that the initial implementation of the act required extensive effort for the agency, but officials stated that ongoing activities need fewer resources. According to CBP, Department of Commerce, and Office of the U.S. Trade Representative officials, the information produced through the reconciliation and verification requirements under the act adds little assurance of compliance with the 2006 Softwood Lumber Agreement. Some of the act's requirements are to ensure the proper operation of international agreements on softwood lumber and enforcement of these obligations. The agreement with Canada contains mechanisms for monitoring compliance, and, according to U.S. government officials, the added requirements of the 2008 U.S. legislation do not provide the U.S. government with additional assurance of compliance with the bilateral trade agreement. Specifically, CBP officials told GAO the requirements under the act do not provide the United States with assurance that the Canadian exporter paid the export charge, because the United States does not have access to company-level tax data from Canada. While the agreement is scheduled to expire in 2013, the act does not have an expiration date. CBP officials said they have not yet determined how they will fulfill their requirements under the act when the agreement expires, but they would no longer have the estimated export charge data that are used in implementing the act.
Funeral homes, cemeteries, crematories, pre-need plans, and third party sales of funeral goods are all various segments of the death care industry, and the federal and state governments both have a role in regulating the industry. In 1999 and 2003, we reported on various aspects of federal and state regulation of the death care industry. Among other things, we stated that with respect to the federal government’s role in regulating the death care industry, aside from the FTC’s Funeral Rule, there is no other regulation that specifically addresses the marketing practices of the death care industry at the federal level; most regulatory responsibilities regarding the industry are handled at the state level. The FTC’s Funeral Rule, which became fully effective in April 1984, provides, among other things, that consumers are entitled to price information about funeral goods and services before they purchase them, which would enable the consumer to use the information for comparative shopping if he or she wishes. For example, the Rule declares it an unfair or deceptive act or practice for funeral providers—that is, any business that sells or offers to sell both funeral goods and funeral services to the public—to (1) fail to furnish accurate itemized price information to funeral consumers; (2) misrepresent federal, state, local, or other requirements related to the provision of funeral goods and services; (3) require consumers to purchase items they do not want to buy; and (4) embalm deceased human remains for a fee without authorization. Among other things, compliance with the Funeral Rule requires that funeral providers furnish consumers with various price lists. For example, at the beginning of the discussion about arrangements for funeral goods and services, funeral providers must provide the consumer an itemized general price list. Funeral providers must also provide the consumer a casket price list before showing casket options.
FTC staff opinions have also clarified various aspects of the Funeral Rule. For example, FTC staff opinions have provided that if a consumer purchases a casket from a third party vendor, a funeral provider cannot require a consumer’s presence when the casket is delivered to the funeral home or charge a fee for certain services, such as storage of third-party caskets delivered a few days before they are needed. Beginning in October 1994, the FTC initiated a test-shopping enforcement approach, called sweeps, targeting funeral homes in a particular region, state, or city. Under this approach, FTC staff in its regional offices, state investigators (such as those from offices of state attorneys general), or other volunteers (such as members of AARP—formerly known as the American Association of Retired Persons) pose as consumers of funeral goods and services—thereby simulating a funeral transaction—to determine if the funeral home is in compliance with the Rule. In 1996, the FTC implemented the Funeral Rule Offenders Program as a nonlitigation alternative to civil penalty actions for Rule violations. Violators of the Funeral Rule are offered the option to attend this program. Those who choose to enroll must agree to make voluntary payments to the U.S. Treasury equal to 0.8 percent of their average annual gross sales over the prior 3 years and participate in training designed to teach them how to comply with the Rule. A 2009 FTC opinion (Opinion 09-1) explained that while the Funeral Rule generally does not apply to cemeteries, there may be some circumstances in which commercial cemeteries are funeral providers and are obliged to comply with the Rule. For example, if a commercial cemetery provides funeral services and offers or sells funeral goods, it would be obligated to comply with the Rule.
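The Funeral Rule Offenders Program payment described above (0.8 percent of average annual gross sales over the prior 3 years) works out as in this sketch; the sales figures and the function name are hypothetical.

```python
# Sketch of the Funeral Rule Offenders Program voluntary payment described
# in the text: 0.8 percent of a violator's average annual gross sales over
# the prior 3 years. The example sales figures are hypothetical.
def frop_payment(gross_sales_prior_3_years):
    """Voluntary payment: 0.8% of the 3-year average of annual gross sales."""
    average = sum(gross_sales_prior_3_years) / len(gross_sales_prior_3_years)
    return 0.008 * average

# Example: a funeral home with $500,000, $550,000, and $600,000 in gross sales.
payment = frop_payment([500_000, 550_000, 600_000])
print(f"${payment:,.2f}")
```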
A 2008 FTC opinion (Opinion 08-1) provided that a crematory must comply with the Funeral Rule if it offers and sells cremation services and any funeral goods, such as caskets, alternative containers, or urns. According to a 2004 FTC guide, the Funeral Rule also applies to pre-need and at-need funeral arrangements. Sellers of pre-need contracts that act on behalf of a funeral home, but do not provide funeral goods and services, must comply with the Rule. The Bereaved Consumer’s Bill of Rights Act was referred to the House Committee on Energy and Commerce, Subcommittee on Commerce, Manufacturing and Trade, in March 2011, and no further action has been taken as of November 2011. We also reported in 2003 that states vary in their approach to regulating the various segments of the death care industry, and that not all segments are subject to regulation in each state. A 2009 FTC consumer guide provides an example of this, stating that laws of individual states govern the prepayment of funeral goods and services but that protections vary widely from state to state and some states offer little or no effective protection. In addition, some states have incorporated the Funeral Rule or aspects of the Funeral Rule into their statutes. While accurate national data are not readily available on how much consumers spend each year on death care transactions, in 2010, AARP reported that a funeral is one of the most expensive purchases in a person’s life. The FTC reported in 2009 that a traditional funeral costs about $6,000, but that many funerals can cost well over $10,000. According to the National Center for Health Statistics, there were over 2.4 million deaths registered in the United States in 2007—the most recent year for which final data were available. The Casket & Funeral Supply Association of America estimated that about 73 percent of the approximately 2.4 million deaths in 2007 resulted in a traditional casket burial.
The National Funeral Directors Association reported that the average adult funeral cost was $6,560 in 2009. Multiplying this average cost by 1,752,000 (73 percent of the approximately 2.4 million deaths in 2009) provides an estimate of over $11.5 billion spent on funeral costs in the United States in 2009. Although the Casket & Funeral Supply Association of America reported that the majority of people selected burial as the means of final disposition in 2007, other methods of disposition are increasingly being used—such as cremation or burials that have minimal environmental impact. According to the Cremation Association of North America, the number and percentage of cremations is increasing, and the association projects that the national average could be over 55 percent by 2025. Several state and industry officials stated that the increase in cremations can partially be attributed to the downturn in the economy and increased social acceptance of cremation. With the rise in cremations, officials from the Cremation Association of North America stated that the weakened economy has contributed to crematories hiring cheap, untrained labor, which may lead to accidents or problems, and that regulation of this segment of the industry has not kept pace with the increase in cremations. In addition, the use of environmentally friendly or “green” services or burials has received increasing media coverage, and some states have begun to discuss this issue and have proposed or passed legislation specifically addressing environmentally friendly or green burials. Examples of environmentally friendly burials can include the use of caskets or urns that are nontoxic and biodegradable and burials at “green” cemeteries in which the landscape is left in a natural state. Finally, although media reports provide examples of incidents that have occurred in the industry, it is not possible to determine how prevalent these issues are across the death care industry.
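The spending estimate above follows from simple arithmetic on the figures cited in the text; the calculation below merely reproduces it.

```python
# Reproduces the report's back-of-the-envelope estimate: 73 percent of
# approximately 2.4 million deaths, times an average funeral cost of $6,560,
# yields roughly $11.5 billion. All inputs are figures cited in the text.
deaths = 2_400_000             # approximate U.S. deaths
traditional_burial_share = 73  # percent resulting in traditional casket burial
avg_funeral_cost = 6_560       # average adult funeral cost, 2009 (NFDA), dollars

burials = deaths * traditional_burial_share // 100   # 1,752,000 burials
total_spending = burials * avg_funeral_cost
print(f"{burials:,} burials -> ${total_spending / 1e9:.2f} billion")
```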
Our 1999 and 2003 reports on the death care industry found that comprehensive information on consumer complaints was not available because, among other reasons, (1) consumers can complain to a variety of entities and these entities may compile complaint data in various manners and (2) no single entity collects and compiles all complaint data. Further, we reported that not all consumers who experience problems may file a complaint. For example, in our 1999 report, we stated that officials from organizations at all levels told us that factors, such as the emotional component of death, may inhibit a consumer from making a complaint. The challenges in using consumer complaints to determine the extent of the problems that may occur in the death care industry remain the same today. The Funeral Rule has not changed since it went into effect in 1994, and according to FTC staff, implementation of the Funeral Rule has generally remained the same since we last reported on the Rule in 2003. The FTC conducts undercover shopping through enforcement sweeps of funeral homes to ensure compliance with the Funeral Rule and to maintain consumer confidence. Since the Funeral Rule Offenders Program was introduced in 1996, the FTC has shopped over 2,400 of the approximately 20,000 funeral homes that FTC staff stated are in the United States. The FTC reported an overall compliance rate of about 85 percent for all the sweeps conducted since 1996. From 2004 through 2010, the yearly compliance rate fluctuated from 72 to 91 percent, as shown in table 1. FTC staff stated that they have no reliable basis to determine why the compliance rate is lower in recent years. In addition, the majority of state regulators also reported that funeral homes, funeral directors, and embalmers are required to be licensed in their respective states; Arizona was the only state that reported that some funeral homes are not required to be licensed.
For example, of the state regulators who responded to these issues on our survey, 37 of the 38 reported that funeral homes are required to be licensed. State regulators reported that licenses were required to be renewed at various frequencies, if at all, although most reported that licenses had to be renewed at least once every 1 or 2 years. In four of our five case study states, funeral directors and embalmers are required to obtain a license to operate, and such applicants are generally required to (1) pay a fee, (2) pass an exam, (3) obtain some level of education, and (4) have some experience—which may be as an intern or an apprentice. Licensees in each of these four states are required to renew their licenses once every 2 years. With respect to inspections, 35 of the 38 state regulators who responded to this issue on our survey reported that the inspection of funeral homes was required, although the frequency of these required inspections varied. In our case study states, for example, Oregon requires that funeral homes be inspected every 2 years and Tennessee requires funeral homes to be inspected once a year. In Colorado and Illinois, state regulators stated that funeral homes are not inspected on a regular basis, although regulators have the authority to do so. Further, the 32 state regulators who responded to our question regarding the number of inspectors reported having between one and nine inspectors. However, only 1 of the 26 state regulators who provided information on the percentage of time their inspectors spend inspecting funeral homes reported that the state’s inspectors spend 100 percent of their time inspecting funeral homes. As a result of inspections or other enforcement mechanisms, state regulators reported identifying a wide variety of violations and taking various types of enforcement actions against funeral homes, funeral directors, and embalmers.
Of the 38 state regulators who responded to this issue on our survey, 33 reported tracking violations of funeral homes, funeral directors, and embalmers. Twenty-two state regulators provided data on the approximate number of violations identified since 2008, with 14 reporting that there were fewer than 40 violations in their respective states. Of the 26 state regulators who provided narrative responses to our survey about the most frequent violations they identified, the most frequent were violations related to licensing, such as unlicensed or unregistered practice, with 10 state regulators reporting this as being a common violation. Other violations reported by state regulators included those related to unprofessional conduct, FTC Funeral Rule violations, reporting deficiencies, and service issues. Finally, 34 of 38 state regulators who responded to our question about taking enforcement actions against funeral homes, funeral directors, or embalmers reported that they have taken some actions since 2008, including actions ranging from notices of non-compliance and letters of reprimand to suspension of licenses and civil or criminal prosecutions. States also varied in the number of consumer complaints they received regarding funeral homes, funeral directors, and embalmers. Of the 39 state regulators who responded to this issue on our survey, 33 reported that their state tracks data on consumer complaints. For the years 2008, 2009, and 2010, state regulators reported that their state received between 0 and 300 complaints, approximately, each year—although the vast majority reported that their state received fewer than 100 complaints each year. Specifically, of the state regulators who provided data on the number of complaints the state received each year, 20 of 24 reported fewer than 100 complaints in 2008, 19 of 25 reported fewer than 100 complaints in 2009, and 21 of 26 reported fewer than 100 complaints in 2010. Not all the state regulators who reported tracking data on consumer complaints provided the total number of complaints for each year. In addition, consumers may have submitted complaints to an agency other than the state regulatory agency that responded to our survey. Common complaints reported in our case study states included unlicensed practice, overcharging, and customer service concerns. Further, conducting investigations of legitimate consumer complaints was most frequently reported by state regulators who responded to our survey as being the consumer protection that was most effective in protecting consumers.
Most state regulators reported having specific rules or regulations that address some cemeteries that operate in their states. Specifically, of the 42 state regulators who responded to our 2011 survey on the regulation of cemeteries, 37 reported having rules or regulations specific to cemeteries. Further, of the 36 state regulators who responded to the question regarding whether all cemeteries are subject to state regulation, 29 reported that some cemeteries were exempt from regulation in their state. Examples of cemeteries that were exempt from regulation in some states include religious, municipal, family, private, and public cemeteries. The number of cemeteries operating in the states is not always known, as 18 of the 37 state regulators who responded to this issue reported that they did not maintain data on the number of cemeteries that operate in their states. Five state regulators provided data on the number of cemeteries that operated in their states—reporting having as few as 124 to as many as 3,600 cemeteries operating in their states. In addition, many state regulators reported that some cemeteries and cemetery operators are required to be licensed in their respective states.
Specifically, of the 37 state regulators who responded to these issues on our survey, 22 reported that some but not all cemeteries are required to be licensed, 10 reported that no license is required, 4 reported that all cemeteries are required to be licensed, and 1 checked “No response.” With respect to cemetery operators, 20 of the 37 reported that cemetery operators are not required to be licensed, 11 reported that some but not all cemetery operators are required to be licensed, 1 reported that all are required to be licensed, and 5 checked “No response.” State regulators reported that licenses were required to be renewed at various frequencies, if at all. In our case study states, for example, Tennessee requires cemeteries to renew their license once a year, while Oregon requires cemeteries to renew their licenses once every 2 years. With respect to inspection, 21 of the 37 state regulators who responded to this issue on our survey reported that inspections of cemeteries were not required, and those that did require them reported that the frequency of the required inspections varied. In our case study states, for example, Oregon requires cemeteries to be inspected once every 2 years, while in Wisconsin state regulators have the authority to conduct inspections, but according to Wisconsin regulators, these are not done on a regular basis. The 12 state regulators who responded to our survey question regarding the number of inspectors available to inspect cemeteries reported having between zero and nine inspectors. Of the 11 state regulators who provided information on what percentage of their time their inspectors spend inspecting cemeteries, 1 reported that their inspectors spend 100 percent of their time inspecting cemeteries. As a result of inspections or other enforcement mechanisms, state regulators reported identifying a variety of violations and taking various types of enforcement actions against cemeteries and cemetery operators.
Of the 34 state regulators who responded to this issue on our survey, 18 reported tracking violations of cemeteries and cemetery operators, and 11 reported on the approximate number of violations identified since 2008, which ranged from 0 to 122 in their respective states. Specifically, 3 state regulators reported that there were no violations, 4 reported between 1 and 15 violations, 3 reported between 50 and 100 violations, and 1 reported 122 violations since 2008. Not all the state regulators who reported tracking data on violations provided the total number of violations since 2008. Of the state regulators who provided narrative responses to our survey about the most frequent violations they identify, violations included those related to (1) record keeping, (2) maintenance, (3) unprofessional conduct, and (4) licensing. Finally, 22 of 36 state regulators who responded to our question about taking enforcement actions against cemeteries or cemetery operators reported that they have taken some actions since 2008, including actions ranging from notices of non-compliance to monetary fines and civil or criminal prosecutions. However, as pointed out by one of our case study state regulators, although they receive complaints about cemetery maintenance issues, these issues do not normally develop into an actual case that could result in an enforcement action. Of the state regulators who responded to this issue on our survey, 25 reported that their state tracks data on consumer complaints. For the years 2008, 2009, and 2010, state regulators reported that their state received between 0 and 113 complaints, approximately, in each respective year regarding cemeteries or cemetery operators, with the majority reporting that their state received 40 complaints or fewer each year. Common complaints reported in our case study states included maintenance issues and incorrect monument placements.
Conducting investigations of legitimate consumer complaints was most frequently reported by state regulators as being the consumer protection that was most effective in protecting consumers. Most state regulators reported having rules or regulations that specifically address crematories that operate in their states. Specifically, of the 39 state regulators who responded to our 2011 survey on the regulation of crematories, 35 reported having specific rules or regulations. Four of those that reported having rules or regulations also reported that some crematories were exempt from regulation. The types of crematories that were reported to be exempt included pet crematories and a university medical center crematory. State regulators who provided data on the number of crematories reported having between 7 and 208 crematories operating in their states in 2011. In addition, most state regulators reported that crematories are required to be licensed in their respective states but varied on whether a license was required for crematory operators. Of the 35 state regulators who responded to this issue on the survey, 28 reported that all crematories are required to be licensed, 4 reported that some but not all are required to be licensed, and 3 reported that no license is required. With respect to crematory operators, 16 reported that all are required to be licensed, 4 reported that some but not all are required to be licensed, and 15 reported that no license was required. State regulators reported that licenses were required to be renewed at various frequencies, if at all, although the majority reported that licenses had to be renewed at least once every 1 or 2 years. In our case study states, for example, Wisconsin requires crematory operators to renew their registration once every 2 years, while Illinois has no requirement for crematory operators to renew their licenses. With respect to inspections, most state regulators reported that inspections of crematories were required.
Specifically, 28 of the 34 state regulators who responded to this issue in our survey reported that the inspection of crematories is required, and although the frequency of the required inspections varied, about half required crematories to be inspected at least once a year. Of our case study states, for example, Tennessee requires crematories to be inspected once a year, Oregon requires them to be inspected every 2 years, and according to state regulators in Wisconsin, inspections are done if there is a complaint. The 27 state regulators who responded to our question regarding the number of inspectors they have to inspect crematories reported having between one and nine inspectors. However, 20 of the 22 state regulators who provided information on what percentage of time their inspectors spend inspecting crematories reported that their inspectors spend less than 25 percent of their time inspecting crematories. As a result of inspections or other enforcement mechanisms, states reported identifying a variety of violations and taking various types of enforcement actions against crematories and crematory operators. Of the 35 state regulators who responded to this issue on our survey, 31 reported tracking violations of crematories and crematory operators. Twenty-three state regulators provided data on the approximate number of violations identified since 2008—10 reported no violations, 9 reported 1 to 3 violations, and the remaining 4 reported 15 to 43 violations. Of the 16 state regulators who provided narrative responses to our survey about the most frequent violations they identify, violations included those related to (1) record keeping, (2) the handling of bodies or human remains, (3) obtaining the proper authorization to cremate or issues with following the wishes of the person with control over final disposition, and (4) licensing, such as unlicensed or unregistered practice. 
Finally, 12 of 34 state regulators who responded to our question about taking enforcement actions against crematories or crematory operators reported that they have taken some actions since 2008, including actions ranging from notices of non-compliance to monetary fines and civil or criminal prosecutions. Most states also track consumer complaints and reported receiving fewer than 10 consumer complaints a year. Specifically, 32 of the 39 state regulators who responded to this issue on our survey reported that their state tracked consumer complaints. For the years 2008, 2009, and 2010, state regulators reported that their state received between 0 and 7 complaints, approximately, in each respective year regarding crematories or crematory operators, with 0 being the most frequent response for each year. In 2008 and 2009, many state regulators reported that their state received no complaints, and in 2010 more than 45 percent reported that their state received no complaints. Not all the state regulators who reported tracking data on consumer complaints provided the total number of complaints for each year. In addition, consumers may have submitted complaints to an agency other than the state regulatory agency that responded to our survey. Complaints reported in our case study states included procedural concerns, such as cremating without proper identification tags and not obtaining proper authorization before cremation, and environmental concerns. Conducting investigations of legitimate consumer complaints was most frequently reported by state regulators as being the consumer protection that was most effective in protecting consumers. Most state regulators reported having rules or regulations that specifically address sales of pre-need plans—plans that involve the prearrangement and prepurchase of funeral and cemetery goods and services.
Specifically, of the 40 state regulators who responded to our survey on the regulation of the sales of pre-need plans, 38 reported having specific rules or regulations. Seven of the 37 state regulators who responded to the question regarding whether all pre-need sales are subject to state regulation reported that some pre-need sales were exempt from regulation in their state. State regulators reported that the types of pre-need sales that are exempt from regulation in their state included third party sales and sales of cemetery plots. State regulators who provided data on the number of sellers of pre-need plans—which can include companies and their sales agents—in their state reported having up to 1,167 companies that sold pre-need plans and up to 1,697 individual sales agents operating in their states in 2011. One state regulator who responded to this question checked “No response.” State regulators reported that licenses were required to be renewed at various frequencies, if at all; in Illinois, for example, there is no requirement for pre-need sellers to renew their licenses. In addition, 17 state regulators reported that sellers of pre-need plans are required to be associated with a funeral home or cemetery and 8 reported that a seller must be a licensed funeral director. In our case study states, various methods were used to measure pre-need sellers’ compliance with state laws and regulations. For example, Colorado, Illinois, Oregon, Tennessee, and Wisconsin require pre-need sellers to submit annual reports that state regulators review for various things, such as whether funds were properly placed in trust and whether there were abnormal fluctuations in funds. Tennessee also examines pre-need sellers every year, and Colorado also examines the records of pre-need sellers every 5 years. Twenty-five of the 38 state regulators who responded to our question about violations reported tracking violations regarding the sales of pre-need plans.
Fifteen state regulators provided data on the approximate number of violations identified since 2008—5 reported no violations, 4 reported between 1 and 5 violations, 4 reported between 50 and 134 violations, 1 reported 379 violations, and another reported 1,578 violations during this time. Of the 17 state regulators who provided narrative responses to our survey about the most frequent violations they identify, violations related to improper trusting or misappropriating funds were frequently cited by state regulators as being a common violation. Other violations mentioned included record keeping issues, unlicensed practice, and contract issues. Finally, 24 of 38 state regulators who responded to our question about taking enforcement actions against sellers of pre-need plans reported that they have taken some actions since 2008, including actions ranging from notices of non-compliance to revocation or suspension of licenses and civil or criminal prosecutions. According to a state regulator in one of our case study states, although the state has revoked about four to five licenses in the last 10 years with the assistance of the attorney general’s office, the state regulatory agency is very limited in the disciplinary actions it is authorized to take and the process is very slow and costly. In addition, some state regulators reported having other consumer protections in their states with respect to sales of pre-need plans—including consumer protection accounts, trusting requirements, and cancellation and transferability of contracts—although these also varied by state. The following briefly discusses each. Consumer protection account. A consumer protection account collects and maintains funds for the benefit and protection of consumers of pre-need plans.
Of the 40 state regulators who responded to this issue on our survey, 10 reported that their state has a consumer protection fund that would protect consumers of pre-need plans who suffered financial losses because of issues such as fraud, default, or insolvency. Three of our five case study states reported maintaining consumer protection accounts, although the purpose of these accounts varied. For example, in Illinois and Oregon, funds from these accounts are used for consumer restitution, while in Tennessee, funds from this account are used to support general operation and expenses of the state regulator, as well as any receivership actions initiated. The 9 state regulators who provided data on the maximum amount of funds available in their state’s consumer protection account since 2003 reported having between $30,000 and $1.5 million. Trusting requirements. Trusting requirements refer to the amounts of funds from pre-need sales that are required to be deposited into a trust account. Whereas each state’s laws or regulations will dictate applicable trusting requirements, in general, the seller of pre-need goods or services must deposit a specified percentage of a pre-need sale into a trust account to cover the costs of funeral goods and services at the time of death. Of the state regulators who responded to this issue on our survey, 33 of 38 reported that they have specific trusting requirements, although the percentage of pre-need sales revenue that is required to be trusted for funeral and cemetery goods and services varies. Specifically, as shown in table 2, trusting requirements for funeral goods and services tend to be higher than those for cemetery goods and services across all states whose regulators responded to this issue on our survey. Once funds are deposited, trustees in some states may withdraw funds from the trust for things such as administrative fees. 
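The arithmetic behind trusting requirements is simple: the required deposit is the state's trusting percentage applied to the pre-need sale amount. The sketch below illustrates this with hypothetical figures; the actual percentage is set by each state's laws or regulations and varies, as shown in table 2.

```python
# Illustrative sketch only: trusting percentages vary by state, and the
# values used below are hypothetical, not drawn from any particular state's law.

def required_trust_deposit(sale_amount: float, trust_pct: float) -> float:
    """Return the portion of a pre-need sale that must be placed in trust,
    given a state's required trusting percentage."""
    return round(sale_amount * trust_pct / 100, 2)

# Example: a $5,000 pre-need funeral contract in a hypothetical state that
# requires 90 percent of funeral goods and services revenue to be trusted.
print(required_trust_deposit(5000.00, 90))  # 4500.0
```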
Specifically, of the state regulators who responded to this issue on our survey, 14 reported that administrative fees can be withdrawn, 9 reported that no funds may ever be withdrawn, 7 reported that a specified percentage of the interest can be withdrawn, 5 reported that a specified percentage or amount of trust fund dollars can be withdrawn, 3 reported that there are no specific requirements, and 2 reported that any funds over 100 percent of the purchase price can be withdrawn. Canceling and transferring pre-need contracts. Of the 39 state regulators who responded to the question on our survey about cancellation requirements, regulators frequently reported that if a consumer cancels a pre-need contract, the consumer is entitled to receive the principal and interest, as shown in table 3. With respect to consumers’ right to transfer pre-need contracts to another state, of the 40 state regulators responding to this issue on our survey, 12 reported that there are no specific requirements in their state, 11 reported that consumers are permitted to transfer their contract to another state and no penalties will apply, 8 reported that consumers can transfer the contract but penalties may apply, 4 reported that consumers are not permitted to transfer their contracts to another state, and 5 checked “No response.” The amount of funds invested in pre-need plans is not always known because not all states track the amount of funds invested in these plans. Eighteen of the 39 state regulators who reported on whether their state tracked the amount of funds invested in pre-need plans stated that they did track this amount, and 9 reported the amount of funds invested in pre-need plans in 2010, which ranged from $100,000 to $2.9 billion. (See app. II for more information on this issue.) Based on information obtained from our case study states, individual contracts may also be tracked in states.
In Oregon, for example, sellers of pre-need plans are required to provide numbers for each contract in consecutive order to help ensure that each contract can be tracked. Finally, most state regulators reported that they track consumer complaints regarding sales of pre-need plans and reported receiving some complaints. Specifically, 27 of the 39 state regulators who responded to this issue on our survey reported that their state tracks consumer complaints. For the years 2008, 2009, and 2010, state regulators generally reported that their state received between 0 and 25 complaints, approximately, in each respective year regarding sellers of pre-need plans—although 2 state regulators reported that their state received more than 25 complaints in all 3 years, with 180 complaints being the highest number reported in all 3 years. Common complaints reported in case study states included failure to trust funds and contract disputes. Conducting investigations of legitimate consumer complaints was most frequently reported by state regulators who responded to this issue as being the consumer protection that was most effective in protecting consumers. Most state regulators reported that they did not have rules or regulations specifically addressing third party sellers of funeral goods—which include retailers of caskets, urns, and monuments that are not affiliated with a funeral home or cemetery. Specifically, of the 41 state regulators who responded to our survey on the regulation of third party sellers, 13 reported having specific rules or regulations, although 1 of these 13 reported that the specific rule or regulation was that all third party sales were prohibited or were restricted. Further, of the 12 state regulators who responded to this issue on our survey, 5 reported that some third party sellers were exempt from regulation in their state.
State regulators reported that various third party sales were exempt, including cemetery merchandise sales and immediate-need sales of funeral goods. Three state regulators who responded to this question checked “No response.” Although potentially in conflict with the Funeral Rule, one state regulator we surveyed reported that their state prohibits or restricts third party sales. A state regulator from one of our case study states told us that their state law may be in conflict with the Funeral Rule because funeral homes did not have to accept third party caskets. GAO did not undertake to determine whether the practice of any state conflicts with the Funeral Rule as part of this review. Of the states that have specific rules or regulations for third party sellers, states varied on their licensing requirements. Specifically, of the 12 state regulators who responded to our survey question about licensing requirements, 4 reported that all third party sellers are required to be licensed, 3 reported that some were required to be licensed, and 5 reported that no license was required. According to the seven state regulators who reported that licenses were required, the frequency with which licenses had to be renewed varied. State regulators reported identifying some violations. Of the 5 state regulators who provided narrative responses to our survey about the most frequent violations they identify, violations related to not properly trusting funds and failure to provide services were among the violations reported. State regulators also reported receiving some consumer complaints. Of the 41 state regulators who responded to our survey question about consumer complaints, 10 reported that their state tracks consumer complaints. State regulators reported that their state received between 0 and 4 complaints, approximately, regarding third party sellers in 2008 and 2010, and between 0 and 10 complaints in 2009.
The extent to which state regulators reported that their state (1) had specific rules or regulations, (2) required licensing, and (3) required inspections in both 2003 and 2011 varied by industry segment. For example, with regard to funeral homes, 45 of 48 state regulators (94 percent) who responded to our 2003 survey reported that their state had specific rules or regulations for funeral homes, while 38 of 40 state regulators (95 percent) reported this in 2011. In contrast, with regard to cemeteries, 34 of 44 state regulators (77 percent) who responded to our 2003 survey reported that their state had specific rules or regulations for cemeteries, whereas 37 of 42 state regulators (88 percent) reported this in 2011. Figure 2 shows the number of state regulators who responded to our surveys and the number who reported that their state had specific rules or regulations for each of the industry segments in 2003 and 2011. The extent to which state regulators reported that their state required licenses also varied by industry segment between 2003 and 2011. For example, in 2003 and 2011, almost all of the state regulators who responded to our surveys reported that their state required all funeral homes to be licensed. Specifically, 42 of 43 state regulators (98 percent) who responded to our 2003 survey reported that their state required all funeral homes to be licensed, while 37 of 38 state regulators (97 percent) reported this in 2011. In contrast, there was more variation in states’ licensing requirements for cemetery operators in 2003 compared to 2011. For example, whereas 17 of 33 state regulators (52 percent) who responded to our 2003 survey reported that their state did not require cemetery operators to be licensed, 20 of 32 state regulators (63 percent) reported this in 2011. Figure 3 shows the number of state regulators who reported their state’s licensing requirements by each of the industry segments in 2003 and 2011.
Furthermore, the number of state regulators who reported that their state required inspections varied by industry segment between 2003 and 2011. Specifically, 19 of 33 state regulators (58 percent) who responded to our 2003 survey reported that their state did not require the inspection of cemeteries, while 21 of 37 state regulators (57 percent) reported this in 2011. By comparison, with respect to the inspection of funeral homes and crematories, 37 of 43 state regulators (86 percent) who responded to our 2003 survey reported that their state required the inspection of funeral homes, while 35 of the 38 state regulators (92 percent) reported this in 2011, and 33 of 36 state regulators (92 percent) who responded to our 2003 survey reported that their state required the inspection of crematories, while 28 of the 34 state regulators (82 percent) reported this in 2011. Many state regulators who responded to our 2011 survey also reported that their states made changes to their laws or regulations since 2003 that primarily provided clarification or enhanced consumer protections. To a lesser degree, some state regulators also reported that these changes imposed stricter licensing requirements on the various industry segments. State regulators’ views on the extent to which these changes strengthened their regulatory program varied, as shown in figure 4. State regulators who responded to this issue on our 2011 survey reported that changes came about as a result of various factors, as shown in table 4. Some of these changes, and the reason for these changes, were highlighted in our case study states, as shown below. Pursuant to a Colorado bill passed in 2009, funeral homes and crematories must now register with the state—prior to this, no such requirements existed. According to state regulators, this change came about as a result of lobbying efforts from the death care industry.
Officials from a state association stated that the association sought out the regulation of the industry because although it believed that many reputable people operate the industry, a few bad individuals can give the entire industry a poor reputation. In Illinois, the Cemetery Oversight Act, passed in 2010, requires (1) the operators and other specified parties at nonexempt cemeteries to be licensed to operate in the state, (2) cemeteries to conspicuously display a consumer hotline number, and (3) cemeteries to file cemetery maps and enter burials and cremations into a state database. According to state regulators, the bill was passed in response to an incident at an Illinois cemetery where graves were reported to be desecrated and vandalized in a scheme to resell burial plots to unsuspecting members of the public (see more on this incident in app. II). An Oregon law passed in 2009 requires the licensure of death care consultants and the adoption of rules promoting environmentally sound death care practices. According to state regulators, the statutory changes regarding the environmentally sound death care practices will help position them for future technological changes in the industry, such as alternative methods of final disposition. In Tennessee, revisions to death care industry law and regulation in 2007 and 2008 included, among other things, requiring state or commissioner approval for a (1) change in trustee, (2) cemetery sale, and (3) pre-need contract. State regulators stated that these revisions were in reaction to a pre-need incident that involved the looting of about $20 million from pre-need trusts in Tennessee (see more on this incident in app. II). A Wisconsin law passed in 2008 brought more cemeteries under regulation. According to state regulators, as a result of this act, about 1,200 to 1,500 cemeteries fall under registration or licensing requirements, compared to 1991 when 5 cemeteries were required to be licensed. 
Officials also stated that the genesis for the legislative expansion was a general recognition by the state legislature that it was appropriate to have oversight of more cemeteries and that the effort to pass the law was spearheaded by a Wisconsin industry association. See appendixes III through VII for more detailed information on the changes to state law and regulation in the five case study states. To view the survey covering funeral homes, funeral directors, and embalmers and the responding states’ answers to the survey questions, go to GAO-12-91SP. State regulators’ views on the need for the federal government to take a more active role in the regulation of the death care industry varied. State regulators who responded to this issue on our 2011 surveys frequently reported that they did not believe that there was a need for the federal government to take a more active role in regulating the death care industry; however, several also reported that they believe more federal involvement was needed, as shown in figure 5. State regulators who reported that they believe there was a need for the federal government to take a more active role in regulating the death care industry frequently stated that this was because minimum federal standards would help to provide uniformity across states. Other reasons reported included (1) states do not have the resources to regulate the industry effectively and (2) minimum federal standards would help to prevent incidents or scandals from occurring. State regulators who reported that they did not believe there was a need for the federal government to take a more active role in regulating the death care industry frequently reported that this was because the state is a better entity to regulate the industry. Other reasons reported included that current state or federal rules and regulations are sufficient. The FTC and national associations’ support of the Bereaved Consumer’s Bill of Rights Act varied.
In a January 2010 statement, the Commission—which is headed by five FTC commissioners—testified that it supported the goals of the Bereaved Consumer’s Bill of Rights Act, which would extend the key consumer protections of the Funeral Rule to other segments of the death care industry, including cemeteries. In this statement, the Commission stated that an active enforcement program would be essential to ensure compliance with the requirements of the bill, as was the case with the Funeral Rule. FTC staff noted that although the number of cemeteries is unknown, they anticipate that if the bill were enacted, it would likely double or triple the number of entities they have jurisdiction over. A 2010 report by the Congressional Budget Office states that based on information from the FTC, the agency would require five additional staff positions at a cost of about $1 million per year to develop and enforce the new requirements, train staff, and develop educational materials. The Congressional Budget Office estimates that implementing the legislation would cost the FTC about $5 million over a 5-year period. Three of the six national associations we interviewed—including two industry associations and one consumer association—supported the Bereaved Consumer’s Bill of Rights Act. However, officials representing two of the three associations that supported the bill stated that they were concerned that the proposed law would not require that all cemeteries be regulated. As one consumer association stated, “all grieving consumers are entitled to the same minimum protections.” A fourth association—which represents state regulators—reported that it supported the existence of minimum standards and guidelines that could be established at the federal level, particularly since a comparable level of consumer protection does not exist in all states.
However, this association also reported that the regulation of the death care industry should be done at the local level, and as such, the association reported that it would want an opt-out provision in the bill for states with an overall level of protection to consumers equal to or greater than that set forth in the Bereaved Consumer’s Bill of Rights Act. Further, officials representing this association stated that if the bill is enacted, they are concerned that much of the work would be passed to the states. Finally, representatives from one industry association stated that it did not support the Bereaved Consumer’s Bill of Rights Act for various reasons, including that it could impose excessive penalties for some relatively minor matters, such as fines of $16,000 for minor omission of consumer disclosures. Although the level of support for the Bereaved Consumer’s Bill of Rights Act varied, officials representing four of the national associations, including some that supported the bill, stated that they believed the legislation would not ensure that incidents, such as the one that occurred at Burr Oak Cemetery—in which grave sites were reused—would be prevented from occurring in the future. According to documentation from one national association that represents state regulators, the association believes that it is unlikely that a federal system of examinations and investigations would discover such abusive conduct. The association document stated that absent a large appropriation of funds and more staff, the FTC would be severely challenged to establish a large enough footprint to be effective from an enforcement standpoint. This association further reported that it believes that enforcement is the key ingredient in protecting against inappropriate industry practices. 
Officials representing an industry association stated that they believe the incidents that occurred at Burr Oak Cemetery involved actions that were already violations of state laws, and that while laws and regulations can be enacted to provide consequences for violations, they do not prevent people from finding ways to violate the law or regulation. However, officials representing another industry association stated that they believed the Funeral Rule has dramatically improved the way funeral homes operate—making funeral homes more service oriented and resulting in no major incidents since the Rule was implemented—and that if the Rule applied to other segments of the death care industry, such scandals could possibly be prevented.

State regulators reported facing some challenges in regulating the death care industry, and their views on the need for their state government to take a more active role in that regulation varied. In responding to our question about the biggest challenge they experience in regulating the death care industry, state regulators frequently reported that insufficient funds and/or staff to enforce regulations was the biggest challenge in enforcing the state’s rules or regulations for each industry segment. However, when asked the degree to which this factor, as well as other factors, posed a challenge, state regulators reported varying views. For example, 18 of the 36 state regulators of funeral homes who responded to this issue reported that insufficient funds and/or staff was either not a challenge or a minor challenge, while 22 of the 36 state regulators of cemeteries who responded to this issue reported this factor was a major or moderate challenge. State regulators’ views on whether they believe there is a need for their state government to take a more active role in regulating the death care industry varied, as shown in figure 6.
State regulators from three of our case study states reported that their states had sufficient rules and regulations or were on the leading edge of regulation. Regulators from another state stated that they receive very few complaints against the death care industry segments they regulate and as a result do not see the need for more regulation of such entities.

We provided selected excerpts of the draft report to the FTC and the state regulators from our case study states to obtain their views and verify the accuracy of the information provided. We incorporated their technical comments into the report, as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Federal Trade Commission and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact William O. Jenkins, Jr. at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.

To obtain information on the various ways states regulate the death care industry, how that regulation has changed, and to what extent more regulation is needed, we developed and administered five web-based surveys to all 50 states to provide current information on state regulation for each of the five death care industry segments—funeral homes, cemeteries, crematories, sales of pre-need funeral plans, and third party sellers of funeral goods. The surveys are similar to surveys we administered in 2003, and are designed to update and expand on information obtained in the earlier surveys.
We surveyed state officials in all 50 states that were responsible for regulation or oversight of each of the death care industry segments and asked officials about current state laws and regulations, enforcement mechanisms, and consumer protections. We also solicited their views on the sufficiency of federal and state regulation.

Survey development. A GAO social science survey specialist, along with staff knowledgeable about the death care industry, including those who conducted work on two previous reports on the industry, developed the five survey instruments, which are based on surveys that we developed and administered in 2003 for our work on the death care industry. We reviewed the engagement files from our 2003 work on the death care industry, including records of interviews, survey questions and results, and state regulator contact information—which provided a context and foundation for development of our current surveys. We also conducted interviews with officials at the Federal Trade Commission and six national associations to identify any additional issues to follow up on in the surveys. In addition, we conducted a general news and journal literature search to identify emerging issues or trends we might want to address in the surveys. (See GAO, Funeral-Related Industries: Complaints and State Laws Vary, and FTC Could Better Manage the Funeral Rule, GAO/GGD-99-156 (Washington, D.C.: Sept. 23, 1999), and Death Care Industry: Regulation Varies across States and by Industry Segment, GAO-03-757 (Washington, D.C.: Aug. 25, 2003).)

We conducted pretests of each of the five surveys in at least one state to ensure that the questions were clear and concise, and refined the instruments based on feedback we received as a result of the pretests. The bulk of the survey questions were taken from our 2003 surveys and were pretested at that time. We kept many of the original questions unchanged so that the results of the two surveys could be compared.
In selecting the pretest states, we identified several key criteria that guided our selection process. The criteria included (1) ensuring that the state regulates the segment of the industry that the survey addresses and (2) ensuring that the state has made regulatory changes since 2003—so that relevant questions would not be skipped during our pretests. We conducted pretests by phone with officials from Washington State for the funeral home survey, Virginia for the cemetery survey, Kansas for the crematory survey, Mississippi and Louisiana for the sales of pre-need plans survey, and South Carolina for the third party sellers survey. We reviewed the entire survey with the pretest respondents, but focused on selected questions that were new or modified from our prior survey. The state regulatory officials responded to the questions asked, and we discussed any issues regarding the clarity or understanding of the questions. We made notes of the responses and any issues, and adjusted the questions as appropriate.

Identification of survey respondents. To develop an accurate e-mail contact list of state regulatory officials for each of the five segments of the death care industry for all 50 states, we used multiple sources. We started with a list of state regulators from the North American Death Care Regulators Association. We then compared this list to other lists we had obtained, adding additional detail. These other regulator contact lists included our 2003 survey contact list and lists from other national regulator, industry, and consumer associations, including the International Conference of Funeral Service Examining Boards, the Cremation Association of North America, and the Funeral Ethics Organization. We then used our consolidated list to contact, confirm, and update the list of state regulators who would receive our surveys.
We contacted the individuals on this list by phone or e-mail to ensure that they were the proper contacts regarding regulation of the identified industry segment. We attempted to identify officials with direct responsibility for regulation. For some states, this meant we identified a number of different officials as survey recipients because some industry segments were regulated by different state entities. In other states, this meant we identified one official as the survey recipient for all five surveys because one entity was responsible for regulation of all segments of the death care industry. However, if a state did not regulate a specific industry segment, we sought out the office most appropriate to respond to our questions. In some states, more than one entity regulated the same industry segment; in those cases, we selected an official from only one entity to respond to our survey and thus collected information from only one of the regulatory entities.

Survey time frames and response rates. To ensure that we obtained the highest response rate possible, we made the web-based surveys available to the designated state contacts from April 14, 2011, through June 20, 2011. We sent multiple e-mail reminders and made telephone calls to the state officials requesting that they complete the surveys. Over this period, we obtained responses from 40 states covering regulation of funeral homes, 42 states covering regulation of cemeteries, 39 states covering regulation of crematories, 40 states covering regulation of sales of pre-need funeral plans, and 41 states covering regulation of third party sellers of funeral goods. While the overall response rate was relatively high, not all states that completed the surveys provided responses to all the applicable questions. Many of the states that responded to our surveys in 2011 were the same states that responded in 2003.
The response rate for all surveys in 2011 averaged about 81 percent, while the response rate for 2003 averaged about 90 percent. As shown in table 5, a high percentage of the states that responded to each of the segment surveys in 2011 also responded in 2003.

Survey analysis. In analyzing the five surveys, we computed descriptive statistics for all closed-ended survey questions, providing the frequency of specific responses as a proportion of the total number of states for which the question applied. We also developed a closed-ended question data run for each of our five case study states that completed an industry segment survey, and we developed a closed-ended data run for each question by state to be included in the e-supplement companion to this report. Open-ended survey responses were compiled for each of the five segment surveys and reviewed for content. In addition, open-ended responses on the most prevalent types of violations were compiled for all states that responded to the question. We also compared the responses to selected questions with the responses we received in 2003. Differences in the responses from year to year could be attributable to the fact that different states responded in 2003 and 2011; depending on the industry segment, 85 to 98 percent of the states that responded to our 2011 surveys also responded to our 2003 surveys. Since this was not a sample survey, there are no sampling errors; however, the practical difficulties of conducting a survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaires, collecting the data, and analyzing them to minimize such nonsampling errors.
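As a rough check of the figures above, the overall 2011 response rate and the per-question frequency calculations can be sketched as follows. The response counts are taken from this appendix; the sample answers in the frequency example are hypothetical placeholders, since individual state responses appear only in the e-supplement.

```python
from collections import Counter

# States responding to each of the five 2011 segment surveys
# (out of 50 states surveyed), using the counts reported above.
responses_2011 = {
    "funeral homes": 40,
    "cemeteries": 42,
    "crematories": 39,
    "pre-need plans": 40,
    "third party sellers": 41,
}

# Overall 2011 response rate: total responses over total surveys sent.
rate_2011 = sum(responses_2011.values()) / (50 * len(responses_2011))
print(f"overall 2011 response rate: {rate_2011:.0%}")  # about 81 percent

# Frequency of specific responses as a proportion of the states that
# answered a closed-ended question. Hypothetical answers, not survey data.
answers = ["major", "moderate", "minor", "not a challenge", "moderate", "major"]
counts = Counter(answers)
proportions = {answer: n / len(answers) for answer, n in counts.items()}
print(proportions)
```

Dividing 202 total responses by the 250 surveys sent (five surveys to 50 states each) reproduces the roughly 81 percent figure reported above.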
For example, as mentioned earlier, the surveys were developed by our survey specialist in collaboration with our staff with subject matter expertise. In addition, as stated earlier, we pretested the surveys with various states to ensure that the questions were clear and concise. Since these were web-based surveys, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing the opportunity for error. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. An independent analyst checked the accuracy of all computer analyses. However, we did not independently verify the accuracy or completeness of the responses to the surveys. This report does not contain all the results from the surveys. The surveys and a more complete tabulation of the results can be viewed at . The special publication electronic supplement was created in accordance with the above-described methodology to show the individual responses by state officials who replied to each of the questions on our five surveys.

We conducted this performance audit from October 2010 to December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

For the purposes of this report, a pre-need plan is defined as a contractual agreement whereby funeral arrangements, cemetery arrangements, or both are preplanned and prepaid for by an individual prior to his or her death. Generally, the pre-need contract is between the individual for whom the services will be provided and the funeral director or cemetery operator.
The options for paying pre-need expenses vary from state to state, and according to industry information and state regulators we contacted, the most commonly used options are trust accounts and insurance policies. Upon the individual’s death, the representative of the funeral home or cemetery uses the funds held in trust or the amount covered by the insurance policy to provide the designated goods and services. With an insurance-funded plan, the consumer generally purchases, either in a lump sum or by installments, an insurance policy that upon the consumer’s death is paid out to provide the goods and services specified in the contract. These pre-need insurance policies typically have an increasing death benefit to cover future increases in the prices of funeral goods and services. With a trust-funded plan, the seller is generally required to deposit a certain percentage of the funds into a trust that is established in accordance with state law and managed by one or more trustees. State laws vary on how much of the pre-need funds must be placed into a trust, who can qualify as a trustee—such as a bank or other financial institution—and who receives any interest earned on a trust account. The rationale behind this funding method is that interest earned on the trust account will accumulate over time and can be used to cover all or most of any increase in the cost of the goods and services purchased between the time the trust is established and the time the goods and services are provided, which may be a number of years. State regulators from our five case study states told us that they have seen more insurance-funded pre-need contracts than trust-funded contracts in recent years. Pre-need insurance policies and trusts are generally regulated in accordance with state laws and regulations, which vary across states and set out specific requirements in terms of sales, licensing of sellers, and trusting or investing requirements.
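The trust-funding rationale described above reduces to simple compound-growth arithmetic: the question is whether interest earned on the trusted portion keeps pace with price increases until the goods and services are delivered. The sketch below illustrates this; the contract price, trusting percentage, interest rate, inflation rate, and time horizon are all hypothetical assumptions, not figures from any state law or contract in this report.

```python
# Hypothetical trust-funded pre-need plan. All figures below are
# illustrative assumptions, not drawn from any state's requirements.
price_today = 8_000.00   # assumed contract price of the goods and services
trust_pct = 0.90         # assumed share of funds the state requires in trust
interest_rate = 0.04     # assumed annual return earned by the trust
inflation_rate = 0.03    # assumed annual increase in funeral prices
years = 15               # assumed years until the goods/services are needed

# The trust grows with compound interest; the cost of the contracted
# goods and services grows with price inflation over the same period.
trust_balance = price_today * trust_pct * (1 + interest_rate) ** years
future_price = price_today * (1 + inflation_rate) ** years

# Any gap between the future price and the trust balance would have to
# be absorbed by the seller or consumer, depending on contract terms.
shortfall = max(0.0, future_price - trust_balance)
print(f"trust balance after {years} years: ${trust_balance:,.2f}")
print(f"price of goods/services then:      ${future_price:,.2f}")
print(f"shortfall, if any:                 ${shortfall:,.2f}")
```

Under these particular assumptions the accumulated interest more than covers the price increase; with a lower return, a higher inflation rate, or a smaller trusting percentage the trust would fall short, which is one reason state laws differ on how much must be trusted and on who receives the interest.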
There is no uniform approach among states for regulating the pre-need industry. Pre-need arrangements differ from funeral or cemetery arrangements made at the time of death (often referred to as “at-need” arrangements). Pre-need plans also differ from other methods a consumer may use to cover some or all funeral or burial expenses, such as payable-on-death savings accounts and final expense insurance, which provide funds to pay funeral and cemetery expenses but in which the final disposition arrangements have not been made. Another form of death care service is cemetery perpetual care, also called endowment care. This differs from pre-need plans in that it does not involve direct care of the individual at the time of death. Perpetual or endowment care refers to the general care and maintenance of developed portions of a cemetery and the memorials or markers erected thereon and is financed from the income of an established trust fund. Pre-need plans, and their associated contracts, may include a variety of differing provisions. Provisions can vary across states because of differing state laws and regulations, and by individual contract. Table 6 defines some of these provisions. Some industry associations have developed guidelines for pre-need laws that states can consider in developing their laws and regulations. Specifically, the National Funeral Directors Association and the International Cemetery, Cremation, and Funeral Association developed guidelines for pre-need statutes. However, according to one association’s guidelines, the guidelines are not intended to provide states with exact statutory language or to be used in their entirety, but rather are intended to give states things to consider when developing requirements. Furthermore, the National Association of Insurance Commissioners published model regulations for advertising life insurance and for life insurance disclosures that also mention disclosure requirements for pre-need insurance.
In addition, the association published a model regulation that discusses minimum standards for establishing reserve liability and nonforfeiture values. However, this model regulation states that pre-need insurance is not well defined and recognizes that what constitutes pre-need insurance is subject to different interpretations by states. (To view the survey covering pre-need plans and the responding states’ answers to the survey questions regarding their state’s laws and regulations, go to GAO-12-91SP.)

Complete current data on the total funds invested in pre-need plans are not available. According to our survey of state regulators of pre-need plans, less than half of the state regulators responding (18 of 40) reported that they tracked the amount of funds invested in pre-need funeral plans. Of the 18 states reporting the tracking of pre-need funds, 9 provided data on the amount of funds invested in pre-need plans in their state, with amounts for all 9 states totaling $5.9 billion and ranging from $1 million to $2.9 billion per state. According to a 2011 study by a death care industry consulting company, the estimated total dollars in pre-need accounts may have been nearly $35 billion in 2009.

“Thinking ahead can help you make informed and thoughtful decisions about funeral arrangements. It allows you to choose the specific items you want and need and compare the prices offered by several funeral providers. It also spares your survivors the stress of making these decisions under the pressure of time and strong emotions.”

“You may wish to make decisions about your arrangements in advance, but not pay for them in advance. Keep in mind that over time, prices may go up and businesses may close or change ownership. However, in some areas with increased competition, prices may go down over time.
It’s a good idea to review and revise your decisions every few years, and to make sure your family is aware of your wishes.” “Laws of individual states govern the prepayment of funeral goods and services; various states have laws to help ensure that these advance payments are available to pay for the funeral products and services when they’re needed. But protections vary widely from state to state, and some state laws offer little or no effective protection.” “If you’re thinking about prepaying for funeral goods and services, it’s important to consider these issues before putting down any money: What are you paying for? Are you buying only merchandise, like a casket and vault, or are you purchasing funeral services as well? What happens to the money you’ve prepaid? States have different requirements for handling funds paid for prearranged funeral services. What happens to the interest income on money that is prepaid and put into a trust account? Are you protected if the firm you dealt with goes out of business? Can you cancel the contract and get a full refund if you change your mind? What happens if you move to a different area or die while away from home? Some prepaid funeral plans can be transferred, but often at an added cost.” “If your family isn’t aware that you’ve made plans, your wishes may not be carried out. And if family members don’t know that you’ve prepaid the funeral costs, they could end up paying for the same arrangements. You may wish to consult an attorney on the best way to ensure that your wishes are followed.”

States’ guides provide similar educational information and cautions. For example, a pamphlet published by an Oregon regulator discusses pre-need planning and resources and notes that there are many honest and reputable people and companies that offer pre-need funeral planning.
However, the pamphlet also cautions consumers to become educated and obtain complete information on the laws and their rights because “there are unscrupulous con artists who sell overpriced plans or will take your money with no intention of fulfilling their promises.” Likewise, a Massachusetts consumer guide states that preplanning gives consumers the opportunity to shop around and the ability to designate their own preferences. The Massachusetts guide also provides a summary of consumer rights that are protected by law, and includes a checklist for pre-need funeral planning and arrangements. For example, Massachusetts informs consumers that if they wish to prepay for their funerals, the funeral director is to provide a standardized contract approved by the Board of Registration in Embalming and Funeral Directing, which includes an itemized statement of funeral goods and services that is FTC compliant, and a trust document with a bank or insurance policy. Funeral directors are also required to itemize all costs associated with the funeral and burial, specifying those items where the cost is guaranteed and those items where the price may change; explain what will happen if they go out of business or if their funeral home is sold; disclose if they will receive a commission on the sale of an insurance policy; and provide written verification of where funds are being held if a trust account is used.
Massachusetts’s checklist suggests that, when prearranging and prepaying for a funeral, consumers should use the standardized state contract; know which costs are guaranteed and which may change; know the name of the bank trustee if the contract is funded through a trust account, or the insurance company and policy number if the contract is funded through an insurance policy; know whether the funeral director received a commission if the contract is funded through an insurance policy; know what happens if the funeral home is sold or goes out of business; know whether and how the contract can be changed; know that they have 10 days from signing to cancel the contract and get a refund; and notify a family member or legal representative of the arrangement.

“Planning in advance for your own disposition after death can spare your loved ones the anguish of making difficult decisions while in a state of grief. Shopping ahead of time, getting correct information, and planning in advance allows you to make informed decisions before you purchase, and may save you money.”

Also, like other states, the California guide discusses various aspects of pre-need insurance and trusts, and provides information to educate consumers about what is allowable under California law. California’s guide also advises consumers that they should visit and inspect several funeral establishments; compare services, restrictions, rules, and prices; and consider doing the following before they enter into a pre-need contract: ask for a guaranteed price plan and obtain a written estimate of any charges for any items or services that are not included in the plan; ensure that the contract includes a cancellation clause; ask if the funeral arrangements can be transferred to another funeral establishment; and find out where the money is being invested and who the trustees are.
Like the Massachusetts guide, California’s guide suggests that consumers check the license status of a funeral provider, and it provides links explaining key terms and required disclosures.

The pre-need segment of the industry has also come under increasing scrutiny in recent years because of various allegations of fraud and mismanagement of pre-need funds. These allegations have arisen in various states across the country and involve the potential loss of millions of dollars in consumers’ investments in pre-need trusts or insurance contracts. Some of these allegations involve criminal indictments for wire, bank, and mail fraud and theft of pre-need deposits, while others involve civil litigation over the potential misappropriation of funds. Some of these allegations are currently pending as criminal suits, civil suits, or both in various state and federal courts. The following are examples of reported incidents involving the sale of pre-need funeral and cemetery plans.

National Prearranged Services, Inc. In 2010, the U.S. Attorney for the Eastern District of Missouri filed a 50-count federal indictment against officials of National Prearranged Services, a Missouri-based company that sold prepaid funeral services. The indictment alleges fraud and other crimes for conduct spanning at least 35 states, with approximate losses to purchasers, funeral homes, and state insurance guarantee associations ranging from $450 million to $600 million. Missouri and Texas are the two most affected states, where, according to media reports, 85,000 pre-need customers were affected. Charges alleged in the indictment included wire, bank, mail, and insurance fraud; money laundering; and multiple conspiracy charges involving the sale of pre-need funeral services. The violations allegedly took place from 1998 to 2008 and involved a number of illegal schemes.
For example, the indictment alleged that officials withheld pre-need funds from trust and insurance accounts and removed funds from existing accounts for unauthorized purposes. The indictment also alleged that company employees used white-out or cross-outs to change the names of beneficiaries on pre-need insurance applications, including naming the company as sole beneficiary, in order to extract money without the customers’ knowledge. Additionally, the indictment alleged that the defendants concealed their practices from insurance regulators. As a result of the various fraudulent schemes, the company and its associated entities were unable to meet their mounting obligations and collapsed in 2008. The case is ongoing in federal and various state courts.

Forest Hill. In 2007, the District Attorney General for Shelby County, Tennessee, and the Commissioner of the Tennessee Department of Commerce and Insurance filed a complaint in Shelby County against the owners and other officials of Forest Hill Cemeteries and Funeral Homes seeking injunctive and other relief. Subsequently, according to a Shelby County District Attorney press release, Clayton Smart—an owner—and others were indicted for violations in Tennessee that took place from 2004 to 2006 and involved a number of illegal schemes, including money laundering by transferring stolen pre-need funds into accounts in the names of various corporations, entities, or investments owned or controlled by the defendants. According to the press release, the indictment also alleges that the transfers were made under the guise of investing the funds for the benefit of the funeral home trusts and the beneficiaries of the prepaid funeral contracts, but in reality the unauthorized transfers were for the benefit of the defendants, as well as other individuals or relatives. The press release further states that as a result of these transfers, $20 million of the funds were lost.
According to a 2010 Office of the Tennessee Attorney General annual report, Clayton Smart agreed to plead guilty to a charge of theft as part of a global settlement with all the prosecuting jurisdictions. According to a Michigan Office of the Attorney General press release, Clayton Smart also pled guilty to counts of racketeering, embezzlement, and failing to properly trust or escrow funeral, cemetery, or prepaid contract funds. The press release states that Clayton Smart embezzled up to $70 million in cemetery trust funds in Michigan. According to an official from the Tennessee Attorney General’s office, the cases against two of the defendants are still pending.

Illinois Funeral Directors Association. In 2006, an audit by the Illinois Office of the Comptroller determined that the Illinois Funeral Directors Association’s pre-need trust fund was in trouble and underfunded by nearly $40 million. According to the audit, the association, acting as a trustee for pre-need plans, collected unauthorized excess fees of approximately $9.6 million. At the time, the media reported that the trust fund was responsible for paying for the contracted pre-need funerals of a reported 40,000 state residents at the time of need. State regulators told us that they are currently investigating why the trust fund was in trouble and trying to determine who was involved and what, if any, charges should be filed. As of June 2011, state regulators told us that no charges had been filed by the state. In 2009, class action lawsuits were filed by funeral directors who invested pre-need funds in the association’s trust, alleging fiscal mismanagement of the trust fund. The funeral directors alleged that they lost more than $140 million and that some funeral homes could go bankrupt because they have been forced to pay the difference between what funerals actually cost and the inadequate amounts available from the trust fund.

The California Master Trust.
In April 2011, the California Attorney General filed a complaint with the Superior Court of the State of California, County of Los Angeles against the California Master Trust, the California Funeral Directors Association, the Funeral Directors Service Corporation, and other defendants seeking a permanent injunction and restitution to consumers. According to a press release issued by the California Office of the Attorney General, the suit, which seeks to halt illegal activity and seeks restitution of about $14 million with interest, was filed on behalf of the Cemetery and Funeral Bureau of the Department of Consumer Affairs, which regulates the funeral industry in California. The suit was based on a June 2010 audit by the Cemetery and Funeral Bureau that alleges that millions of dollars of consumers’ money paid to the trust was misspent or mismanaged, that defendants paid at least $4.6 million in illegal kickbacks to funeral homes, and that the defendants paid themselves excessive administrative fees. According to the press release, the California Master Trust, which as of April 2011 controlled about $63.5 million, was created in 1985 by the funeral directors to pool the prepaid funeral payments of individual purchasers throughout California. The suit also seeks to wrest control of the trust away from the Funeral Directors Service Corp., a subsidiary of the California Funeral Directors Association, and place it under a new trustee, and seeks a full accounting of the trust’s financial transactions as well as the defendants’ financial transactions with the trust since 2000. Before this complaint was filed, the Funeral Directors Service Corporation, which served as the administrator of the California Master Trust, had filed a complaint in November 2010 with the Superior Court of the State of California, County of Sacramento, which contends, among other things, that the findings of the 2010 audit are incorrect. These cases are still pending in court. 
One theme among these examples is that pre-need sellers were charged with or sued within the context of violating existing laws covering a variety of illegal activities that did not always focus on the state’s laws or regulations governing pre-need plans. For example, the National Prearranged Services indictment cited charges for wire, bank, mail, and insurance fraud; money laundering; and multiple conspiracy charges involving the sale of prepaid funeral services. The Clayton Smart indictment cited charges of failing to appropriately deposit funds collected, transferring funds to be used for unauthorized purposes, failing to submit accurate records, and money laundering. Many of the charges in these cases could apply to financial transactions associated with any business or industry. Nonetheless, officials from two states, Tennessee and Illinois, involved in the incidents described above told us that their states have made changes to existing laws as a result of the incidents that occurred in their respective states. In 2007 and 2008, the state of Tennessee conducted a major review and revision of its pre-need laws in reaction to the problems that occurred in the state. The officials said that in rewriting their laws, they wanted to be proactive and address any other future issues that potentially could arise. However, the officials told us that “morality cannot be legislated,” as determined criminals will find a way to carry out their actions. According to state regulators, Tennessee implemented a number of changes to strengthen its state death care laws, such as requiring state or commissioner approval, as applicable, for a change in trustee, cemetery sales, and pre-need contracts and for rollovers from trust funds to insurance. In addition, a pre-need cemetery consumer protection account was established in 2007, and a pre-need funeral consumer protection account was established in 2008. 
In 2010, legislation passed in the state of Illinois amended its pre-need laws as the result of concerns regarding the Illinois Funeral Directors Association’s management of pre-need trust funds. Amendments enacted through the new legislation added a number of consumer protections. These included establishing a consumer protection fund for pre-need funeral plans and requiring an annual notice to all consumers regarding the status of their funds that includes an explanation of any fees charged by the trustee, clear identification of the trustee or insurance provider as well as the primary regulator of the trustee or insurance provider, and an explanation of the purchaser’s right to a refund. In addition, all pre-need sales are required to be entrusted with an independent trustee that is a corporate fiduciary. In Colorado, two state entities within the Department of Regulatory Agencies regulate some segments of the death care industry. The Office of Funeral Home and Crematory Registration regulates funeral homes and crematories. The office has two staff who spend a portion of their time on these death care-related areas. The Colorado Division of Insurance regulates pre-need contracts. Three staff in the office work on pre-need matters, as well as other insurance-related matters not related to the death care industry. These staff spend about 1/12th of their time on pre-need matters. There are no state-issued rules or regulations specific to cemeteries and third party sellers of funeral goods in Colorado. Licensing requirements. Unlike in 2003, when the state regulator we surveyed reported that funeral homes were not regulated, as of 2010, funeral homes are required to be registered to operate in the state. Funeral directors or embalmers are required to practice at a registered funeral home but are not required to be registered with the state. 
Funeral home applicants for registration are required to, among other things, (1) pay a registration fee, (2) provide a list of services provided at the funeral home, and (3) appoint an individual as designee of the funeral home. Funeral homes must renew their registration each year. According to the state regulator who responded to our 2011 survey on the regulation of funeral homes, there were approximately 187 funeral homes operating in Colorado. Inspection and audit requirements. The state has the authority to investigate activities of a funeral home, but according to officials representing the Office of Funeral Home and Crematory Registration, such inspections are not conducted on a regular basis, as they are generally only done if there is a complaint made against a funeral home. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey on the regulation of funeral homes, the state received four consumer complaints regarding funeral homes in 2010, when it began regulating funeral homes, and identified four violations against funeral homes including ones related to unregistered practice, burying the wrong body, and refusal to release human remains until full payment was received. According to representatives from a Colorado industry association, they also receive consumer complaints and have received about 12 complaints in the last 3 years. Further, the state regulator who responded to this issue on our survey reported issuing two probations since 2008. Cemeteries are not regulated at the state level. This was also the case in 2003 when we surveyed the Colorado state regulator of cemeteries. Licensing requirements. Unlike in 2003 when the state regulator we surveyed reported that crematories were not regulated, as of 2010, crematories are required to be registered to operate in the state. Crematory operators must practice at a registered crematory but do not have to be registered with the state. 
Crematory applicants for registration are required to, among other things, (1) pay a registration fee, (2) provide a list of services provided at the crematory, and (3) appoint an individual as a designee of the crematory. Crematories must renew their registration every year. According to the state regulator who responded to our 2011 survey on the regulation of crematories, there were approximately 55 crematories operating in Colorado. Inspections and audit requirements. The state has the authority to investigate activities of a crematory, but according to officials representing the Office of Funeral Home and Crematory Registration, inspections are not conducted on a regular basis, as they are generally only done if there is a complaint made against a crematory. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey on the regulation of crematories, the state received two consumer complaints regarding crematories in 2010, when it began regulating the crematories. In addition, the state regulator who responded to our survey reported that there were two violations, and reported that the violations typically involved crematories that were not registered. The state regulator who responded to our survey also reported that since 2008, the state had issued one letter of reprimand. Cremation rate. According to the Cremation Association of North America, Colorado had a 63 percent cremation rate in 2009. Licensing requirements. Sellers of pre-need plans must be licensed to operate in the state. This was also the requirement in 2003, as reported by the state regulator who responded to our survey. Prospective licensees are required to, among other things, (1) pay an application fee and (2) provide documentation demonstrating a net worth of at least $10,000. 
According to the state regulator who responded to our survey, there were approximately 75 companies that sell pre-need plans—the majority of which are funeral homes, mortuaries, and cemeteries. Licensed pre-need sellers must renew their licenses once a year. Inspection and audit requirements. The Colorado Division of Insurance has the authority to examine and investigate the pre-need contract seller to determine whether the pre-need contracts or forms of assignment comply with the seller’s certification and Colorado law. According to officials at the Colorado Division of Insurance, pre-need sellers are required to submit an annual report to them, and they review these annual reports to verify that the appropriate amount of money is in the trust and that money was properly funded. In addition, sellers are also required to keep records that the Colorado Division of Insurance is required to examine at least once every 5 years—a requirement that was implemented in 2010. Contract and trusting requirements. Various contract and trusting requirements exist in the state of Colorado. Colorado permits both trust-funded and insurance-funded contracts. According to officials at the Colorado Division of Insurance, there are many more insurance-funded pre-need contracts in force in Colorado now than trust-funded contracts because of the rate of return, although the exact number was not provided. Contracts are required to be price guaranteed, and irrevocable and revocable contracts are both permitted. All pre-need contracts sold in Colorado must contain certain information and disclosures to assist consumers. Required information or disclosures include (1) a clear identification of the purchaser and the beneficiary, (2) a complete description of the goods and services purchased, and (3) the cancellation policy. Sellers are required to trust at least 75 percent of the sales of cemetery and funeral goods and services. 
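Colorado's minimum trusting requirement amounts to a simple percentage calculation on a contract's sales price. The sketch below illustrates the arithmetic; the 75 percent rate comes from the report, but the contract amount and the function name are hypothetical illustrations, not anything prescribed by Colorado law.

```python
# Illustrative sketch (not an official calculation): Colorado requires
# sellers to place at least 75 percent of the sales price of cemetery
# and funeral goods and services into trust.

COLORADO_MIN_TRUST_RATE = 0.75  # minimum share of the sale that must be trusted

def minimum_trust_deposit(sales_price):
    """Return the minimum dollar amount that must be deposited in trust."""
    return round(sales_price * COLORADO_MIN_TRUST_RATE, 2)

# Hypothetical $6,000 pre-need contract: at least $4,500 must be trusted.
print(minimum_trust_deposit(6000.00))  # 4500.0
```

Under this rule, the seller may retain at most 25 percent of the sales price outside the trust at the time of sale.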
A trustee must be a chartered state bank, savings and loan association, credit union, or trust company that is authorized to act as fiduciary and that is subject to supervision by the state bank or financial services commissioner or a national banking association, federal credit union, or federal savings and loan association authorized to act as fiduciary in Colorado. After the initial deposit of the funds into a trust—a minimum of 75 percent in Colorado—according to the state regulator who responded to our survey, only administrative fees can be withdrawn by a trustee. If a consumer cancels a contract, penalties may apply and the amount returned to the consumer may depend on when the cancellation is sought or the terms of the contract. Interest earned on a pre-need trust is the property of the contract seller, according to a consumer guide published by the Colorado Division of Insurance. Consumer protection accounts. Colorado does not maintain a consumer protection account for pre-need contracts. Funds invested in pre-need trusts. According to the state regulator who responded to our 2011 survey on the regulation of pre-need sales, Colorado does not track the amount of money invested in pre-need funds. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey on the regulation of pre-need plans, the Colorado Division of Insurance has taken four disciplinary actions against sellers of pre-need plans since 2008, reporting that the most prevalent violations included (1) unauthorized sellers, (2) failure to place funds into a trust, and (3) the use of pre-need contract forms that did not comply with law. Incidents related to pre-need contracts have occurred since 2003 in Colorado. In one incident, a pre-need seller misled consumers and eluded Colorado’s trusting requirements by offering consumers a pre-need plan that included two separate contracts—one for future services and another for the immediate purchase of goods. 
The seller trusted funds received for the future services contract but not for the goods contract. This issue was uncovered as a result of an investigation by the Colorado Division of Insurance after two consumer complaints were filed with the division in 2008. The pre-need seller agreed to pay a fine for violations identified as a result of the investigation. Further, the seller agreed to trust 75 percent of the total sales of both contracts, which required the seller to increase the amount of money trusted by about $1.5 million. According to officials representing a state industry association, although no consumers had been harmed in this situation, it was the largest amount of money involved in a pre-need incident in Colorado. In 2008, the Colorado Division of Insurance found that another seller did not properly trust funds from 23 pre-need contracts. The seller’s license was suspended, and the seller was ordered to trust the funds appropriately and pay a fine. In another case, a funeral home owner was selling pre-need contracts and not trusting any of the funds, even though the individual’s license had been revoked. The Colorado Division of Insurance reported in 2006 that this incident was uncovered as a result of numerous consumer complaints received by the division. According to officials representing a state industry association, consumers lost all of their funds in this case, totaling about $500,000. The funeral home owner was fined. According to officials representing the Colorado Division of Insurance, the division is limited in the amount of authority it has to take punitive actions. The division’s administrative sanctions include a $1,000 fine per incident and the suspension or removal of a practitioner’s license. However, officials stated that they can refer violators to other agencies, such as the local district attorney for criminal prosecution. 
A person who sells caskets, urns, or other funeral goods but does not provide funeral services is exempt from any requirements in the Colorado Mortuary Science Code. State regulatory officials reported various changes to state laws and regulations regarding the death care industry since 2003. As reported by Colorado state regulators who responded to our 2011 surveys, changes included those that clarified legislation or regulation, enhanced consumer protections, changed the state’s regulatory organization, and imposed stricter licensing requirements. Respondents stated that they believe these changes either slightly or significantly strengthened their regulatory program. Specific examples of these changes are listed below. Pursuant to legislation passed in 2009, funeral homes and crematories are now required to be registered with the state. According to state regulators, this change came about as a result of lobbying efforts from the death care industry. Officials from a state association stated that the association sought the regulation of the industry because although it believed that many reputable people operate in the industry, a few bad individuals can give the entire industry a poor reputation. Legislation passed in 2010 authorizes the Colorado Division of Insurance to use independent contractors to review the contracts from all sellers once every 5 years. According to an official from the division, a 2009 sunset review of legislation recommended requiring these examinations of pre-need contracts because of some improprieties of pre-need sellers that had taken place at the time. This official further stated that other than this new requirement, the state statute regarding pre-need has remained largely unchanged since the original 1995 legislation. In April 2011, a law was passed to ensure that alternative methods of cremation, such as alkaline hydrolysis, also fall under the cremation regulations. 
In Illinois, two state entities directly regulate the death care industry. Some regulatory responsibilities may transition between the two entities under legislative changes that are being made. The Illinois Department of Financial and Professional Regulation has regulatory responsibility for funeral directors, embalmers, and cemetery operators. The department has eight total staff and three investigators. Depending upon how the recently enacted Cemetery Oversight Act is ultimately implemented, additional staff may be added to help with the oversight of cemetery operators. Staff have responsibilities other than dealing with death care-related matters. In addition, two boards within the department also assist with regulation. The Funeral Directors and Embalmers Licensing and Disciplinary Board provides advice and recommendations to the department staff upon request regarding rulemaking and disciplinary decisions. The board is made up of seven members appointed by the Secretary: six licensed funeral directors and embalmers and one public member. The Cemetery Oversight Board consists of the Secretary of the Illinois Department of Financial and Professional Regulation, who serves as chairperson, and eight members appointed by the Secretary; the eight members must include five members who represent segments of the cemetery industry, two members who represent consumer interests, and one member who represents the interests of the general public. The Illinois Office of the Comptroller regulates sales of pre-need plans and crematory operators. The office is authorized 10 staff positions and 10 field auditor positions. According to the state regulator who responded to our survey on third party sellers of funeral goods, there are no rules or regulations specific to third party sellers of funeral goods in Illinois. Licensing requirements. Funeral homes are not regulated, but funeral directors and embalmers are required to be licensed to operate in the state. 
This was also the case in 2003 when we surveyed the Illinois state regulator. Illinois offers a joint license for funeral directors and embalmers. Prospective licensees are required to, among other things, (1) pay an application fee, (2) be at least 18 years of age, (3) complete an internship of at least 1 year under a licensed funeral director or embalmer, (4) pass the requisite exam, (5) complete 30 semester hours of college credit, and (6) have an associate’s or baccalaureate degree in mortuary science from an approved program of mortuary science or an equivalent associate’s degree. Funeral directors and embalmers must renew their joint license every 2 years as well as complete 24 hours of continuing education within a 24-month period. In June 2011, Illinois Department of Financial and Professional Regulation officials reported that there were 2,794 funeral directors and embalmers licensed in Illinois. Inspection and audit requirements. The Illinois Department of Financial and Professional Regulation has the authority to conduct inspections and audits. According to department officials, the department has audited funeral homes in the past but has not done so recently because of resource constraints. During these prior audits, officials stated that they would contact staff at a sample of funeral homes and ask them to produce certain information, such as continuing education records, and in some cases, officials visited funeral homes in person. Consumer complaints and violations. Consumer complaints regarding funeral homes and funeral directors are collected by various entities in Illinois. 
According to the 2009 Cemetery Oversight Task Force report, the Illinois Attorney General receives about 70 complaints each year for cemeteries, funeral homes, and monument companies. According to the Attorney General, complaints against funeral homes were often based on the failure of the funeral homes to provide promised services, the quality of the products or services, or confusion about the cost of services. An official from one industry association stated that it receives about one to two complaints each week. According to this association, it will attempt to resolve the matter if it involves one of its members, and about 75 percent of the time the association is able to do so. Most complaints the association receives are related to a consumer who is unhappy with a service received from a funeral director, but on occasion the association will also get complaints in which consumers claim that they paid for something but did not receive it. Another industry association reported that from January 2002 to June 2011 it had received 305 complaints or inquiries regarding the industry. Complaints consisted of concerns about maintenance, contractual obligations, customer service, business conduct, and general questions. According to officials, the association will intervene on behalf of the consumer regarding complaints against the industry and attempt to resolve these issues. From January 2011 to June 2011, the Illinois Department of Financial and Professional Regulation took disciplinary actions against 11 different funeral directors or embalmers. Specifically, the department took action on four of these licensees because they defaulted on an educational loan or did not pay their state taxes, issued cease and desist orders against two funeral directors who were unlicensed, and reprimanded and fined another for failure to implement sufficient protocols to prevent misidentification of cremated human remains. 
For the remaining four, the Illinois Department of Financial and Professional Regulation either placed them on probation or revoked their licenses for actions that included violation of regulations, unprofessional conduct, or untrustworthiness. Licensing requirements. Licensing requirements for cemetery operators have changed since we surveyed state regulators in 2003. According to officials representing the Illinois Department of Financial and Professional Regulation, in 2003, cemetery operators had to be audited if their cemetery had a care fund of more than $250,000, and licensed if they were selling pre-need plans and were not exempt. This requirement remained the same until passage of the Cemetery Oversight Act in 2010, which requires cemetery operators and customer service personnel to be licensed to operate in the state. However, according to officials from the Illinois Department of Financial and Professional Regulation, since rules implementing applicable provisions of the act have not been approved as of November 2011 and because trailer bills are being discussed that would change the act, not all requirements under the act have been implemented. As presently enacted, the act exempts or partially exempts some cemeteries from its requirements. Cemetery operators of family burial grounds; cemetery operators that have not engaged in any interments, inurnments, or entombments in the last 10 years and do not accept or maintain care funds; and cemeteries that are smaller than 2 acres and that do not accept or maintain care funds are fully exempt from requirements of the act. Cemetery operators of public cemeteries, religious cemeteries, and cemeteries with 25 or fewer interments, inurnments, or entombments in the prior 2 years that do not accept or maintain care funds may apply for partial exemption from requirements of the act. 
Under the act, to become a cemetery operator (referred to as a cemetery authority in the act), prospective licensees must, among other things, (1) pay an application fee, (2) establish that he or she is of good moral character, and (3) provide evidence that the applicant has financial resources to comply with maintenance and record-keeping provisions of the act. Prospective licensees for positions of cemetery manager or customer service employee at a licensed cemetery must, among other things, (1) pay an application fee, (2) be at least 18 years of age, (3) complete a high school education or an equivalent, and (4) pass the requisite exam. The act further provides for license expiration, renewal, and other requirements, which have yet to be implemented. According to an official from the Illinois Department of Financial and Professional Regulation, Illinois does not have current data on the number of cemeteries that operate in the state but will have such data as a result of the Cemetery Oversight Act. Inspection and audit requirements. The Illinois Department of Financial and Professional Regulation has the authority to conduct inspections and audits. In addition, the Cemetery Oversight Act requires that all cemeteries subject to the act submit an annual report to the department, subject to any rules of the department specifying the contents of the required reports. Under the Cemetery Oversight Act, cemeteries are required to keep various records. Cemeteries are required to record burials and cremations in the Illinois Department of Financial and Professional Regulation’s Cemetery Oversight Database. From December 2010 to June 2011, 1,037 cemeteries entered 23,290 burials into the database. The Illinois Department of Financial and Professional Regulation estimates that this is about 75 percent of all required burial entries. 
According to one industry association, although some cemetery operators of smaller cemeteries were initially concerned that the database would be burdensome, once it was implemented, many operators reported the usefulness of having digital records as a result of the new database. Cemetery operators are required to maintain a cemetery map, detailing items such as the location of all plots. Cemeteries are required to provide consumers with a price list for all cemetery products offered for sale. Consumer complaints and violations. Consumer complaints regarding cemeteries or cemetery operators are collected by various entities in Illinois. As stated previously, the Illinois Attorney General receives about 70 complaints each year regarding cemeteries, funeral homes, and monument companies. The most frequent complaints concern cemetery maintenance—such as the upkeep of gravesites—and issues with respect to pre-need contracts. According to officials representing the Illinois Department of Financial and Professional Regulation, the department’s consumer hotline has been in effect since March 2010, and as of December 2010, it had received just over 175 calls, but more than half of the calls were not complaints. Of the calls received, 84 were complaints—about 50 related to maintenance and about 30 related to memorial or marker issues. According to the state regulator who responded to our 2011 survey on the regulation of cemeteries, the complaint hotlines are one of the state’s most effective consumer protections. As stated previously, an Illinois industry association reported that from January 2002 to June 2011 it had received 305 complaints or inquiries regarding cemeteries and funeral homes. Complaints consisted of concerns about maintenance, contractual obligations, customer service, business conduct, and general questions. Licensing requirements. Crematory operators are required to be licensed to operate in the state. 
Prospective licensees are required to, among other things, (1) pay an application fee and (2) obtain a certification from an approved training program for all employees who will operate the cremation unit. There are no requirements for crematory operators to renew their licenses. In June 2011, the Illinois Office of the Comptroller reported that there were 102 licensed crematory operators. Inspections and audit requirements. The Illinois Office of the Comptroller has the authority to conduct inspections and audits. In addition, each crematory operator is required to file an annual report with the Illinois Office of the Comptroller. The report must, among other things, provide the total number of cremations performed at the crematory in the prior year and include an attestation by the licensee that all applicable permits and certifications are valid. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey, no consumer complaints regarding crematories were received in 2008, 2009, and 2010 and no violations against crematories or crematory operators were reported since 2008. Cremation rate. According to the Cremation Association of North America, Illinois had a 34 percent cremation rate in 2009. Licensing requirements. Sellers of pre-need plans are required to be licensed to operate in the state. Prospective licensees are required to, among other things, (1) pay an application fee and (2) provide a detailed statement of their assets and liabilities. Further, according to the state regulator who responded to our survey, a licensee must be associated with a licensed funeral home or cemetery. There are no requirements that pre-need sellers renew their licenses. As of June 2011, the Illinois Office of the Comptroller reported that there were 1,042 pre-need sellers licensed in the state. Inspection and audit requirements. 
The Illinois Office of the Comptroller has the authority to conduct inspections and audits and examine any books or records related to a pre-need licensee. According to officials representing the Illinois Office of the Comptroller, they try to audit pre-need sellers every 4 to 5 years. Given limited resources, officials stated that they try to focus on those businesses with the largest amount of money invested in pre-need. In addition, licensees must file an annual report with the Illinois Office of the Comptroller. According to officials representing the office, they review these annual reports, examining the financial information in the reports to ensure that funds have been properly trusted and there is no abnormal fluctuation from beginning to end of year data. Contract and trusting requirements. Various contract and trusting requirements exist in the state of Illinois. Insurance-funded and trust-funded pre-need contracts are permitted in Illinois. According to officials representing the Illinois Office of the Comptroller, funeral homes have been moving from trust-funded plans toward insurance-funded plans. They explained that consumers who purchase insurance-funded plans are more in control of their funds and that such plans are less risky. According to the state regulator who responded to our 2011 survey, irrevocable, revocable, guaranteed, and nonguaranteed pre-need contracts are all permitted in Illinois. However, pre-need cemetery plans must be sold on a guaranteed price basis. All pre-need contracts sold in Illinois must contain certain information and disclosures to assist consumers. Required information or disclosures include (1) a clear identification of the purchaser and the beneficiary, (2) a complete description of the goods and services purchased, and (3) the cancellation policy. 
Sellers are required to trust 85 percent of the purchase price of outer burial containers; 95 percent of the purchase price of funeral services, personal property, and merchandise; and 50 percent of all cemetery goods and service sales, except outer burial containers (of which 85 percent must be trusted), with a corporate fiduciary. A trustee is generally allowed to withdraw a reasonable fee. In addition, a trustee is required to annually furnish to each purchaser a statement identifying (1) the receipts, disbursements, and inventory of the trust, including an explanation of any fees or expenses charged by the trustee; (2) an explanation of the purchaser’s right to a refund, if any; and (3) the primary regulator of the trust as a corporate fiduciary under state or federal law. With respect to a pre-need cemetery sale, if a seller changes trustees, the trustee must provide written notice of the change to the Comptroller no less than 28 days prior to the change in trustee. According to the state regulator who responded to our survey, consumers can transfer or cancel their contracts, but penalties may apply. According to an Illinois Funeral Directors Association guide, unless a contract is made irrevocable, a consumer may cancel a pre-need contract at any time. The penalties for canceling a pre-need contract vary depending on when the contract is canceled. According to officials from the Illinois Office of the Comptroller, if money is left over in a trust fund for guaranteed contracts, the money should go to the estate. However, if the contract is nonguaranteed, there is no applicable requirement. Consumer protection accounts. Illinois has two consumer protection accounts—one for pre-need cemetery plans and another for pre-need funeral plans. The cemetery account was created in 1986 and the funeral account in 2010. For each pre-need contract sold, sellers must contribute $5 to the respective account. 
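Because Illinois sets different trusting percentages by category, the minimum deposit for an itemized contract is a weighted sum across categories. The sketch below applies the percentages stated above to a hypothetical itemized contract; the category keys, function name, and dollar amounts are illustrative assumptions, not statutory terms.

```python
# Illustrative sketch (not an official calculation) of Illinois's
# category-based trusting percentages:
#   85% of outer burial containers,
#   95% of funeral services, personal property, and merchandise,
#   50% of other cemetery goods and service sales.
# Category names and contract amounts below are hypothetical.

TRUST_RATES = {
    "outer_burial_container": 0.85,
    "funeral_services_and_merchandise": 0.95,
    "other_cemetery_goods_and_services": 0.50,
}

def minimum_trust_deposit(line_items):
    """Sum the required trust deposits across an itemized pre-need contract."""
    return round(sum(TRUST_RATES[category] * amount
                     for category, amount in line_items.items()), 2)

contract = {
    "outer_burial_container": 1000.00,             # 85% -> 850.00
    "funeral_services_and_merchandise": 5000.00,   # 95% -> 4750.00
    "other_cemetery_goods_and_services": 2000.00,  # 50% -> 1000.00
}
print(minimum_trust_deposit(contract))  # 6600.0
```

The category rates, rather than a single flat rate, mean that two contracts with the same total price can carry different trusting obligations depending on how the price is itemized.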
Funds from these accounts are to be used for consumer restitution. According to officials representing the Illinois Office of the Comptroller, as of June 2011, no claims had been made against the funeral account, but the cemetery account was used in 2010; prior to that use, the account had not been used in about 10 years. Funds invested in pre-need trusts. As of June 2011, the Illinois Office of the Comptroller reported that there was over $300 million held in trusts for pre-need funeral plans, over $1.4 billion held in insurance for pre-need funeral plans, and over $71 million held in pre-need merchandise funds. Consumer complaints and violations. Consumer complaints regarding pre-need sellers are collected by various entities in Illinois. According to the state regulator who responded to our 2011 survey on the regulation of pre-need plans, the state had received about 27 consumer complaints regarding pre-need plans in 2008, 46 in 2009, and 21 in 2010. Officials representing the Illinois Office of the Comptroller stated that the common types of complaints received included those related to contract disputes and refund delays. Further, these officials stated that a newer type of complaint involves pre-need plans funded by extended life insurance policies; in these cases, consumers pay a lower monthly payment for a limited time period, at the end of which they are to pay the entire remaining balance. If consumers are unable to pay the remaining balance, they are required to continue to make payments but end up paying significantly more than their purchase is worth. Officials stated that they are looking into this issue to determine whether there is any violation of the law. According to the state regulator who responded to our 2011 survey on the regulation of pre-need plans, there have been approximately 100 violations against licensees for pre-need sales since 2008.
The top three most prevalent types of violations noted by the state regulator who responded to this issue were (1) improper entrustment of funds, (2) improper fiduciary oversight of funds or improper withdrawal, and (3) contract language failing to meet statutory requirements. Officials representing the Illinois Office of the Comptroller stated that they are very limited in the types of disciplinary actions that they can take against licensees. For example, the process to revoke or suspend a license is slow, and the proceedings are very costly. Officials contrasted this with other states, where a license is automatically suspended if a licensee does not file the appropriate information. Officials noted that they believe one of the benefits of the Cemetery Oversight Act is that, between the Illinois Department of Financial and Professional Regulation and the Illinois Office of the Comptroller, the state will likely be able to take more actions. Illinois Office of the Comptroller officials also stated that they would like to have a licensee lookup system similar to the one the Illinois Department of Financial and Professional Regulation has for its funeral director and embalmer licensees—which is available for public use. The Illinois Office of the Comptroller is discussing doing this but needs to define terms, such as what is considered a significant issue. According to the state regulator who responded to our 2011 survey on the regulation of pre-need plans, other than rules or regulations that generally apply to all businesses, Illinois does not have rules or regulations in place that specifically address third party sellers of funeral goods. State regulatory officials reported various changes to state laws and regulations regarding the death care industry.
As reported by the state regulators who responded to our 2011 surveys, changes included those that enhanced consumer protections, clarified legislation or regulation, reorganized the state’s regulatory structure, and imposed stricter licensing requirements; regulators reported that these changes either slightly or significantly strengthened the state’s regulatory program. Specific examples of these changes include the passage of the Cemetery Oversight Act and the amendment of the Illinois Funeral or Burial Funds Act. The Cemetery Oversight Act was passed in 2010 in response to a reported incident at an Illinois cemetery and to address task force recommendations, as indicated by officials from the Illinois Department of Financial and Professional Regulation. Workers at an Illinois cemetery were reported to have desecrated and vandalized graves in a scheme to resell burial plots to unsuspecting members of the public. As a result of these allegations, the Governor created the Cemetery Oversight Task Force to review the incident and make recommendations. The task force concluded that, among other things, the lack of regulatory oversight was a contributing factor to the criminal scheme that occurred at the cemetery. Recommendations made by the task force included the following: (1) consolidate the regulatory authority of funeral and burial practices to the Illinois Department of Financial and Professional Regulation, (2) consider the adoption of new legislation that provides for the licensure of cemetery managers and ensures that only qualified persons are authorized to own or operate a cemetery, and (3) consolidate and amend existing statutes.
Among other things, the Cemetery Oversight Act requires certain cemetery operators to be licensed to operate in the state, requires that cemeteries conspicuously display the department’s consumer hotline number, requires cemeteries to file cemetery maps, and requires cemeteries to enter burials and cremations into a database developed by the Illinois Department of Financial and Professional Regulation. According to officials from the Illinois Department of Financial and Professional Regulation, because of concerns about the costs to cemeteries, particularly smaller cemeteries, of meeting requirements under the Cemetery Oversight Act, no rules had been issued as of November 2011. According to one Illinois industry association, the association supported the act despite knowing that there were concerns and more work would need to be done before the act was fully implemented. An official representing this association stated that, as a result of the incident at the Illinois cemetery, the political climate demanded that some legislation be passed. According to another industry association, laws that addressed the conduct that occurred at this cemetery were already in place prior to the Cemetery Oversight Act. The Illinois Department of Commerce and Economic Opportunity analyzed the potential impact of the Cemetery Oversight Act and agreed that the act would have a significant impact on approximately 119 small businesses. According to officials from the Illinois Department of Financial and Professional Regulation, trailer bills have since been introduced to address the concerns related to the act. The Illinois Funeral or Burial Funds Act was amended in 2010 in response to concerns regarding an Illinois association’s management of pre-need funds (see more on this issue in app. II).
The new law, among other things, (1) established a consumer protection fund for pre-need funeral contracts and (2) required that all pre-need sales be entrusted with an independent corporate fiduciary. Although the state regulators reported that changes such as the Cemetery Oversight Act and the amendments to the Illinois Funeral or Burial Funds Act have strengthened Illinois’s regulatory program, some state association representatives in Illinois stated that they believe these laws would not have prevented incidents similar to those that occurred at the Illinois cemetery or with the funeral association trust. State regulators stated that there is no way to be sure whether the changes to the laws would have prevented these kinds of incidents, but that the changes may have allowed such incidents to be detected earlier. Further, state regulators in Illinois stressed the importance of consumer education and whistleblower protections to help prevent and detect future problems. In Oregon, two entities regulate the death care industry. The Oregon Mortuary and Cemetery Board is responsible for regulation of funeral homes, funeral directors, embalmers, cemeteries, crematories, and salespersons of trust-funded pre-need plans, among others. The board is made up of 11 members who are appointed by the Governor. Three members are required to be representatives of cemeteries, 2 members must be licensed funeral service practitioners, 1 must be a licensed embalmer, 1 must be a representative of a crematory, and 4 must be public representatives. In addition, the Oregon Mortuary and Cemetery Board has 5.71 full-time equivalent staff, including one full-time inspector and one full-time investigator. The Division of Finance and Corporate Securities within the Department of Consumer and Business Services has regulatory responsibilities for pre-need trust accounts.
The Department of Consumer and Business Services has 0.8 full-time equivalent staff for registration, examination, and regulation of pre-need trusts and contracts. There are no rules or regulations specific to third party sellers of funeral goods in Oregon. Licensing requirements. Funeral homes, funeral directors, and embalmers must be licensed to operate in the state, which was also the case in 2003, as reported by the Oregon state regulator responding to our survey. Funeral home applicants must, among other things, (1) pay a fee and (2) disclose that the home will be operated by a licensed funeral service practitioner. Prospective funeral director and embalmer licensees are required to, among other things, (1) pay a fee, (2) pass an exam, and (3) have 1 year of prior experience. Embalmer licensees must also have graduated from an accredited program of funeral service education, and funeral service practitioners must generally have graduated from an appropriate associate’s degree program. Funeral home, funeral director, and embalmer licensees are required to renew their licenses every 2 years. According to the state regulator responding to our 2011 survey on the regulation of funeral homes, there were approximately 200 funeral homes operating in Oregon. Inspection and audit requirements. The Oregon Mortuary and Cemetery Board is required to inspect funeral homes every 2 years, but may also conduct random inspections at other times. In conducting inspections, the board utilizes a standard funeral establishment inspection checklist. According to an official representing the board, items on the checklist include (1) sanitation of the facility, (2) whether the facility’s license is posted, (3) whether price lists are available and contain required disclosures and no misrepresentations, and (4) the accuracy and completeness of arrangement records.
According to officials representing the Oregon Mortuary and Cemetery Board, resources are not available to do many on-site inspections. Consumer complaints and violations. According to the state regulator responding to our 2011 survey on the regulation of funeral homes, the state received approximately 60 complaints in 2008, 45 in 2009, and 43 in 2010. According to an official from the Oregon Mortuary and Cemetery Board, the typical complaints received regarding funeral homes involve (1) overcharging; (2) inappropriate conduct, such as being rude; (3) unlicensed activity; and (4) misrepresentation, such as telling consumers something is required when it is not or advertising that they have the lowest prices when they do not. As a result of complaints, applications, or inspections, an official from the Oregon Mortuary and Cemetery Board reported that the board had 105 cases opened against funeral homes over a 3-year period—2008, 2009, and 2010. Further, the state regulator who responded to this issue on our survey reported taking various enforcement actions since 2008, including 42 non-compliance actions, 8 fines, 4 probations, 4 revocations of licenses, and 41 civil or criminal prosecutions. Licensing requirements. In general, a person may not conduct the business of operating a cemetery without receiving a license—that is, a certificate of authority. In general, “operating cemeteries”—defined as cemeteries that (1) perform interments; (2) have fiduciary responsibilities for endowment care, general care or special care funds; or (3) have outstanding pre-need service contracts for unperformed services—must be licensed. Cemetery applicants must, among other things, pay a fee and pay biennial renewal fees. Exempt operating cemeteries—cemeteries with 10 or fewer interments annually—are not required to pay the biennial renewal fees but must maintain licensure and pay a principal fee when a new manager is assigned. 
A nonoperating cemetery that is not a historic cemetery—any burial place that contains the remains of one or more persons who died prior to February 14, 1909—must be registered with the Oregon Mortuary and Cemetery Board but does not need to be licensed. According to an official representing the Oregon Mortuary and Cemetery Board, there were at least 454 operating cemeteries in Oregon. Inspection and audit requirements. The Oregon Mortuary and Cemetery Board is required to inspect licensed cemeteries every 2 years, but may conduct random inspections at other times as well. In conducting inspections, the board utilizes a standard cemetery inspection checklist. According to an official representing the board, items on the checklist include whether a cemetery map, cemetery rules, and required records of interment and ownership exist. According to officials representing the Oregon Mortuary and Cemetery Board, resources are not available to do many on-site inspections. In addition, licensed cemeteries are required to keep a detailed, accurate, and permanent record of all transactions that are performed for the care and preparation and final disposition, including all remains interred or cremated and the name of the purchaser, among other information. Consumer complaints and violations. Consumer complaints regarding cemeteries and cemetery operators are collected by various entities in Oregon. According to the state regulator who responded to our 2011 survey, the state received approximately 23 complaints against cemeteries in 2008, 15 in 2009, and 15 in 2010. According to an official representing the Oregon Mortuary and Cemetery Board, this does not include cases opened against applicants for licensure and licensed individuals.
According to an official representing the Oregon Mortuary and Cemetery Board, the typical complaints received against cemeteries include failure to maintain grounds; operating without a license; failure to follow through with agreed-upon arrangements, such as markers not being installed in a timely manner; double-selling a grave; inaccurate record keeping; and failing to properly supervise pre-need salespersons. Further, the state regulator who responded to this issue on our survey reported taking various enforcement actions since 2008, including 7 non-compliance actions, 2 fines, and 20 civil or criminal prosecutions. According to guidelines from the Oregon State Mortuary and Cemetery Board, with the exception of egregious or continuing violations, deficiencies noted during routine inspections rarely lead to formal disciplinary action. Licensing requirements. A person may not conduct the business of operating a crematory without receiving a license—that is, a certificate of authority. In 2003, the Oregon state regulator who responded to our survey reported that crematories were required to be licensed to operate in the state, but that crematory operators were not required to be licensed. Crematory applicants must, among other things, pay a fee. Certificates of authority require renewal every 2 years. According to the state regulator who responded to our 2011 survey on the regulation of crematories, there were approximately 65 crematories operating in Oregon. Inspections and audit requirements. The Oregon Mortuary and Cemetery Board is required to inspect crematories every 2 years, but may conduct random inspections at other times as well. In conducting inspections, the board has a standard crematory inspection checklist. Items on the checklist include (1) whether the crematory license is posted and (2) whether documentation of permanent records of all transactions performed for final disposition exists.
According to officials representing the Oregon Mortuary and Cemetery Board, resources are not available to do many on-site inspections. Consumer complaints and violations. Consumer complaints regarding crematories and crematory operators are collected by various entities in Oregon. According to the state regulator who responded to our 2011 survey on the regulation of crematories, the state received approximately five complaints in 2008, two in 2009, and two in 2010. An official representing the board told us that the most frequent complaints received against crematories included failure to provide the family with the deceased’s personal items and cremating without required identification tags. Officials also stated that with the increase in the cremation rate over time, there has been an increase in complaints against crematories. As a result of complaints, applications, or inspections, officials representing the Oregon Mortuary and Cemetery Board reported that they had 16 cases opened against crematories over a 3-year period—2008, 2009, and 2010. Further, the state regulator who responded to this issue on our survey reported taking various enforcement actions since 2008, including two non-compliance actions, five fines, and five civil or criminal prosecutions. Cremation rate. According to the Cremation Association of North America, Oregon had a 69 percent cremation rate in 2009. Licensing requirements. An entity wanting to sell pre-need trusts must be certified by the Department of Consumer and Business Services (certified providers) and a salesperson employed by a certified provider must be registered with the Oregon Mortuary and Cemetery Board either as a pre-need salesperson or, among other things, licensed as a funeral service practitioner or embalmer. Individual sellers of insurance-funded pre-need plans must be insurance agents or providers, and are regulated by the Insurance Division of the Department of Consumer and Business Services. 
A master trustee—an entity that is not a certified provider but that has fiduciary responsibility for the uniform administration of funds delivered to it by a certified provider for the benefit of purchasers of pre-need contracts—must also be registered with the Department of Consumer and Business Services. Pre-need salespersons are required to renew their registration with the Oregon Mortuary and Cemetery Board every 2 years. Submission of required annual reports and fee payments constitutes renewal of a certified provider’s and master trustee’s registration. According to the state regulator who responded to our 2011 survey on pre-need plans, there were approximately 905 salespersons and 223 companies (entities) licensed and operating in Oregon. Inspection and audit requirements. The Department of Consumer and Business Services has the authority to audit the records of a certified provider or a master trustee. In addition, certified providers and master trustees are required to file an annual report with the Department of Consumer and Business Services. According to department officials, they review these reports to identify the types of investments each master trustee is using and to ensure that certified providers have properly trusted funds they have received from consumers. Contract and trusting requirements. Various contract and trusting requirements exist in the state of Oregon. According to state regulators, trust-funded and insurance-funded pre-need plans are permitted in Oregon. According to the state regulator who responded to our 2011 survey, irrevocable, revocable, guaranteed, and nonguaranteed pre-need contracts are all permitted in the state but are not required. Each pre-need contract sold must contain certain information and disclosures.
Required information or disclosures include (1) identification of the purchaser, (2) a complete description of the goods and services purchased, (3) the master trustee or depository that will be holding the funds, and (4) identification of whether the contract is guaranteed or nonguaranteed. If the contract is guaranteed, the contract must disclose that the seller is allowed to retain 10 percent of the contract sales price. In addition, pre-need sellers are required to number each contract in consecutive order so they can be individually tracked. According to officials from the Department of Consumer and Business Services, certified providers must deposit pre-need funds with one of the state’s seven master trustees or in a depository (financial institution or trust company). Pre-need trust funds that are placed in a depository may only be invested in one of the following: (1) certificates of deposit; (2) U.S. Treasuries; (3) issues of U.S. government agencies; (4) guaranteed investment contracts; or (5) banker’s acceptances or corporate bonds rated A or better, as specified in statute. Further, officials stated that there are no limitations on the investment of pre-need trust funds placed with a master trustee. For guaranteed pre-need contracts, 90 percent of the amounts received for the costs of funeral and cemetery goods and services must be trusted. For nonguaranteed contracts, 100 percent of the costs of funeral and cemetery goods and services must be trusted. If a cemetery is not a certified provider, it must have a surety bond and must trust at least 66-2/3 percent of the costs of cemetery goods, such as vaults and markers, that will be installed in an endowment care cemetery. Master trustees may pay certain fees and expenses from the earnings of the trust, limited to 2 percent of the value of the trust per year and subject to other conditions established in statute.
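The Oregon trusting minimums and the trustee fee cap described above can be sketched as follows. This is an illustration only: the percentages are from the text, while the function names and sample amounts are hypothetical.

```python
# Sketch of the Oregon pre-need trusting minimums described above; function
# names and sample amounts are hypothetical illustrations.

def oregon_trust_minimum(amount_received, guaranteed, certified_provider=True):
    """Minimum amount that must be trusted out of funds received."""
    if not certified_provider:
        # A non-certified cemetery must trust at least 66-2/3 percent of the
        # costs of covered cemetery goods (and must carry a surety bond).
        return round(amount_received * 2 / 3, 2)
    # Certified providers: 90 percent for guaranteed contracts,
    # 100 percent for nonguaranteed contracts.
    return round(amount_received * (0.90 if guaranteed else 1.00), 2)

def max_annual_trustee_fee(trust_value):
    """Fees from trust earnings are capped at 2 percent of trust value/year."""
    return round(trust_value * 0.02, 2)

print(oregon_trust_minimum(3000.00, guaranteed=True))                 # 2700.0
print(oregon_trust_minimum(3000.00, guaranteed=False))                # 3000.0
print(oregon_trust_minimum(3000.00, True, certified_provider=False))  # 2000.0
print(max_annual_trustee_fee(100000.00))                              # 2000.0
```

The 10 percent the seller may retain on a guaranteed contract is simply the complement of the 90 percent trusting minimum.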
According to officials from the Department of Consumer and Business Services, irrevocable trust-funded pre-need plans cannot be converted to insurance-funded pre-need plans. A consumer may cancel a revocable contract at any time and is entitled to receive the principal invested, plus any interest that has accrued, less any amount for service performed or merchandise delivered. Consumer protection accounts. Oregon has a funeral and cemetery consumer protection account, which is funded from a $5 fee assessed for each pre-need contract sold. The purpose of the fund is to provide purchasers who have suffered pecuniary loss arising out of pre-need contracts with an opportunity for restitution if the provider does not have the assets or means to meet these obligations. According to officials representing the Department of Consumer and Business Services, the account was recently utilized after a small cemetery never properly trusted its funds. This was discovered when the cemetery was sold to a new owner. In November 2011, officials from the Department of Consumer and Business Services stated that to date, the fund has been used to compensate more than 160 of these purchasers for a total of over $248,000. Further, officials stated that prior to these recent payouts, the account had just over $1.1 million in it in 2010—the maximum amount in the account since 2003. Funds invested in pre-need trusts. Officials from the Department of Consumer and Business Services reported in November 2011 that there was approximately $108 million in pre-need funds invested through master trustees and depositories in Oregon in 2010. According to state regulators, some pre-need trust accounts have lost interest income because of the economy, but no trusts have lost principal. Consumer complaints and violations. Consumer complaints regarding pre-need contracts are collected by various entities in Oregon. 
As a result of complaints or inspections, officials representing the Oregon Mortuary and Cemetery Board reported that they had 14 cases opened related to pre-need sales complaints against providers certified by the Department of Consumer and Business Services or against individual board licensees selling pre-need arrangements over a 3-year period—2008, 2009, and 2010. Further, officials from the Department of Consumer and Business Services reported in November 2011 that they had identified approximately 171 violations since 2008, with the top three types of violations as follows: (1) contract funds not being trusted, (2) merchandise being listed as delivered when it was not, and (3) misrepresentation of a guaranteed versus a nonguaranteed contract. According to state regulators we surveyed in 2003 and 2011, other than rules or regulations that generally apply to all businesses, Oregon does not have rules or regulations in place that specifically address third party sellers of funeral goods. State regulatory officials we surveyed reported various changes to state laws and regulations regarding the death care industry. As reported by Oregon state regulators who responded to our 2011 surveys, changes included those that clarified legislation or regulation, enhanced consumer protections, and imposed stricter licensing requirements; these changes either slightly or moderately strengthened the state’s regulatory program. Respondents reported that these changes were a result of lobbying efforts of the death care industry and proposals from state regulatory agencies. Specific examples of these changes include provisions of laws passed in 2007 and 2009.
A bill passed in 2007 gave the Department of Consumer and Business Services authority to (1) issue emergency orders to restrict or suspend certain certificates or registrations or to order certified providers or master trustees to cease and desist from specified conduct, and (2) appoint a successor certified provider for a cemetery or funeral home if, among other reasons, it is appropriate to protect the interests of the purchasers and beneficiaries of pre-need contracts. An Oregon law passed in 2009 (1) required the licensure of death care consultants, (2) provided for the establishment of rules promoting environmentally sound death care practices, and (3) expanded the definition of cemetery. A death care consultant is defined as an individual who offers, for payment, consultations directly relating to the performance of funeral or final disposition services. Prospective death care consultant licensees are required to, among other things, pay a fee and pass an exam. Death care consultants must renew their licenses every 2 years. According to state regulators, the statutory changes regarding environmentally sound death care practices will help position the state for future technological changes in the industry, such as alternative methods of final disposition. In addition, with the passage of this law, the definition of a cemetery was expanded to include a scattering garden or other designated area, above or below ground, where a person may pay to establish a memorial of cremated remains, as well as a cenotaph whose primary purpose is to provide an area where a person may pay to establish a memorial honoring a person whose remains may be interred elsewhere or whose remains cannot be recovered. In Tennessee, two entities within the Department of Commerce and Insurance regulate the death care industry. The Tennessee State Board of Funeral Directors and Embalmers regulates funeral homes, funeral directors, embalmers, and crematories.
The board has seven members who are appointed by the Governor—six of these members are required to be licensed funeral directors, and the other member must not be affiliated with the funeral business. In addition, the board has three field representatives who inspect funeral home and crematory establishments, two administrative staff, one litigator, and one staff attorney shared with Burial Services. Burial Services regulates cemeteries and pre-need sales. Burial Services has four auditors who examine cemeteries and pre-need sellers, two administrative staff, and one staff attorney it shares with the Board of Funeral Directors and Embalmers. There are no rules or regulations specific to third party sellers of funeral goods in Tennessee. Licensing requirements. Funeral homes, funeral directors, and embalmers are required to be licensed to operate in the state, which was also the case in 2003 as reported by the Tennessee state regulator who responded to our survey. Funeral home applicants must, among other things, pay an application fee and provide a list of all employees. Prospective funeral director and embalmer licensees must, among other things, (1) pay an application fee, (2) pass an exam, (3) complete a specified number of hours in a funeral service or mortuary sciences program, and (4) complete an apprenticeship. Funeral homes, funeral directors, and embalmers are required to renew their licenses every 2 years. In addition, each funeral director and embalmer licensee is required to complete 10 hours of continuing education during each licensing period. According to officials representing the Tennessee Department of Commerce and Insurance, there were 562 establishments operating in Tennessee—which includes funeral homes, crematories, and embalming services. Inspection and audit requirements. The Board of Funeral Directors and Embalmers is required to inspect funeral homes once a year.
Board inspectors use a standard report form in conducting inspections, and inspect for cleanliness, documentation of licensing records, and Funeral Rule compliance, among other things. In addition, according to officials representing the Tennessee Department of Commerce and Insurance, inspectors have compared the price lists to invoices for a select number of sales to ensure that the funeral home is charging according to its price lists. Officials stated that this type of comparison is not done during FTC sweeps, but told us that these types of inspections can help to uncover problems. Consumer complaints and violations. According to officials representing the Tennessee Department of Commerce and Insurance, they received 21 complaints regarding the death care industry in 2008, 30 in 2009, and 54 in 2010, but they did not break this information down by industry segment. According to officials, the most common violations they encounter during inspections are (1) overcharging, (2) improper wording, or (3) direct cremation sales that were also charged basic service fees. Officials further stated that since 2008, they have taken 210 disciplinary actions against the segments they regulate. Licensing requirements. Cemeteries are required to register to operate in the state unless they are otherwise exempt. The following cemeteries are exempt: (1) cemeteries owned by municipalities; (2) cemeteries owned by churches, associations of churches, or church governing bodies; (3) cemeteries owned by religious organizations; (4) family burial grounds; and (5) cemeteries owned by general welfare corporations. Cemetery applicants are required, among other things, to (1) pay a filing fee and (2) show proof of a cemetery map showing all interment sites. Cemeteries are required to renew their licenses every year. 
Officials representing the Department of Commerce and Insurance told us that Tennessee tried to pattern many of its cemetery rules after the FTC’s Funeral Rule in an effort to level the playing field for the funeral home and cemetery segments of the death care industry. Inspection and audit requirements. The Commissioner of the Department of Commerce and Insurance is responsible for auditing cemetery records at least once every 2 years. In addition, cemeteries are required to keep record of (1) every burial that shows the date, name, and location and (2) every interment site or right sold. All cemeteries that apply for a new registration after 2007 must develop and maintain a cemetery map that shows the location of sites for interment. Consumer complaints and violations. According to officials representing the Tennessee Department of Commerce and Insurance, they received 21 complaints regarding the death care industry in 2008, 30 in 2009, and 54 in 2010, but they did not break this information down by industry segment. Licensing requirements. A crematory may not operate until it has been issued a license as a funeral establishment. This was also the case in 2003, as reported by the state regulator who responded to our survey. Crematory operators do not have to be licensed, but according to officials representing the Tennessee Department of Commerce and Insurance, crematories must employ at least one full-time licensed funeral director who manages the crematory. Crematory applicants are required to, among other things, pay an application fee and provide a list of all employees. Crematories are required to renew their licenses every 2 years. According to officials representing the Tennessee Department of Commerce and Insurance, there were 43 crematories operating in Tennessee. Inspections and audit requirements. The Board of Funeral Directors and Embalmers is required to inspect crematories once a year. 
Board inspectors use a standard report form in conducting their inspections. The form includes items such as whether cremation records are acceptable. Consumer complaints and violations. According to officials representing the Tennessee Department of Commerce and Insurance, they received 21 complaints regarding the death care industry in 2008, 30 in 2009, and 54 in 2010, but they did not break this information down by industry segment. Officials told us that the most common violations they see during inspections are (1) overcharging, (2) improper wording, or (3) direct cremation sales that were also charged basic service fees. Officials further stated that since 2008, they have taken 210 disciplinary actions against the segments they regulate. Cremation rate. According to the Cremation Association of North America, Tennessee had a 23 percent cremation rate in 2009. Licensing requirements. Sellers of pre-need plans are required to be registered to operate in the state. To register, sellers must complete an application and pay a fee. Pre-need sellers are required to renew their registration every 2 years. According to officials representing the Tennessee Department of Commerce and Insurance, there were 495 pre-need sellers operating in Tennessee. Inspection and audit requirements. According to an official representing the Department of Commerce and Insurance, the Commissioner of the Department of Commerce and Insurance must conduct annual examinations of pre-need sellers to ensure that each seller will be able to perform its contract with the consumer. The commissioner may also investigate or examine the affairs of any pre-need seller whenever it is deemed appropriate. In addition, every pre-need seller is required to keep and maintain, at a minimum, accurate accounts, books, and records of all pre-need contracts and insurance policy transactions. A trustee is required to keep records of, among other things, the receipt of funds and all disbursements.
Pre-need sellers and trustees are required to file an annual report with the commissioner that includes a summary of the information contained in the accounts, books, and records. Contract and trusting requirements. Various contract and trusting requirements exist in the state of Tennessee. Trust-funded, insurance-funded, revocable, irrevocable, guaranteed, and nonguaranteed pre-need funeral plan contracts are all permitted in Tennessee. Prior to its use, each pre-need contract sold must contain certain information and disclosures. Required information or disclosures include (1) a statement as to whether the contract establishes a revocable trust account or an irrevocable trust account, (2) a complete disclosure of the pricing arrangement and of any contingent liabilities or costs of the buyer, and (3) a disclosure that the trustee will pay any balance remaining in the trust fund after payment for the funeral merchandise and services in accordance with the pre-need contract. Pre-need contracts must be filed with and approved by the Commissioner of the Department of Commerce and Insurance. Pre-need sellers are required to trust 100 percent of the sale of funeral goods and services and 120 percent of the cost of cemetery goods. A trustee is required to invest at least 50 percent of the moneys paid and placed in a pre-need funeral contract trust in the following: (1) demand deposits, (2) savings accounts, (3) certificates of deposit, or (4) other accounts issued by financial institutions. A trustee cannot withdraw the funds for any purpose other than payment for merchandise or service. However, the trustee can use income from the trust account to pay applicable taxes and reasonable expenses related to the administration of the trust. Funds deposited in trust under a pre-need funeral contract may, with the written permission of the consumer and written approval of the commissioner, be withdrawn by the trustee and used to purchase an insurance policy.
Any funds left over in a trust after services are provided are required to be refunded to the purchaser, the purchaser’s estate, or an otherwise named beneficiary. Consumer protection accounts. A cemetery consumer protection account and a pre-need funeral consumer protection account exist in Tennessee. According to officials representing the Tennessee Department of Commerce and Insurance, both accounts are funded through a $20 fee assessed on each pre-need contract. Officials explained that for both accounts, half of the funds that are collected are used to support the general operation and expenses for Burial Services and the other half of the funds are used to support any receivership action initiated by the commissioner against a pre-need seller or cemetery in accordance with applicable law. Officials further stated that neither account is used to directly reimburse a consumer whose seller or trustee lost the funds the consumer contributed to a pre-need plan. According to a Tennessee industry association, there should be a true consumer protection account where the funds are set aside for consumer restitution purposes only. Funds invested in pre-need trusts. Tennessee did not provide information on whether the state tracked the amount of funds invested in pre-need plans. Consumer complaints and violations. According to officials representing the Tennessee Department of Commerce and Insurance, they received 21 complaints regarding the death care industry in 2008, 30 in 2009, and 54 in 2010, but they did not break this information down by industry segment. According to state regulators we surveyed in 2003 and 2011, other than rules or regulations that generally apply to all businesses, Tennessee does not have rules or regulations in place that specifically address third party sellers of funeral goods. Although there are no specific rules or regulations, third party sellers are subject to other state laws.
In January 2010, a Tennessee court permanently enjoined the defendant from engaging in the sale of cemetery goods and services and revoked any licenses he possessed to engage in the cemetery goods and services business in the state because of his “persistent and knowing violations of the Tennessee Consumer Protection Act.” The defendant was an Internet seller of cemetery goods who had failed to deliver items sold to consumers. Although the exact number of consumers affected and the total dollar amount is unknown, the final judgment of the court indicates that consumers had made 126 complaints to various agencies. The final judgment also concluded that the defendant had committed at least 3,600 violations of the Tennessee Consumer Protection Act. The final judgment provided that the state may seek restitution on behalf of consumers and other persons for ascertainable losses. Various changes to Tennessee’s state laws regarding the death care industry have been made since 2003. According to officials representing the Tennessee Department of Commerce and Insurance, Tennessee enacted major revisions of funeral, cemetery, and pre-need laws and regulations in 2007 and 2008. Officials stated that these rewrites were in reaction to a pre-need incident in their state, but that in rewriting their laws, they also tried to be proactive and address any other issues that could arise. The pre-need incident involved the looting of about $20 million from pre-need trusts in Tennessee (see more on this incident in app. II). Changes included requiring state or commissioner approval for a (1) change in trustee, (2) cemetery sale, and (3) pre-need contract; and creating a cemetery consumer protection account and a pre-need consumer protection account. In addition, other changes have gone into effect in Tennessee since 2003.
All funeral establishments selling agreements, contracts, or plans for pre-need funeral services, including those that are funded through insurance, must be registered with the Commissioner of the Department of Commerce and Insurance and be subject to an annual audit. Cemeteries must maintain a cemetery map detailing the location of interment sites. Cemetery operators must make all reasonable efforts to notify known family or next of kin of a deceased individual if the operator has knowledge that the human remains of the deceased were placed in the wrong burial site. In Wisconsin, the Department of Regulation and Licensing regulates all segments of the death care industry. Within the department, there is a Funeral Directors Examining Board and a Cemetery Board that help regulate the funeral, cemetery, and pre-need segments. The Funeral Directors Examining Board and the Cemetery Board both have six member positions, two of which are required to be consumer representatives. The department has 4 staff, in addition to 14 staff from the department’s enforcement office who work on death care issues. However, these staff have other responsibilities and are not solely dedicated to death care issues. In Wisconsin, cemeteries are prohibited from being affiliated with funeral homes. Licensing requirements. Funeral directors and embalmers are required to be licensed to operate in the state, which was also the case in 2003, as reported by the Wisconsin state regulator who responded to our survey. Funeral homes must obtain a permit to operate in the state. Prospective funeral director or embalmer licensees are required, among other things, to (1) pay an application fee, (2) complete 2 academic years of instruction in a recognized college or university, (3) complete 9 months or more of instruction in mortuary science, (4) complete a 1-year apprenticeship, and (5) pass an exam. Funeral directors and embalmers are required to renew their licenses every 2 years.
Each licensee must complete 15 hours of approved continuing education during each licensing period. According to a state regulator, in November 2011 there were 514 funeral homes operating in Wisconsin. Inspection and audit requirements. According to the state regulator who responded to our 2011 survey, funeral homes are inspected if the state receives a complaint. According to an official representing a Wisconsin consumer association, the state does little to monitor the industry. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey on the regulation of funeral homes, the state received approximately 81 consumer complaints in 2008, 72 in 2009, and 48 in 2010. Based on data from the Department of Regulation and Licensing, complaints included those related to (1) licensing issues, such as unlicensed practices; (2) unprofessional conduct; and (3) pricing issues, such as being charged an incorrect amount. The state regulator also reported that there were 314 violations since 2008 and that the state issued various enforcement actions that included 13 letters of reprimand, 2 fines, 7 probations, 3 relinquishments, 3 suspensions of licenses, and 2 revocations of licenses. Licensing requirements. Cemeteries are not required to be licensed, but some cemetery operators are required to be registered or licensed to operate in the state. In 2003, as reported by the Wisconsin state regulator who responded to our survey, cemeteries were also not required to be licensed but cemetery operators had to be licensed. Specifically, a cemetery operator who (1) operates a cemetery that is 5 acres or more in size, (2) sells 20 or more cemetery lots or mausoleum spaces during a calendar year, or (3) has $100,000 or more in trust fund accounts is required to be licensed.
A cemetery operator that (1) operates a cemetery that is less than 5 acres in size, (2) sells fewer than 20 cemetery lots or mausoleum spaces during a calendar year, or (3) has less than $100,000 in trust fund accounts for a cemetery is required to be registered. However, a cemetery operator of a cemetery organized, maintained, and operated by any of the following is exempt from registration or licensing requirements: a town; village; city; church; synagogue or mosque; religious, fraternal or benevolent society; or incorporated college of a religious order. Prospective cemetery operator applicants must pay an application fee for licensing or registration. Licensed or registered cemetery operators are required to renew their licenses every 2 years. According to the state regulator who responded to our 2011 survey on the regulation of cemeteries, there were approximately 480 cemeteries subject to regulation operating in Wisconsin. Officials representing the Wisconsin Department of Regulation and Licensing further stated that the total number of cemeteries in the state is not known, but that the ones that are required to be licensed perform about 60 to 70 percent of all services in the state. Inspection and audit requirements. The Department of Regulation and Licensing has the authority to inspect or audit cemeteries and cemetery authority records, among others, and may do so randomly. According to the state regulator who responded to our 2011 survey, inspections are done when the applicant for licensing first applies and when the state receives a complaint. Licensed or registered cemetery operators are required to submit an annual report to the Department of Regulation and Licensing. In addition, cemeteries are required to provide price lists to consumers. Consumer complaints and violations.
According to the state regulator who responded to our 2011 survey on the regulation of cemeteries, the state received approximately 10 consumer complaints in 2008, 15 in 2009, and 3 in 2010. Based on data from the Department of Regulation and Licensing, during this time complaints included those related to (1) price issues, such as overcharging; (2) maintenance concerns; and (3) monument issues, such as placing the incorrect date on a monument. Further, according to the state regulator who responded to our 2011 survey, since 2008, there were approximately 56 violations, and the state took various enforcement actions, including one letter of reprimand, two assessments of investigative costs, one relinquishment of a license, and one revocation of a license. Licensing requirements. Crematory operators are required to register to operate in the state. There are no requirements that a crematory be registered or licensed. To apply for registration, crematory operators are required, among other things, to (1) pay an application fee and (2) provide a description of the equipment that will be used. Crematory operators are required to renew their registrations every 2 years. According to the state regulator who responded to our 2011 survey, there were approximately 98 crematories operating in the state. Inspections and audit requirements. According to the state regulator who responded to our 2011 survey, inspections are done if there is a complaint filed with the Department of Regulation and Licensing. In addition, a crematory operator is required to keep records of each cremation performed. Consumer complaints and violations. According to the state regulator who responded to our 2011 survey on the regulation of crematories, the state received approximately one consumer complaint in 2008, none in 2009, and three in 2010.
Based on data from the department, complaints received from 2007 through March 2011 included those related to (1) environmental concerns, such as black smoke being emitted from the facility, and (2) not obtaining proper authorization for cremation. Further, according to the state regulator who responded to our survey, since 2008, there were approximately 16 violations, with the most common violation being related to fraud or deceptive practices. Cremation rate. According to the Cremation Association of North America, Wisconsin had a 42 percent cremation rate in 2009. Licensing requirements. Sellers of pre-need plans are required to be licensed to operate in the state, which was also the case in 2003 as reported by the Wisconsin state regulator who responded to our survey. Sellers of pre-need plans must renew their licenses every 2 years. Inspection and audit requirements. Pre-need sellers may be required to file an annual report, which is to include accounting of all amounts deposited and withdrawn from pre-need accounts. The Department of Regulation and Licensing is required to review such reports. Contract and trusting requirements. Various contract and trusting requirements exist in the state of Wisconsin. Trust-funded, insurance-funded, revocable, irrevocable, guaranteed, and nonguaranteed pre-need funeral plan contracts are all permitted in Wisconsin. According to officials representing the Wisconsin Department of Regulation and Licensing, insurance-funded plans are more common in the funeral segment and trust-funded plans are more common in the cemetery segment of the industry. Pre-need sellers are required to trust either an amount equal to at least 40 percent of each payment of principal that is received from the sale of cemetery merchandise under a pre-need sales contract into a pre-need trust fund, or the wholesale cost ratio for the cemetery merchandise multiplied by the amount of the payment of principal that is received, whichever is greater. 
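The "whichever is greater" cemetery trusting rule described above can be expressed as a simple calculation. The following is an illustrative sketch only; the function and variable names are hypothetical and are not drawn from Wisconsin statute.

```python
def wi_cemetery_trust_deposit(payment: float, wholesale_cost_ratio: float) -> float:
    """Hypothetical sketch of the trusting rule described above: for each
    payment of principal on pre-need cemetery merchandise, trust the greater
    of (a) 40 percent of the payment or (b) the wholesale cost ratio for the
    merchandise multiplied by the payment."""
    return max(0.40 * payment, wholesale_cost_ratio * payment)

# When the wholesale cost ratio exceeds 40 percent, it governs;
# otherwise the 40 percent floor applies.
print(wi_cemetery_trust_deposit(1000.0, 0.55))  # 550.0
print(wi_cemetery_trust_deposit(1000.0, 0.25))  # 400.0
```

In other words, the 40 percent figure acts as a floor: a seller whose wholesale cost exceeds 40 percent of the sale price must trust the larger wholesale-based amount.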
For the pre-need sale of funeral goods and services, 100 percent of funds must be trusted with either a bank or trust company within the state whose deposits are insured by the Federal Deposit Insurance Corporation, deposited in a savings and loan association or savings bank within the state whose deposits are insured by the Federal Deposit Insurance Corporation, or invested in a credit union within the state whose savings are insured by the national board. For the pre-need sale of funeral goods or services, funds must remain in trust, including interest and dividends, if any, until the death of the potential decedent, unless they are released upon demand to the depositor (the purchaser of the pre-need goods or services) after written notice is provided to the beneficiary (the pre-need seller). A request by a cemetery authority to transfer funds to a different trustee must be approved by the Department of Regulation and Licensing. Consumer protection accounts. According to the Wisconsin state regulator who responded to our survey, no consumer protection fund exists in Wisconsin. Funds invested in pre-need trusts. According to the state regulator who responded to our survey, Wisconsin tracks the amount of money invested in pre-need plans. Wisconsin did not provide data on the amount of money currently invested in these plans. Consumer complaints and violations. Officials representing the Department of Regulation and Licensing told us that they receive few complaints. Third party sales are subject to some degree of regulation, but sellers are not required to be licensed to operate in the state. The state regulatory official we surveyed reported various changes to state laws and regulations regarding the death care industry. 
As reported by the Wisconsin state regulator who responded to our 2011 surveys, changes included those that imposed stricter licensing requirements and enhanced consumer protections, and these changes moderately strengthened the state’s regulatory program. The state regulator who responded to our survey reported that these changes were a result of proposals from state agencies. Changes included the following bills that were passed. The Cemetery Registration & Consumer Protection Act, passed in 2007, brought more cemeteries under regulation. According to officials representing the Wisconsin Department of Regulation and Licensing, under this act, about 1,200 to 1,500 cemeteries now fall under registration or licensing requirements. Officials compared this to 1991, when only 5 cemeteries were required to be licensed. Officials also stated that the genesis for the legislative expansion was a general recognition by the state legislature that it was appropriate to have more oversight of cemeteries and that the effort to pass the law was spearheaded by a Wisconsin industry association. Assembly Bill 485, passed in 2006, required that except for performing funeral services, the business of a funeral director must be conducted in a funeral establishment that has been issued a permit by the examining board. Assembly Bill 75, passed in 2006, created the Crematory Authority Council and required that crematory authorities be registered. Assembly Bill 100, passed in 2005, created the Cemetery Board as an oversight mechanism for cemeteries. In addition to the contact named above, John Mortin, Assistant Director; Tracey Cross; Dorian Dunbar; Stuart Kaufman; Thomas Lombardi; Jessica Orr; Minette Richardson; and Greg Wilmoth made significant contributions to this report.

Media reports have identified instances of desecration of graves and human remains at cemeteries, and in one instance, reported that bodies were removed from graves and the sites resold.
Allegations have also surfaced about the mismanagement of pre-need plans that are designed to provide consumers the opportunity to fund funeral and cemetery arrangements before they are needed. The FTC's Funeral Rule requires that, among other things, funeral providers give consumers lists that disclose the cost of funeral goods and services before they enter into funeral transactions. Proposed legislation introduced in March 2011 would increase the federal government's role in regulating the industry by, among other things, requiring that the FTC regulate aspects of cemetery operations. GAO was asked to review the regulation of the death care industry. This report discusses (1) how federal and state governments regulate the industry and how regulation has changed since 2003 and (2) state regulators' views on the need for additional regulation. GAO reviewed FTC's Funeral Rule and interviewed officials representing the FTC and national industry and consumer associations; surveyed state officials to gather data on state regulation of the death care industry; and, where possible, compared the results of the 2011 surveys with those of similar surveys GAO conducted in 2003. The response rate for our 2011 surveys ranged from 78 to 84 percent. GAO also reviewed laws and regulations. GAO is not making any recommendations in this report. The extent to which the federal and state governments regulate the death care industry (funeral homes, cemeteries, crematories, pre-need funeral plans, and third party sales of funeral goods) varies, as does the extent to which regulation has changed since GAO last reported on the regulation of the death care industry in 2003. The Federal Trade Commission (FTC) continues to annually conduct undercover shopping at various funeral homes to test compliance with the Funeral Rule. Of the over 2,400 funeral homes that the FTC shopped since 1996, the FTC reported an overall compliance rate of about 85 percent.
With respect to state regulation, consistent with GAO's findings in 2003, the way in which states regulate the industry varies across industry segments and states. Also, the extent to which state regulators reported that they had specific rules or regulations for each industry segment in both 2003 and 2011 varied. Most consistent across states in both years was reporting that there were specific rules or regulations for funeral homes (94 and 95 percent in 2003 and 2011, respectively). In contrast, 77 percent of state regulators of cemeteries reported that their states had specific rules or regulations for cemeteries in 2003, and 88 percent reported this in 2011. Certain state regulators also reported that their states made various statutory or regulatory changes since 2003, primarily to clarify legislation or regulation or to enhance consumer protections, and that they believe these changes strengthened their regulatory program to varying degrees. State regulators reported that these changes came about for a variety of reasons, including accounts of desecration of human remains or proposals from state agencies and industry groups. State regulators' views on the need for additional federal and state regulation of the industry varied, as shown in the figure below. The FTC provided technical comments, which GAO incorporated where appropriate.
Although State has not yet formally defined what constitutes a soft target, State Department travel warnings and security officers generally consider soft targets to be places where Americans and other westerners live, congregate, shop, or visit, such as hotels, clubs, restaurants, shopping centers, housing compounds, places of worship, schools, or public recreation events. Travel routes of U.S. government employees are also considered soft targets, based on their history of terrorist attacks. The State Department is responsible for protecting more than 60,000 government employees, and their family members, who work in embassies and consulates abroad in 180 countries. Although the host nation is responsible for providing protection to diplomatic personnel and missions under the 1961 Vienna Convention, State has a variety of programs and activities to further protect U.S. officials and family members both inside and outside of the embassy. Following a terrorist attack that involves serious injury or loss of life or significant destruction of a U.S. government mission, State is required to convene an Accountability Review Board (ARB). ARBs investigate the attack and issue a report with recommendations to improve security programs and practices. State is required to report to Congress on actions it has taken in response to ARB recommendations. As of March 2005, there have been 11 ARBs convened since the board’s establishment in 1986. Concerned that State was not providing adequate security for U.S. officials and their families outside the embassy, the American Foreign Service Association testified on a number of occasions before the Senate Appropriations Subcommittee on Commerce, Justice, State and the Judiciary on the need for State to expand its security measures. The subcommittee, in its 2002 and subsequent reports, urged State to formulate a strategy for addressing threats to locales abroad that are frequented by U.S. officials and their families. 
It focused its concern about soft targets on schools, residences, places of worship, and other popular gathering places. In fiscal years 2003, 2004, and 2005, Congress earmarked a total of $15 million for soft target protection each year, particularly to address security vulnerabilities at overseas schools. Moreover, in 2005, the Senate appropriations report directed State to develop a comprehensive strategy for addressing the threats posed to soft targets no later than June 1, 2005. State has a number of programs and activities designed to protect U.S. officials and their families outside the embassy, including security briefings, protection at schools and residences, and surveillance detection. However, State has not developed a comprehensive strategy that clearly identifies safety and security requirements and resources needed to protect U.S. officials and their families. State officials cited several complex issues involved with protecting soft targets. As the terrorist threat grows, State is being asked to provide ever greater levels of protection to more people in more dangerous locations, and officials questioned how far State’s protection of soft targets should extend. They said that providing U.S. government funds to protect U.S. officials and their families at private sector locations or places of worship was unprecedented and raised a number of legal and financial challenges—including sovereignty and separation of church and state—that have not been resolved by the department. State officials also indicated they have not yet fully defined the universe of soft targets—including taking an inventory of potentially vulnerable facilities and areas where U.S. officials and their families congregate—that would be necessary to complete a strategy. Although State has not developed a comprehensive soft target strategy, some State officials told us that several existing programs could help protect soft targets.
However, they agreed that these existing programs are not tied together in an overall strategy. State officials agreed that they should undertake a formal evaluation of how existing programs can be more effectively integrated as part of a soft target strategy, and whether new programs might be needed to fill any potential gaps. A senior official with State’s Bureau of Diplomatic Security (DS) told us that in January 2005, DS formed a working group to develop a comprehensive soft targets strategy to address the appropriate level of protection of U.S. officials and their families at schools, residences, and other areas outside the embassy. According to State, the strategy should be completed by June 1, 2005. To identify vulnerabilities in State’s soft target protection, and determine if State had corrected these vulnerabilities, we reviewed the ARB reports conducted after U.S. officials were assassinated outside the embassy. Of the 11 ARBs conducted since 1986, 5 have focused on soft target attacks, more than the number focused on attacks against embassies (2) or other U.S. facilities (4). We found that, 17 years after the first soft target ARB, State has still not addressed the vulnerabilities and recommendations identified in that and more recent reports: specifically, the need for hands-on counterterrorism training and accountability mechanisms to promote compliance with personal security procedures. Despite State’s assurances to Congress that it would implement recommendations aimed at reducing these vulnerabilities, we found that State’s hands-on training course is still not mandatory, and procedures to monitor compliance with security requirements have not been fully implemented. We also found that ambassadors, deputy chiefs of mission, and regional security officers were not trained in how to implement embassy procedures intended to protect U.S. officials outside the embassies.
Since 1988, State has reported to Congress that it agreed with ARB recommendations to provide counterterrorism training. For example, in 1995, State reported that it “re-established the Diplomatic Security Antiterrorism Course (DSAC) for those going to critical-threat posts to teach surveillance detection and avoidance, and defensive and evasive driving techniques.” In 2003, State reported it agreed with the recommendations that employees from all agencies should receive security briefings and indicated that it would review the adequacy of its training and other personal security measures. Although State implemented the board’s recommendation to require security briefings for all staff, hands-on counterterrorism training is still not mandatory, and few officials or family members have taken DSAC. Senior DS officials said they recognize that security briefings are no longer adequate to protect against current terrorist threats. In June 2004, DS developed a proposal to make DSAC training mandatory. DS officials said that DSAC training should be required for all officials, but that issues such as costs and adequacy of training facilities were constraining factors. As of April 18, 2005, the proposal had not been approved. Although State has agreed on the need to implement an accountability system to promote compliance with personal security procedures since 1988, there is still no such system in place. Beginning in 2003, State has tried to incorporate some limited accountability to promote compliance. However, based on our work at five posts, we found that post officials are following few, if any, of these new procedures. In response to a 2003 ARB, State took a number of steps to improve compliance with State’s personal security procedures for officials outside the embassy. 
In June 2003, State revised its annual assessment criteria to take personal security into account when preparing performance appraisals, and in December 2003, State revised its Foreign Affairs Manual to mandate and improve implementation of personal security practices. In May 2004, State notified posts worldwide on use of a Personal Security Self-Assessment Checklist to improve security outside the embassy. However, none of the posts we visited were even aware of these and other key policy changes. For example, none of the officials we met with, including ambassadors, deputy chiefs of mission, regional security officers, or staff, were aware that the annual ratings process now includes an assessment of whether staff are following the personal security measures or that managers are now responsible for the reasonable oversight of subordinates’ personal security activities. Furthermore, none of the supervisors were aware of the checklist, and we found no one was using the checklists to improve their personal security practices. In explaining why posts were not aware of the new personal security regulations, DS officials noted that posts were often overwhelmed by work and may have simply missed the cables and changes in the Foreign Affairs Manual. They also noted that changes like this take time to be implemented globally. Furthermore, State’s original plan, to use the checklist as an accountability mechanism, was dropped before it was implemented. In its June 2003 report to Congress on implementation of the 2003 ARB recommendations, State stipulated that staff would be required to use the checklist periodically and that managers would review the checklists to ensure compliance. However, State never implemented this accountability mechanism out of concern it would consume too much staff time. We also found that key officials receive no training on how to promote personal security outside the embassy. 
According to a number of State officials, improvements in this area must start with the ambassador and the deputy chief of mission. Yet no ambassadors, deputy chiefs of mission, or regional security officers receive any training in how to maximize soft target protection at embassies. DS officials agreed that this critical component should be added to their training curriculum.

In response to several congressional committee reports, State began developing a “Soft Targets” program in 2003 to help protect overseas schools against terrorism. The program has four proposed phases. The first two phases focus on department-sponsored schools that have previously received grant funding from the State Department; the third and fourth phases focus on nondepartment-sponsored schools with American students. In phase one, department-sponsored schools were offered funding for basic security hardware such as shatter-resistant window film, two-way radios for communication between the school and the embassy, and public address systems. As of November 19, 2004, 189 department-sponsored schools had received $10.5 million in funding for security equipment under phase one of the program. The second phase provided additional security enhancements, such as perimeter fencing, walls, lighting, gates, and guard booths. As of November 2004, State had obligated over $15 million for phase two security upgrades. For phases three and four, State plans to provide similar types of security upgrades to eligible nondepartment-sponsored schools. The program also funds security enhancements for off-compound embassy employee association facilities, such as recreation centers. These upgrades include funding for perimeter walls and shatter-resistant window film. In fiscal year 2004, almost $1 million was obligated for these enhancements.
Regional security officers (RSO) said that identifying and funding security enhancements at department-sponsored schools was straightforward because of the department’s preexisting relationship with these schools. However, they said it has been difficult to identify eligible nondepartment-sponsored schools for phase three because of the vast number of schools that might qualify, the lack of any preexisting relationship, and limited guidance on eligibility criteria. For example, some RSOs questioned how many American students must attend a school for it to be eligible for security upgrades. Some RSOs were considering funding schools with just a few American students. Moreover, one RSO was considering providing security upgrades to informal educational facilities, such as those attended by the children of U.S. missionaries.

State is trying to determine the appropriate scope of the program and sent cables to posts in the summer of 2004 asking RSOs to gather data on nondepartment-sponsored schools attended by American students, particularly U.S. government dependents. State officials acknowledged that gathering these data has been difficult because there are hundreds of such schools worldwide. According to an Overseas Buildings Operations (OBO) official, as of December 2004, only about 81 of the more than 250 posts had responded regarding such schools. OBO plans to use the data to develop criteria for which schools might be eligible for funding under phase three and, eventually, phase four of the program. In anticipation of future phases of the Soft Targets program, RSOs have been asked to identify other facilities and areas that Americans frequent, beyond schools and off-compound employee association facilities, that may be vulnerable to a terrorist attack. State Department officials were concerned about the large number of sites RSOs could identify as potential soft targets and about the department’s ability to protect them.
State is responsible for providing a secure housing environment for U.S. officials and their families overseas. However, we found that State’s primary program for protecting U.S. officials and their families at their residences, the Residential Security program, is principally designed to deter crime, not terrorism. The program includes basic security hardware and guard services; as the crime threat increases, the hardware and guard services at residences can be increased correspondingly. State officials said that while the Residential Security program, augmented by the local guard program, provides effective deterrence against crime, it provides limited or no deterrence against a residential terrorist attack. State officials told us that the best residential scenario for posts is to have a variety of housing options, including apartments and single-family homes, to reduce the potential for a catastrophic attack.

To provide greater protection against terrorist attacks, most posts we visited used surveillance detection teams in residential areas. The program is intended to enhance embassies’ ability to detect preoperational terrorist surveillance and stop an attack. According to State’s guidance, surveillance detection units are primarily designed to protect embassies, and their use in residential areas is discouraged. However, we found that RSOs at some of the posts we visited were routinely using surveillance detection units to cover areas outside the embassies, such as residences, school bus stops and routes, and schools attended by U.S. embassy dependents. RSOs told us that the Surveillance Detection program is instrumental in deterring potential terrorist attacks, and they argued that the current program guidelines are too restrictive.
Senior State officials agreed that the use of surveillance detection in soft target areas could be beneficial, but they noted that the program is labor intensive and expensive, and that any expansion of the program could require significant funding.

Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Diana Glod at (202) 512-8945. Individuals making key contributions to this testimony included Edward George and Andrea Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

U.S. government officials working overseas are at risk from terrorist threats. Since 1968, 32 embassy officials have been attacked--23 fatally--by terrorists outside the embassy. As the State Department continues to improve security at U.S. embassies, terrorist groups are likely to focus on “soft” targets--such as homes, schools, and places of worship. GAO was asked to determine whether State has a strategy for soft target protection; assess State’s efforts to protect U.S. officials and their families while traveling to and from work; assess State’s efforts overseas to improve security at schools attended by the children of U.S. officials; and describe issues related to protection at their residences. State has a number of programs and activities designed to protect U.S. officials and their families outside the embassy, including security briefings, protection at schools and residences, and surveillance detection.
However, State has not developed a comprehensive strategy that clearly identifies the safety and security requirements and the resources needed to protect U.S. officials and their families abroad from terrorist threats outside the embassy. State officials raised a number of challenges related to developing and implementing such a strategy. They also indicated that they have recently initiated an effort to develop a soft targets strategy. As part of this effort, State officials said they will need to address and resolve a number of legal and financial issues. Three State-initiated investigations into terrorist attacks against U.S. officials outside embassies found that the officials lacked the hands-on training needed to help counter the attacks. The investigations recommended that State provide hands-on counterterrorism training and implement accountability measures to ensure compliance with personal security procedures. After each of these investigations, State reported to Congress that it planned to implement the recommendations, yet we found that State’s hands-on training course is not required, the accountability procedures have not been effectively implemented, and key embassy officials are not trained to implement State’s counterterrorism procedures. State instituted a program in 2003 to improve security at schools, but its scope has not yet been fully determined. In fiscal years 2003 and 2004, Congress earmarked $29.8 million for State to address security vulnerabilities against soft targets, particularly at overseas schools. The multiphase program provides basic security hardware to protect U.S. officials and their families at schools and some off-compound employee association facilities from terrorist threats. However, during our visits to posts, regional security officers were unclear about which schools could qualify for security assistance under phase three of the program. State's program to protect U.S.
officials and their families at their residences is primarily designed to deter crime, not terrorism. The Residential Security program includes basic security hardware and local guards, which State officials said provide effective deterrence against crime, though only limited deterrence against a terrorist attack. To minimize the risk and consequences of a residential terrorist attack, some posts we visited limited the number of U.S. officials living in specific apartment buildings. To provide greater protection against terrorist attacks, some posts we visited used surveillance detection teams in residential areas.